Thursday, October 20, 2016
The Cult of the Expert – and How It Collapsed
On Tuesday 16 September 2008, early in the afternoon, a self-effacing professor with a neatly clipped beard sat with the president in the Roosevelt Room of the White House. Flanked by a square-shouldered banker who had recently run Goldman Sachs, the professor was there to tell the elected leader of the world’s most powerful country how to rescue its economy. Following the bankruptcy of one of the nation’s storied investment banks, a global insurance company was now on the brink, but drawing on a lifetime of scholarly research, the professor had resolved to commit $85bn of public funds to stabilising it.
The sum involved was extraordinary: $85bn was more than the US Congress spent annually on transportation, and nearly three times as much as it spent on fighting Aids, a particular priority of the president’s. But the professor encountered no resistance. “Sometimes you have to make the tough decisions,” the president reflected. “If you think this has to be done, you have my blessing.”
Later that same afternoon, Federal Reserve chairman Ben Bernanke, the bearded hero of this tale, showed up on Capitol Hill, at the other end of Pennsylvania Avenue. At the White House, he had at least been on familiar ground: he had spent eight months working there. But now Bernanke appeared in the Senate majority leader’s conference room, where he and his ex-Wall Street comrade, Treasury secretary Hank Paulson, would meet the senior leaders of both chambers of Congress. A quiet, balding, unassuming technocrat confronted the lions of the legislative branch, armed with nothing but his expertise in monetary plumbing.
Bernanke repeated his plan to commit $85bn of public money to the takeover of an insurance company.
“Do you have 85bn?” one sceptical lawmaker demanded.
“I have 800bn,” Bernanke replied evenly – a central bank could conjure as much money as it deemed necessary.
But did the Federal Reserve have the legal right to take this sort of action unilaterally, another lawmaker inquired?
Yes, Bernanke answered: as Fed chairman, he wielded the largest chequebook in the world – and the only counter-signatures required would come from other Fed experts, who were no more elected or accountable than he was. Somehow America’s famous apparatus of democratic checks and balances did not apply to the monetary priesthood. Their authority derived from technocratic virtuosity.
When the history is written of the revolt against experts, September 2008 will be seen as a milestone. The $85bn rescue of the American International Group (AIG) dramatised the power of monetary gurus in all its anti-democratic majesty. The president and Congress could decide to borrow money, or raise it from taxpayers; the Fed could simply create it. And once the AIG rescue had legitimised the broadest possible use of this privilege, the Fed exploited it unflinchingly. Over the course of 2009, it injected a trillion dollars into the economy – a sum equivalent to nearly 30% of the federal budget – via its newly improvised policy of “quantitative easing”. Time magazine anointed Bernanke its person of the year. “The decisions he has made, and those he has yet to make, will shape the path of our prosperity, the direction of our politics and our relationship to the world,” the magazine declared admiringly.
The Fed’s swashbuckling example galvanised central bankers in all the big economies. Soon Europe saw the rise of its own path-shaping monetary chieftain, when Mario Draghi, president of the European Central Bank, defused panic in the eurozone in July 2012 with two magical sentences. “Within our mandate, the ECB is ready to do whatever it takes to preserve the euro,” he vowed, adding, with a twist of Clint Eastwood menace, “And believe me, it will be enough.” For months, Europe’s elected leaders had waffled ineffectually, inviting hedge-fund speculators to test the cohesion of the eurozone. But now Draghi was announcing that he was badder than the baddest hedge-fund goon. Whatever it takes. Believe me.
In the summer of 2013, when Hollywood rolled out its latest Superman film, cartoonists quickly seized upon a gag that would soon become obvious. Caricatures depicted central-bank chieftains decked out in Superman outfits. One showed Bernanke ripping off his banker’s shirt and tie, exposing that thrilling S emblazoned on his vest. Another showed the bearded hero hurtling through space, red cape fluttering, right arm stretched forward, a powerful fist punching at the void in front of him. “Superman and Federal Reserve chairman Ben Bernanke are both mild-mannered,” a financial columnist deadpanned. “They are both calm, even in the face of global disasters. They are both sometimes said to be from other planets.”
At some point towards the middle of the decade, shortly before the cult of the expert smashed into the populist backlash, the shocking power of central banks came to feel normal. Nobody blinked an eye when Haruhiko Kuroda, the head of Japan’s central bank, created money at a rate that made his western counterparts seem timid. Nobody thought it strange when Britain’s government, perhaps emulating the style of the national football team, conducted a worldwide talent search for the new Bank of England chief. Nobody was surprised when the winner of that contest, the telegenic Canadian Mark Carney, quickly appeared in newspaper cartoons in his own superman outfit. And nobody missed a beat when India’s breathless journalists described Raghuram Rajan, the new head of the Reserve Bank of India, as a “rock star”, or when he was pictured as James Bond in the country’s biggest business newspaper. “Clearly I am not a superman,” Rajan modestly responded.
If Bernanke’s laconic “I have 800bn” moment signalled a new era of central-banking power, Rajan’s “I am not a superman” wisecrack marked its apotheosis. And it was a high watermark for a wider phenomenon as well, for the cult of the central banker was only the most pronounced example of a broader cult that had taken shape over the previous quarter of a century: the cult of the expert. Even before Bernanke rescued the global economy, technocrats of all stripes – business leaders, scientists, foreign and domestic policy wonks – were enthralled by the notion that politicians might defer to the authority of experts armed with facts and rational analysis. Those moments when Bernanke faced down Congress, or when Draghi succeeded where bickering politicians had failed, made it seem possible that this technocratic vision, with its apolitical ideal of government, might actually be realised.
The key to the power of the central bankers – and the envy of all the other experts – lay precisely in their ability to escape political interference. Democratically elected leaders had given them a mission – to vanquish inflation – and then let them get on with it. To public-health experts, climate scientists and other members of the knowledge elite, this was the model of how things should be done. Experts had built Microsoft. Experts were sequencing the genome. Experts were laying fibre-optic cable beneath the great oceans. No senator would have his child’s surgery performed by an amateur. So why would he not entrust experts with the economy?
by Sebastian Mallaby, The Guardian | Read more:
Image: Ben Bernanke via:
I Hope Haruki Murakami Wins the Nobel Prize - And Will Be Thrilled When He Doesn't
[ed. This is exactly how I've felt about every Murakami novel I've ever read (and I've read nearly all of them). This was published just before Bob Dylan won the Nobel prize for Literature this year (you notice he's been kind of quiet lately?... maybe a little conflicted about accepting an honor from the inventor of dynamite and other lethal weapons?). Anyway, I do hope Mr. Murakami wins it one of these years because he really does have a masterful narrative style, but I'll still be baffled by what it all means.]
Well, it’s Nobel season again, and with it the annual ritual of speculating and gambling over who will win the literature prize.
Every year, Haruki Murakami’s name comes up. This year The Guardian reports he’s the 4/1 favorite.
Every year I’m both disappointed and relieved when he doesn’t win.
Disappointed because—well, ethnic pride. He’s Japanese, and I’m sort of Japanese.
Kenzaburo Ōe was the last Japanese literature Nobelist, and that was more than two decades ago. There’s only been one other literature laureate from Japan, the great Yasunari Kawabata, in 1968. Sure, I’m biased, but that seems like an oversight, although certainly not the only such oversight in the prize’s history (cf., only 14 women among 112 laureates, no black African winner since Wole Soyinka in 1986, etc.).
I’m also disappointed because he’s a writer whose work I actually know. I’ve read more Murakami than I have of any of the other writers on these annual lists—probably more than any of the top five to ten also-rans put together. If a writer you know wins, there’s this largely unearned but nevertheless pleasurable feeling of personal validation. Oh yes, you think, I’ve read that writer! I felt that way when Ōe won. And Lessing. And especially Munro. I wouldn’t mind feeling that way again.
And I do admire Murakami’s work. Some of it. I often like his short stories. And the novel excerpts published as short stories in The New Yorker, like “The Zoo Attack” and “Another Way to Die,” two excerpts from The Wind-Up Bird Chronicle that still haunt me 20 years after I read them. I quite liked after the quake, his collection of short fiction that reflects, in ways direct and indirect, on the devastating 1995 Kobe earthquake. My students are reading “U.F.O. in Kushiro” this week, and I can’t wait to talk about it. I appreciate the obsession with disasters both natural and human-made, the latter explored in Underground, his non-fiction book about the sarin gas attack on the Tokyo subway system (also 1995, a bad year in Japan).
So yes, I’m disappointed when Murakami doesn’t win.
But mostly I feel relieved because—okay, don’t hate me, but—I can’t stand the novels.
I know, I know, I know. Heresy! He’s the darling of the literary world! No other Japanese writer in my lifetime is likely to command so much attention in the West! He was really nice that one time I met him in Berkeley, more than 20 years ago, before he became so famous here! His novels are so weird and compelling and cool!
Yes, so weird and compelling and cool. But for me, reading a Murakami novel is a lot like eating a party-sized bag of potato chips by myself in one sitting. The bag is so enticing, and the potato chips look so good. The first one I crunch down is delicious, and the next one is pretty good too, and the next one and the next one. Before I know it, I’ve eaten the entire bag. But now I just feel gross and full of self-loathing. I didn’t even enjoy the last 30 potato chips, which were greasy and salty and nasty. I ate them because they were there. Because I wanted to recapture the taste sensation that was the first chip. Because I thought for some reason there would be a prize at the bottom of the bag. Even though I’ve eaten through many bags of potato chips, and there’s never a prize at the bottom.
So it is with Murakami’s novels. I love the inventive set-ups, the pell-mell zaniness, the quotable zingers. I love the international flavor—the pasta, the jazz, the references to Chekhov and Bashō and Janáček, oh my! If I leaf through my copies of his books, I can see where I’ve penciled “Whoa” and “Creepy” and “Yes!” in the margins. But my comments gradually betray my growing frustration: “Duh” and “I don’t buy this” and “Enough with the brand names already” and “I’m really tired of the plot hinging on someone’s ‘sixth sense’” and “This contradicts p. 165” and “Wait. What?” I love their parts, like the excerpts I mention above. But the whole is always somehow less than the sum of those parts. I devoured The Wind-Up Bird Chronicle when it appeared in English, but was left scratching my head afterward, wondering what I’d missed.
Novel after novel seems perversely to manipulate reasonable reader expectations, deploying plot elements that go nowhere and details that seem to be placed simply for kicks or shock value. All too often the books read like first drafts written straight through from beginning to end with no backward glance, as if the author forgot what he was up to between writing sessions or changed course a few times and didn’t realize it or care. No one else seems to notice these things or mind. I feel like the little boy in the fairytale pointing at the emperor and saying, “But… but… but… he’s naked?” (...)
At this point you’re probably yelling at the screen, Jesus, if you dislike his work so much, just stop reading it!
This is easier said than done, as it turns out. I’ve tried, really. Every time I read a Murakami novel, I say, Okay, that’s it. No more Murakami. I’m done.
But then another book comes out in translation, and there I am, munching down on those greasy, high-calorie chips as if they’re the best thing ever, then feeling bloated and pissed off afterward.
by Naomi J. Williams, LitHub | Read more:
Image: uncredited
Wednesday, October 19, 2016
The Exploding Helicopter Clause
I used to take pride in how quickly I could read. Because I was committed to story, to discovering what happened next, I turned the pages so swiftly they made a breeze on my face. In college, for the first time, I deliberately slowed down. Because despite all the books I had gobbled up, I didn’t understand the careful carpentry of storytelling. Reading became less of an emotional experience and more of a mechanical inquiry. I kept a pen in hand, scribbling so many notes the pages of my books appeared spider-webbed. (...)
I came across an essay a few years ago called “How We Listen to Music” by the composer Aaron Copland. He identifies three planes of listening. The first, the sensuous plane, is the simplest. “You listen for the sheer pleasure of the musical sound itself.” I think it’s safe to say that this is the way most people dial in to the radio—when blasting down the freeway or washing dishes in their kitchen—for background noise, something to tap their feet to, a way to manipulate their mood, to escape. I think it is also safe to say that this is the way most people read. Stories and music have that same potent, primitive force. We bend an ear toward them as distractions from the everyday.
The second plane he calls the expressive. The listener leans forward instead of leaning back. They discern the expressive power of the notes and lyrics. Are there Satanic messages and Lord of the Rings references nested in “Stairway to Heaven”? What does Bob Dylan mean when he sings, “Woozle wazzle weezel whoa”? What is the piece trying to say? What is the piece about?
The third plane most listeners are not conscious of, what Copland identifies as the sheerly musical. The way music “does exist in terms of the notes themselves and their manipulation.” The rhythm, the melody, the harmonies, the tone colors—the principles of musical form and orchestration—what you can only identify through training and deep concentration.
Not all at once, but slowly, slowly, like a snake shedding its skin, I broke through each of these planes as a writer by first becoming a strenuous reader, able to engage with a text with critical literacy. Whereas before, I was committed purely to the sensuous, I could now recognize the larger orchestration of notes, the mechanics of the component parts. (...)
These days, literary fiction is largely owned by the academy, and academics are obsessed with taxonomy. Go to the Associated Writing Programs (AWP) conference some time if you want proof of this. Most of the panels consist of people trying to figure out what to call something—postmodernism, new masculinity, magical realism, post-industrialism—Midwest writer, mother writer, Asian writer, Caribbean writer, war writer—and whatever that label might require. I know it makes people feel better in a neat-freaky sort of way. Like balling their socks and organizing them in a drawer according to color. And I know it’s a talking point, a frame for discussion. But really, you nerdy fussbudget, when you start to worry over whether someone is literary or genre, or literary crossover (whatever that means), you are devoting valuable brain energy to something that ultimately doesn’t matter. These are phantom barricades that serve only to restrict. (...)
When hiking in the woods, I would strike a tree with a stick three times and tell my sister that was how you called Bigfoot. When playing on the beach, I imagined the long tuberous seaweed as the tentacles of a kraken. When eating at a restaurant, the waiters and the chef became cannibals who in the kitchen kept a storage locker full of bodies from which they hacked steaks and chops. I am different, and it is this difference that compels me to propose an aesthetic barometer. Let’s call it the Exploding Helicopter clause.
If a story does not contain an exploding helicopter, an editor will not publish it, no matter how pretty its sentences and orgasmic its epiphany might be. The exploding helicopter is an inclusive term that may refer to, but is not limited to, giant sharks, robots with lasers for eyes, pirates, poltergeists, were-kittens, demons, slow zombies, fast zombies, talking unicorns, probe-wielding Martians, sexy vampires, barbarians in hairy underwear, and all forms of apocalyptic and post-apocalyptic mayhem.
I’m joking, but I’m not. I’m embracing what so many journals and workshops seem allergic to . . . Go ahead. Complain about genre. You’re allowed. The worst of it features formulaic plots, pedestrian language, paper-thin characters, gender and ethnic stereotypes and a general lack of diversity. I, too, cringe and stifle a laugh when I read lines like this one: “Renowned curator Jacques Saunière staggered through the vaulted archway of the museum’s Grand Gallery.”
But while we’re at it, let’s complain about literary fiction. The worst of it features a pile of pretty sentences that add up to nothing happening. Maybe a marital spat is followed by someone drinking tea and remembering some distant precious moment and then gazing out the window at a roiling bank of clouds that gives them a visual counterpoint to their heart-trembling, loin-shivering epiphany.
It’s easy to grouse and make fun. Flip the equation and study what works best instead. Literary fiction highlights exquisite sentences, glowing metaphors, subterranean themes, fully realized characters. And genre fiction excels at raising the most important question: what happens next? What happens next? is why most people read. It’s what made us fall in love with books and made some of us hope to write one of our own some day, though we may have forgotten that if we’ve fallen under the indulgent spell of our pretty sentences.
Toss out the worst elements of genre and literary fiction—and merge the best. We might then create a new taxonomy, so that when you walk into the bookstore, the stock is divided according to “Stories that suck” and “Stories that will make your mind and heart explode with their goodness.”
by Benjamin Percy, LitHub | Read more:
Image: uncredited
The Scientists Who Make Apps Addictive
Earlier this year I travelled to Palo Alto to attend a workshop on behaviour design run by Fogg on behalf of his employer, Stanford University. Roaming charges being what they are, I spent a lot of time hooking onto Wi-Fi in coffee bars. The phrase “accept and connect” became so familiar that I started to think of it as a Californian mantra. Accept and connect, accept and connect, accept and connect.
I had never used Uber before, and since I figured there is no better place on Earth to try it out, I opened the app in Starbucks one morning and summoned a driver to take me to Stanford’s campus. Within two minutes, my car pulled up, and an engineering student from Oakland whisked me to my destination. I paid without paying. It felt magical. The workshop was attended by 20 or so executives from America, Brazil and Japan, charged with bringing the secrets of behaviour design home to their employers.
Fogg is 53. He travels everywhere with two cuddly toys, a frog and a monkey, which he introduced to the room at the start of the day. Fogg dings a toy xylophone to signal the end of a break or group exercise. Tall, energetic and tirelessly amiable, he frequently punctuates his speech with peppy exclamations such as “awesome” and “amazing”. As an Englishman, I found this full-beam enthusiasm a little disconcerting at first, but after a while, I learned to appreciate it, just as Europeans who move to California eventually cease missing the seasons and become addicted to sunshine. Besides, Fogg was likeable. His toothy grin and nasal delivery made him endearingly nerdy.
In a phone conversation prior to the workshop, Fogg told me that he read the classics in the course of a master’s degree in the humanities. He never found much in Plato, but strongly identified with Aristotle’s drive to organise and catalogue the world, to see systems and patterns behind the confusion of phenomena. He says that when he read Aristotle’s “Rhetoric”, a treatise on the art of persuasion, “It just struck me, oh my gosh, this stuff is going to be rolled out in tech one day!”
In 1997, during his final year as a doctoral student, Fogg spoke at a conference in Atlanta on the topic of how computers might be used to influence the behaviour of their users. He noted that “interactive technologies” were no longer just tools for work, but had become part of people’s everyday lives: used to manage finances, study and stay healthy. Yet technologists were still focused on the machines they were making rather than on the humans using those machines. What, asked Fogg, if we could design educational software that persuaded students to study for longer or a financial-management programme that encouraged users to save more? Answering such questions, he argued, required the application of insights from psychology.
Fogg presented the results of a simple experiment he had run at Stanford, which showed that people spent longer on a task if they were working on a computer which they felt had previously been helpful to them. In other words, their interaction with the machine followed the same “rule of reciprocity” that psychologists had identified in social life. The experiment was significant, said Fogg, not so much for its specific finding as for what it implied: that computer applications could be methodically designed to exploit the rules of psychology in order to get people to do things they might not otherwise do. In the paper itself, he added a qualification: “Exactly when and where such persuasion is beneficial and ethical should be the topic of further research and debate.”
Fogg called for a new field, sitting at the intersection of computer science and psychology, and proposed a name for it: “captology” (Computers as Persuasive Technologies). Captology later became behaviour design, which is now embedded into the invisible operating system of our everyday lives. The emails that induce you to buy right away, the apps and games that rivet your attention, the online forms that nudge you towards one decision over another: all are designed to hack the human brain and capitalise on its instincts, quirks and flaws. The techniques they use are often crude and blatantly manipulative, but they are getting steadily more refined, and, as they do so, less noticeable.
Fogg’s Atlanta talk provoked strong responses from his audience, falling into two groups: either “This is dangerous. It’s like giving people the tools to construct an atomic bomb”; or “This is amazing. It could be worth billions of dollars.”
The second group has certainly been proved right. Fogg has been called “the millionaire maker”. Numerous Silicon Valley entrepreneurs and engineers have passed through his laboratory at Stanford, and some have made themselves wealthy.
Fogg himself has not made millions of dollars from his insights. He stayed at Stanford, and now does little commercial work. He is increasingly troubled by the thought that those who told him his ideas were dangerous may have been on to something.

by Ian Leslie, 1843 | Read more:
Image: Bill Butcher
My Drunk Kitchen Creator Hannah Hart on Life as a YouTube Star
[ed. I love Hannah Hart. One of my favorite MDK episodes takes place at Burning Man.]
One might guess Hannah Hart aspired to celebrity. But according to her, it all started by accident.

Hart got in on the ground floor, before YouTube became this highly curated land of sponsored content, late night TV clips, and Vevo view tickers. In 2011, weird was good and a little bit sad was even better. In 2016, her followers are still hanging out with her in the kitchen, probably in part because it’s a holdover from a better time (and Hart is still very, very funny).
The Verge spoke to her recently about why it was time to write about her life, how fame makes you responsible for other people, and why she’ll never stop getting drunk and making food.
This interview has been edited for clarity and length.
How did you start making the My Drunk Kitchen videos?
In early 2011, I moved from San Francisco to New York to be a proofreader at a translation firm. I was working nights and weekends because my specialty was East Asian languages. One day I was Gchatting with a friend of mine because Gchat had just added a video feature and I had just gotten a laptop with a camera in it for the first time. My friend, who was my roommate, was like “Man, I miss you, I miss just hanging out, you’ve been gone three months.” And I was like “I miss you too, I’m gonna make a video for you right now where I just do a cooking show and get drunk and cook.”
So, I opened up Photo Booth, recorded it onto Photo Booth, imported it into iMovie, chopped it up and sent it to her.
So you put in on YouTube, or she put it on YouTube?
I sent it to her via YouTube, because that is a way you send video files. There was like Send File or something like that [on Mac] but she didn’t even have a Mac. And remember when you had to convert files to work on different [video players]? So I put it on YouTube so she could watch it.
I really can’t imagine something like that today going viral.
It also didn’t go viral by today’s standards. My Drunk Kitchen episode one didn’t get a million views, it got like 80,000. And I was like “WHAT?” It was truly bizarre. But then people were like, “This is my new favorite show on YouTube” and I was like “... show on YouTube. What are you talking about?” I wasn’t a fan of YouTube culture, I didn’t know that people like the Fine Brothers existed at all. I didn’t know that people were putting shows online.
It’s become very common for people to say that the Wild West days of YouTube are over. Do you think that’s true? If you started My Drunk Kitchen today, what would happen?
I think that one of the reasons that I’m so grateful for the channel and for the community and the way it’s evolved to this point is that in 2011 it was still so not intentional and people didn’t really have goals of becoming a quote-unquote YouTube Star. So, now I think the landscape is pretty oversaturated in terms of the amount of people who are on it. That being said, I think if your goal is to be famous then I don’t know if there’s ever any amount of views that’s going to be satisfying to you. When I started on YouTube, people were making stuff because we were like “Hey, cool we have this free space to make stuff.” Now people are like, “If I don’t get a million views, it’s not a success.” And that makes me sad, for the current creator’s space.
Do you think YouTube is less of an accurate cultural cross-section than it was eight years ago?
It’s an entertainment platform now. But the good news is that there are tons of really great, innovative, entertaining channels out there and ideas out there. I really want to stress that I love that YouTube allows space where people can just create content and post it. Like, have you ever watched Hydraulic Press Channel?
Yes! It’s so weird.
But it’s so satisfying! There would never be, there’s no room for that on television. No one would ever make that a TV show. But Hydraulic Press Channel is great. So in that way, YouTube is still a really valuable space even if it isn’t exactly what it used to be. That’s my official stance. (...)
What I love about My Drunk Kitchen is that I feel like it embraces the way that loneliness can be sad but can also be creative and productive and joyful. What do you think broadly people find appealing about just watching someone get drunk and cook?
I like to think of it like this: if YouTube is a house party, there’s going to be different parts of a house party that appeal to different people. When you walk in the door and you see people break-dancing in the living room, you’re going to look at it and be like “Wow, those people are break-dancing.” That’s one of those popular, big, you-can’t-resist-looking-at-it types of channels. There are going to be people who are more like, talking shit, saying “I feel this about this!” And then there would be people playing games, people around beer pong, stuff like that. My channel is for the people who want to hang out in the kitchen. That’s where I hang out when I’m at a party. If I’m at a house party I go into the kitchen because it’s a little bit quieter, you’re still drinking, you’re having fun, but it’s kind of a space where you have good conversations. It’s that quality that makes it more appealing than just the drinking and just the comedy, I think it’s the intimacy.
So how did you realize, I can do this, this could be my thing, and I’m going to dedicate time to it?
It was never like that. It was more like, “Oh, cool, that was kind of fun, I can make another?” Two and a half weeks later I posted another one. And then I was like “Cool! I can make another.” Then two and a half weeks after that I was like “I don’t really want to be known for being drunk,” so I made a video that wasn’t about that. And that was it. I just enjoyed it more and more. It takes up more and more of your time. I took a plunge, I was like “I’m gonna get rid of my apartment so I don’t have to pay the rent, I’m gonna sleep on my friends’ couches, and I’m going to see if it’s going to go somewhere.” It wasn’t like “Great, I’m a superstar.” People always ask, “How did you know?” But I just want to shout it from the rooftops, sometimes you don’t know.
Has producing the show stayed pretty much the same since the beginning?
Before I came on this trip, I set up my camera in my kitchen, got drunk, and filmed a video that I’m going to post on Thursday. Every time I get interviewed by traditional media outlets, they’re always like “So your crew...” and I’m like “I don’t have a crew.” And they’re like “Really?” And I’m like “... have you watched it?” You think there’s a crew behind that? Like, somebody rolling sound? Maybe I wouldn’t have forgotten to turn the mic on so many times if that were the case.
by Kaitlyn Tiffany, The Verge | Read more:
Image: Hannah Hart
Tuesday, October 18, 2016
Warren Builds Political Capital
[ed. I can't wait to vote for Elizabeth Warren as President. The only thing better would be to see her on the Supreme Court, and that won't happen unless both the House and Senate get a complete overhaul. Sorry to all Bernie supporters and what might have been, he was well intentioned but had no leverage in Congress. Elizabeth does.]

Already Warren has been laying down markers for Clinton, in public and private, to consider activist progressives over Wall Street allies for appointments to key financial positions like Treasury secretary. The months to come will tell whether Warren serves as ally, antagonist, or both, to a new Democratic president and leadership in Congress.
Warren's stature has never been more evident. The wind-down of Vermont Sen. Bernie Sanders' presidential campaign has left her onstage as arguably the most influential liberal politician in the country.
She gets rock-star treatment from Democrats everywhere she goes. "This is bucket list territory. ... She is a hero!" Judy Baker, Democratic candidate for Missouri state treasurer, shouted to an excited crowd in Kansas City, Missouri, before Warren appeared last Friday with Senate candidate Jason Kander.
She's emerged as one of Donald Trump's most pointed antagonists, attacking him over Twitter and goading him into labeling her Pocahontas, a reference to her disputed claim of Native American heritage.
And hacked emails from Clinton's campaign chairman, John Podesta, show just how anxious the Clinton team has been about keeping her happy. In one email, campaign manager Robby Mook frets about how it would be "such a big deal" for an early meeting between Warren and Clinton to go well. In another exchange, Clinton adviser Dan Schwerin details a lengthy meeting with Warren's top aide, Dan Geldon, in which Geldon makes the case for progressive appointments to financial positions.
It all underscores Warren's role as what allies call the "north star" of the Democratic Party. Thanks to Sanders' candidacy and her influence, many Democrats say the party's center of gravity has moved to the left, away from centrist policies on health care and entitlements in favor of embracing expanded Social Security, a higher minimum wage, debt-free college and a new government insurance option in Obama's health law.
Now the question is how Warren, 67, will use her influence if Clinton becomes president. With Sen. Chuck Schumer set to become the Democratic leader in the Senate, the party would have two New Yorkers with Wall Street ties in top roles.
At the same time, a whole group of Democratic senators from red states like North Dakota, West Virginia and Montana will be up for election in 2018. Will liberal policies on wages, tuition and other issues resonate in those states?
"The way I see this, Hillary Clinton has run on the most progressive agenda in decades, so I think it's the job of progressives like me to help her get elected on that agenda and then help her enact that agenda," Warren said in a brief phone interview Friday in Missouri.
As for her advocacy on appointments, Warren said: "There's no 'hell no' list. But I'll say the same thing publicly that I've said privately - personnel is policy. Hillary Clinton needs a team around her that is ambitious about using the tools of government to make this economy work better for middle class families. That happens only if she has the right people around her."
by Erica Werner, AP | Read more:
Image: Pete Marovich/ZUMA Press
In Refrigerators, Tomatoes Lose Flavor at the Genetic Level
The tomato hitching a ride home in your grocery bag today is not the tomato it used to be. No matter if you bought plum, cherry or heirloom, if you wanted the tastiest tomato, you should have picked it yourself and eaten it immediately.
That’s because a tomato’s flavor — made up of sugars, acids and chemicals called volatiles — degrades as soon as it’s picked from the vine. There’s only one thing you can do now: Keep it out of the fridge.
Researchers at The University of Florida have found in a study published Monday in Proceedings of the National Academy of Sciences that when tomatoes are stored at the temperature kept in most refrigerators, irreversible genetic changes take place that erase some of their flavors forever.
Harry J. Klee, a professor of horticultural sciences who led the study, and his colleagues took two varieties of tomatoes — an heirloom and a more common modern variety — and stored them at 41 degrees Fahrenheit before letting them recover at room temperature (68 degrees Fahrenheit). When they looked at what happened inside the tomatoes in cold temperatures, Dr. Klee said the subtropical fruit went into shock, producing especially damaging changes after a week of storage. After they were allowed to warm up, even for a day, some genes in the tomatoes that created its flavor volatiles had turned off and stayed off. (...)
But this research may seem mostly academic. The average American consumes nearly 20 pounds of fresh tomatoes a year. And despite researchers, industries and farmers all striving to create the tastiest tomatoes, there are some things we can’t yet control.
After all, most of the tomatoes we eat out of season are plucked from their vines probably in Florida or Mexico, just as they started to ripen. They are sorted, sized, graded and packed into a box with other tomatoes, totaling 25 pounds. Then they stay in a humidity and temperature-controlled room (no less than 55 degrees Fahrenheit) and ingest ethylene, a gas to make them ripen, for two to four days before being transported on a temperature-controlled truck to a warehouse. There they are repackaged, re-sorted and shipped to your grocer. There, if demand is low or if there’s no room, they may be stored in a fridge, and by the time you get them, it’s been a week to ten days.
“It’s probably never going to equal the one that matured in your backyard over the 80 or 90 days that you grew it, but it beats stone soup,” said Reggie Brown, a manager at the Florida Tomato Committee, which produces up to half of America’s fresh tomatoes in the winter.
by Joanna Klein, NY Times | Read more:
Image: Fred Tanneau/Agence France-Presse — Getty Images
Is the Self-Driving Car UnAmerican?
“If I were asked to condense the whole of the present century into one mental picture,” the novelist J. G. Ballard wrote in 1971, “I would pick a familiar everyday sight: a man in a motor car, driving along a concrete highway to some unknown destination. Almost every aspect of modern life is there, both for good and for ill — our sense of speed, drama, and aggression, the worlds of advertising and consumer goods, engineering and mass-manufacture, and the shared experience of moving together through an elaborately signaled landscape.” In other words: Life is a highway. And the highway, Ballard believed, was a bloody, beautiful mess.
At the time, Ballard was still a relatively obscure science-fiction writer whose novels portrayed a future beset by profound ecological crises (drought, flood, hurricane winds) and psychotic outbursts of violence. His work notably lacked the kinds of gleaming gadgetry that decorated most sci-fi. But by the turn of the 1970s, he had begun developing an obsession with one technology in particular: the old-fashioned automobile. Cars had deep, mythic resonances for him. He had grown up a coddled kid in colonial Shanghai, where a chauffeur drove him to school in a big American-made Packard. When he was 11, during the Second World War, the Japanese invaded Shanghai and the car was confiscated, reducing the family to riding bicycles. A few years later, his world shrank once again when he was interned in a Japanese concentration camp, where he remained for over two years. He emerged with a visceral horror of barbed wire and a love for “mastodonic” American automobiles (and American fighter jets, which he called “the Cadillacs of air combat”).
For Ballard, the car posed a beguiling paradox. How could it be such an erotic object, at once muscular and voluptuous, virginal and “fast,” while also being one of history’s deadliest inventions? Was its popularity simply a triumph of open-road optimism — a blind trust that the crash would only ever happen to someone else? Ballard thought not. His hunch was that, on some level, drivers are turned on by the danger, and perhaps even harbor a desire to be involved in a spectacular crash. A few years later, this notion would unfurl, like a corpse flower, into Crash, his incendiary novel about a group of people who fetishize demolished cars and mangled bodies.
Over the course of a century, Ballard wrote, the “perverse technology” of the automobile had colonized our mental landscape and transformed the physical one. But he sensed that the car’s toxic side effects — the traffic, the carnage, the pollution, the suburban sprawl — would soon lead to its demise. At some point in the middle of the 21st century, he wrote, human drivers would be replaced with “direct electronic control,” and it would become illegal to pilot a car. The sensuous machines would be neutered, spayed: stripped of their brake pedals, their accelerators, their steering wheels. Driving, and with it, car culture as we know it, would end. With the exception of select “motoring parks,” where it would persist as a nostalgic curiosity, the act of actually steering a motor vehicle would become an anachronism.
The finer details of his prediction now appear quaint. For example, he believed that the steering wheel would be replaced by a rotary dial and an address book, allowing riders to “dial in” their destination. The car would then be controlled via radio waves emitted by metal strips in the road. “Say you were in Toronto and you dial New York, and a voice might reply saying, ‘Sorry, New York is full. How about Philadelphia, or how about Saskatoon?’ ” (Back then, the notion was not as far-fetched as it sounds; American engineers worked to invent a “smart highway” from the 1930s all the way until the 1990s.) Ballard failed to foresee that it would be cars, not highways, that would one day become radically smarter, their controls seized not by Big Brother but by tech bros. In 2014, in a move that would have horrified Ballard, Google unveiled its first fully self-driving car, which has been shorn of its steering wheel and given an aggressively cute façade, like a lobotomized Herbie The Love Bug.
In Ballard’s grim reckoning, the end of driving would be just one step in our long march toward the “benign dystopia” of rampant consumerism and the surveillance state, in which people willingly give up control of their lives in exchange for technological comforts. The car, flawed as it was, functioned as a bulwark against “the remorseless spread of the regimented, electronic society.” “The car as we know it now is on the way out,” Ballard wrote. “To a large extent I deplore its passing, for as a basically old-fashioned machine it enshrines a basically old-fashioned idea — freedom.” (...)
The potential benefits of such a world are far-reaching. Self-driving cars could grant the freedom of mobility to an increasingly elderly and infirm population (not to mention children and pets and inanimate objects) for whom driving is not an option. Since human error accounts for more than 90 percent of car accidents, driverless cars have the potential to save millions of lives each year. Fewer accidents mean fewer traffic jams, and less traffic means less pollution. A new ecosystem of driverless futurists has sprouted up to calculate the technology’s effects on urbanism (the end of parking!), work-life balance (the end of dead time!), the environment (the end of smog!), public health (the end of drunken driving!), and manufacturing (the end of the automobile workforce as we know it!).
But these are just slivers of the vast changes that will take place — culturally, politically, economically, and experientially — in the world of the driverless car. Stop for a moment to consider the magnitude of this transformation: Our republic of drivers is poised to become a nation of passengers.
The experience of driving a car has been the mythopoeic heart of America for half a century. How will its absence be felt? We are still probably too close to it to know for sure. Will we mourn the loss of control? Will it subtly warp our sense of personal freedom — of having our destiny in our hands? Will it diminish our daily proximity to death? Will it scramble our (too often) gendered, racialized notions of who gets to drive which kinds of cars? Will middle-aged men still splurge on outlandishly fast (or, at least, fast-looking) self-driving vehicles? Will young men still buy cheap ones and then blow their paychecks tricking them out? If we are no longer forced to steer our way through a traffic jam, will it become less existentially frustrating, or more? What will become of the cinematic car chase? What about the hackneyed country song where driving is a metaphor for life? Will race-car drivers one day seem as remotely seraphic to us as stunt pilots? Will we all one day assume the entitled air of the habitually chauffeured?
by Robert Moor, NY Mag/Select/All | Read more:
Image: Mary Evans/Ronald Grant/Everett Collection (Wayne's World).

Labels: Business, Culture, Psychology, Relationships, Technology
Monday, October 17, 2016
The Secret of Excess
[ed. In honor of Mario's selection as chef for the Obamas' last State Dinner at the White House. My son and I used to watch cooking shows all the time, before the Food Channel really got going: Graham Kerr, Sara Moulton, Emeril Lagasse... but I always liked Mario the best, for his awesome insights into Italian cooking and his cool guitar playing at the end of each episode.]

On trips to Italy made with his Babbo co-owner, Joe Bastianich, Batali has been known to share an entire case of wine during dinner, and, while we didn’t drink anything like that, we were all infected by his live-very-hard-for-now approach and had more than was sensible. I don’t know. I don’t really remember. There was also the grappa and the nocino, and one of my last recollections is of Batali around three in the morning—back arched, eyes closed, an unlit cigarette dangling from his mouth, his red Converse high-tops pounding the floor—playing air guitar to Neil Young’s “Southern Man.” Batali had recently turned forty, and I remember thinking that it was a long time since I’d seen a grown man playing air guitar. He then found the soundtrack for “Buena Vista Social Club,” tried to salsa with one of the guests (who promptly fell over a sofa), tried to dance with her boyfriend (who was unresponsive), and then put on a Tom Waits CD and sang along as he went into the kitchen, where, with a machinelike speed, he washed the dishes and mopped the floor. He reminded me that we had an arrangement for the next day—he’d got tickets to a New York Giants game, courtesy of the commissioner of the N.F.L., who had just eaten at Babbo—and disappeared with three of my friends. They ended up at Marylou’s, in the Village—in Batali’s description, “a wise-guy joint where you get anything at any time of night, none of it good.”
It was nearly daylight when he got home, the doorman of his apartment building told me the next day as the two of us tried to get Batali to wake up: the N.F.L. commissioner’s driver was waiting outside. When Batali was roused, forty-five minutes later, he was momentarily perplexed, standing in his doorway in his underwear and wondering why I was there. Batali has a remarkable girth, and it was a little startling to see him so clad, but within minutes he had transformed himself into the famous television chef: shorts, high-tops, sunglasses, his red hair pulled back into a ponytail. He had become Molto Mario—the many-layered name of his cooking program, which, in one of its senses, means, literally, Very Mario (that is, an intensified Mario, an exaggerated Mario, and an utterly over-the-top Mario)—and a figure whose renown I didn’t fully appreciate until, as guests of the commissioner, we were allowed on the field before the game. Fans of the New York Giants are happy caricatures (the ethic is old-fashioned blue-collar, even if they’re corporate managers), and I was surprised by how many of them recognized the ponytailed chef, who stood on the field facing them, arms crossed over his chest, and beaming. “Hey, Molto!” one of them shouted. “What’s cooking, Mario?” “Mario, make me a pasta!” On the East Coast, “Molto Mario” is on twice a day (at eleven-thirty in the morning and five-thirty in the afternoon). I had a complex picture of the metropolitan working male—policeman, Con Ed worker, plumber—rushing home to catch lessons in how to braise his broccoli rabe and get just the right forked texture on his homemade orecchiette. (Batali later told me that when the viewing figures for his show first came in they were so overwhelmingly male that the producers thought they weren’t going to be able to carry on.) I stood back, with one of the security people, taking in the spectacle (by now a crowd was chanting “Molto! Molto! Molto!”)— this proudly round man, whose whole manner said, “Dude, where’s the party?”
“I love this guy,” the security man said. “Just lookin’ at him makes me hungry.”
Mario Batali arrived in New York in 1992, when he was thirty-one. He had two hundred dollars, a duffelbag, and a guitar. Since then, he has become the city’s most widely recognized chef and, almost single-handedly, has changed the way people think about Italian cooking in America. The food he prepares at Babbo, which was given three stars by the Times when the restaurant opened, in 1998, is characterized by intensity—of ingredients, of flavor—and when people talk of it they use words like “heat” and “vibrancy,” “exaggeration” and “surprise.” Batali is not thought of as a conventional cook, in the business of serving food for profit; he’s in the much murkier enterprise of stimulating outrageous appetites and satisfying them aggressively. (In Batali’s language, appetites blur: a pasta made with butter “swells like the lips of a woman aroused,” roasted lotus roots are like “sucking the toes of the Shah’s mistress,” and just about anything powerfully flavored—the first cherries of the season, the first ramps, a cheese from Piedmont—“gives me wood.”) Chefs are regular visitors and are subjected to extreme versions of what is already an extreme experience. “We’re going to kill him,” Batali said to me with maniacal glee as he prepared a meal for Wylie Dufresne, the former chef of 71 Clinton, who had ordered a seven-course tasting menu, to which Batali then added a lethal-seeming number of impossible-to-resist extra courses. The starters (variations, again, in the key of pig) included a plate of lonza (the cured backstrap from one of Batali’s cream-apple-and-walnut-fattened pigs); a plate of coppa (made from the same creamy pig’s shoulder); a fried pig foot; a porcini mushroom, stuffed with garlic and thyme, and roasted with a piece of Batali’s own pancetta (cured pig belly) wrapped around its stem; plus (“just for the hell of it”) tagliatelle topped with guanciale (cured pig jowls), parsnips, and black truffle. A publisher who was fed by Batali while talking to him about booking a party came away vowing to eat only soft fruit and water until he’d recovered: “This guy knows no middle ground. It’s just excess on a level I’ve never known before—it’s food and drink, food and drink, food and drink, until you start to feel as though you’re on drugs.” This spring, Mario was trying out a new motto, borrowed from the writer Shirley O. Corriher: “Wretched excess is just barely enough.”
by Bill Buford, New Yorker | Read more:
Image: Ruven Afanador
The Meaning of Open Trade and Open Borders
Near the end of his 1817 treatise, “On the Principles of Political Economy and Taxation,” David Ricardo advanced the “law of comparative advantage,” the idea that each country—not to mention the world that countries add up to—would be better off if each specialized in the thing it did most efficiently. Portugal may be more productive than Britain in both clothmaking and winemaking; but if Portugal is comparatively more productive in winemaking than clothmaking, and Britain the other way around, Portugal should make the wine, Britain the cloth, and they should trade freely with one another. The math will work, even if Portuguese weavers will not, at least for a while—and even if each country’s countryside will come to seem less pleasingly variegated. The worker, in the long run, would be compensated, owing to “a fall in the value of the necessaries on which his wages are expended.” Accordingly, Ricardo argued in Parliament for the abolition of Britain’s “corn laws,” tariffs on imported grain, which protected the remnants of the landed aristocracy, along with their rural retainers. Those tariffs were eventually lifted in 1846, a generation after his death; bread got cheaper, and lords got quainter.
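A back-of-the-envelope sketch may make Ricardo's arithmetic concrete. The labor costs below are the stock textbook figures usually attributed to his Portugal-and-Britain example, used here purely as an illustration rather than as anything drawn from the article:

# Illustrative sketch of comparative advantage (textbook-style labor costs,
# in hours needed to make one unit of each good; figures are illustrative).
hours = {
    "Portugal": {"wine": 80, "cloth": 90},   # better at both goods...
    "Britain":  {"wine": 120, "cloth": 100}, # ...but comparatively better at cloth
}
labor_available = {"Portugal": 170, "Britain": 220}  # total hours per country

def world_output(allocation):
    # allocation: country -> {good: units produced}
    return {g: sum(alloc[g] for alloc in allocation.values()) for g in ("wine", "cloth")}

# Self-sufficiency: each country spends its hours making one unit of each good.
self_sufficient = {c: {"wine": 1.0, "cloth": 1.0} for c in hours}

# Specialization: Portugal makes only wine, Britain only cloth.
specialized = {
    "Portugal": {"wine": labor_available["Portugal"] / hours["Portugal"]["wine"], "cloth": 0.0},
    "Britain":  {"wine": 0.0, "cloth": labor_available["Britain"] / hours["Britain"]["cloth"]},
}

print(world_output(self_sufficient))  # {'wine': 2.0, 'cloth': 2.0}
print(world_output(specialized))      # {'wine': 2.125, 'cloth': 2.2} -- more of both

With the same total labor, the world ends up with more wine and more cloth, which is the whole of Ricardo's claim; who captures the surplus, and how the displaced weavers are compensated, is where the argument resumes.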
The case for free trade, embodied in deals like the North American Free Trade Agreement, the Permanent Normal Trade Relations with China, and the proposed Trans-Pacific Partnership, remains at bottom Ricardo’s. And the long run is still working out pretty much as he assumed. The McKinsey Global Institute, or M.G.I., reported in 2014 that some twenty-six trillion dollars in goods, services, and financial investments crossed borders in 2012, representing about thirty-six percent of global G.D.P. The report, looking country by country, reckoned that burgeoning trade added fifteen to twenty-five per cent to global G.D.P. growth—as much as four hundred and fifty billion dollars. “Countries with a larger number of connections in the global network of flows increase their GDP growth by up to 40 percent more than less connected countries do,” the M.G.I. said.
But the case against free trade seems also to be based on Ricardo’s premises, albeit with heightened compassion for the Portuguese weavers and British wheat farmers. American critics—Bernie Sanders, earnestly; Donald Trump, cannily—argue that trade decimated U.S. manufacturing by forcing American products into competition with countries where wages, labor, and environmental standards are not nearly as strong as those in America, or by ignoring how some countries, especially China, manipulate their currency to encourage exports. Sanders launched his post-primary movement, “Our Revolution,” in late August, with an e-mail to potential donors. The most conspicuous demand to rally the troops was opposition to the T.P.P. “Since 2001, nearly 60,000 manufacturing plants in this country have been shut down,” the e-mail said, “and we have lost almost 5 million decent-paying manufacturing jobs. NAFTA alone led to the loss of almost three-quarters of a million jobs—the Permanent Normalized Trade Agreement with China cost America four times that number: almost 3 million jobs.” These agreements “are not the only reason” why manufacturing in the United States has declined, the e-mail goes on, but “they are important factors.”
Such evidence for why the T.P.P. should be thrown out is hard to dispute, since the e-mail doesn’t say what jobs were gained because of past deals, or explore what other “factors” may be important. President Obama, the champion of the T.P.P., may grant that certain provisions of the deal might be strengthened in favor of American standards without agreeing with “Our Revolution” on what’s bathwater and what’s baby. What’s clearer is that the anti-trade message is hitting home, especially among the hundred and fifty million Americans, about sixty-one per cent of the adult population, with no post-high-school degree of any kind.
The investor Jeremy Grantham in July wrote an op-ed in Barron’s noting that some ten million net new jobs were created in the U.S. since the lows of 2009 (the actual number being fifteen million), while “a remarkable 99 percent” excluded people without a university degree. That’s a crisis, not of unemployment but of unemployability, which backshadows skepticism about the T.P.P. and trade as a whole. Trump’s lead over Hillary Clinton among less-well-educated white voters remains solid, in spite of his alleged sexual predations; a large number of voters remain drawn to his grousing about the balance-of-trade deficit—which he presents as if it were a losing football score. Clinton has apparently decided to pass up the teachable moment, pretty much adopting Sanders’s anti-trade line, though her private views almost certainly remain more nuanced. In an e-mail exposed by the WikiLeaks hack, purporting to detail a conversation between Clinton aides, she allegedly told Banco Itau, a Brazilian bank, in 2013, that she favored, at “some time in the future,” a “hemispheric common market, with open trade and open borders”—a curious case of a leak embarrassing a candidate by showing her to be more visionary and expert than she wants to appear.
The anxiety is understandable, but the focus on trade deals seriously underestimates the changes that have reshaped global corporations over the past generation. Trade, increasingly, is mostly not in finished goods like Portuguese wine. It is, rather, in components moving within corporate networks—that is, from federated sources toward final assembly, then on to sales channels, in complex supply chains. An estimated sixty per cent of international trade happens within, rather than between, global corporations: that is, across national boundaries but within the same corporate group. It is hard to shake the image of global corporations as versions of post-Second World War U.S. multinationals: huge command-and-control pyramids, replicating their operations in places where, say, customers are particularly eager or labor is particularly cheap. This is wrong. Corporations are hierarchies of product teams, which live in a global cloud. “Made in America” is an idealization.
The product manager of the Chevrolet Volt, which Obama singled out at the time of the auto bailout, told me in 2009 that the car was a kind of Lego build: the design was developed by an international team in Michigan, the chassis came from the U.S., the battery cells from Korea, the small battery-charging engine from Germany, the electrical harnesses from Mexico, suspension parts from Canada, and smartphone-integration software from Silicon Valley. More and more, the design of products and services happens in distributed hubs. The serendipitous sourcing of technologies and customer characteristics, the lowering of transaction costs, the trade flows enabled by accelerated financing and logistics—all of these—presume growing network integration and social media, the latter increasingly important to glean marketing data.
The point is that each component, and each step in production, adds value differently. Where value is added will depend on what corporate accountants call “cost structure”: how much of the component or step requires local materials, or unskilled labor, or skilled labor wedded to expensive capital equipment, or high transportation costs, and so forth. Some components, like the fuel injectors assembled into the Volt’s German engine, required high-technology production systems. Labor was a trivial part of the cost structure, and the engine could be built in the highest-wage region on the planet. Harnesses, in contrast, which required a much higher proportion of manual labor—and were relatively easy to ship—were inevitably sourced from places where workers make as little as a tenth as much as Americans, thousands of miles away. No tariffs can reverse this trend, and no currency manipulation can drive it. Between 2005 and 2012, the M.G.I. found, thirty-eight per cent of trade derived from “emerging economies,” up from fourteen per cent in 1990.
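To see how a cost structure drives sourcing, here is a deliberately crude landed-cost comparison. Every number below is invented for illustration and comes from neither GM nor the article; the sketch only shows how the arithmetic splits a capital-intensive part from a labor-intensive one:

# Hypothetical landed-cost comparison; all figures are invented.
# A component is sourced from whichever region has the lower total of
# labor, materials, capital overhead, and freight.
wages = {"high_wage": 30.0, "low_wage": 3.0}    # dollars per hour
freight = {"high_wage": 0.0, "low_wage": 20.0}  # extra shipping from far away

components = {
    # name: (labor_hours, materials, capital_overhead) per unit
    "fuel_injector": (0.5, 40.0, 55.0),  # capital-intensive, little manual labor
    "wire_harness":  (6.0, 15.0, 2.0),   # manual-labor-intensive, cheap to make
}

def landed_cost(labor_hours, materials, capital_overhead, region):
    return labor_hours * wages[region] + materials + capital_overhead + freight[region]

for name, (hrs, mat, cap) in components.items():
    costs = {r: round(landed_cost(hrs, mat, cap, r), 2) for r in wages}
    best = min(costs, key=costs.get)
    print(f"{name}: {costs} -> source from {best}")

# fuel_injector: labor is a sliver of the cost, so the high-wage region wins.
# wire_harness: labor dominates, so the low-wage region wins despite the freight.

The toy numbers only show that the split falls out of the arithmetic itself, which is why, as the paragraph above argues, neither tariffs nor currency manipulation can simply reverse it.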
by Bernard Avishai, New Yorker | Read more:
Image: Bill Pugliano
