Monday, April 18, 2016
Sunday, April 17, 2016
Neoliberalism – The Ideology At The Root of All Our Problems
Imagine if the people of the Soviet Union had never heard of communism. The ideology that dominates our lives has, for most of us, no name. Mention it in conversation and you’ll be rewarded with a shrug. Even if your listeners have heard the term before, they will struggle to define it. Neoliberalism: do you know what it is?
Its anonymity is both a symptom and cause of its power. It has played a major role in a remarkable variety of crises: the financial meltdown of 2007‑8, the offshoring of wealth and power, of which the Panama Papers offer us merely a glimpse, the slow collapse of public health and education, resurgent child poverty, the epidemic of loneliness, the collapse of ecosystems, the rise of Donald Trump. But we respond to these crises as if they emerge in isolation, apparently unaware that they have all been either catalysed or exacerbated by the same coherent philosophy; a philosophy that has – or had – a name. What greater power can there be than to operate namelessly?
So pervasive has neoliberalism become that we seldom even recognise it as an ideology. We appear to accept the proposition that this utopian, millenarian faith describes a neutral force; a kind of biological law, like Darwin’s theory of evolution. But the philosophy arose as a conscious attempt to reshape human life and shift the locus of power.
Neoliberalism sees competition as the defining characteristic of human relations. It redefines citizens as consumers, whose democratic choices are best exercised by buying and selling, a process that rewards merit and punishes inefficiency. It maintains that “the market” delivers benefits that could never be achieved by planning.
Attempts to limit competition are treated as inimical to liberty. Tax and regulation should be minimised, public services should be privatised. The organisation of labour and collective bargaining by trade unions are portrayed as market distortions that impede the formation of a natural hierarchy of winners and losers. Inequality is recast as virtuous: a reward for utility and a generator of wealth, which trickles down to enrich everyone. Efforts to create a more equal society are both counterproductive and morally corrosive. The market ensures that everyone gets what they deserve.
We internalise and reproduce its creeds. The rich persuade themselves that they acquired their wealth through merit, ignoring the advantages – such as education, inheritance and class – that may have helped to secure it. The poor begin to blame themselves for their failures, even when they can do little to change their circumstances.
Never mind structural unemployment: if you don’t have a job it’s because you are unenterprising. Never mind the impossible costs of housing: if your credit card is maxed out, you’re feckless and improvident. Never mind that your children no longer have a school playing field: if they get fat, it’s your fault. In a world governed by competition, those who fall behind become defined and self-defined as losers. (...)
The term neoliberalism was coined at a meeting in Paris in 1938. Among the delegates were two men who came to define the ideology, Ludwig von Mises and Friedrich Hayek. Both exiles from Austria, they saw social democracy, exemplified by Franklin Roosevelt’s New Deal and the gradual development of Britain’s welfare state, as manifestations of a collectivism that occupied the same spectrum as nazism and communism.
In The Road to Serfdom, published in 1944, Hayek argued that government planning, by crushing individualism, would lead inexorably to totalitarian control. Like Mises’s book Bureaucracy, The Road to Serfdom was widely read. It came to the attention of some very wealthy people, who saw in the philosophy an opportunity to free themselves from regulation and tax. When, in 1947, Hayek founded the first organisation that would spread the doctrine of neoliberalism – the Mont Pelerin Society – it was supported financially by millionaires and their foundations.
With their help, he began to create what Daniel Stedman Jones describes in Masters of the Universe as “a kind of neoliberal international”: a transatlantic network of academics, businessmen, journalists and activists. The movement’s rich backers funded a series of thinktanks which would refine and promote the ideology. Among them were the American Enterprise Institute, the Heritage Foundation, the Cato Institute, the Institute of Economic Affairs, the Centre for Policy Studies and the Adam Smith Institute. They also financed academic positions and departments, particularly at the universities of Chicago and Virginia.
As it evolved, neoliberalism became more strident. Hayek’s view that governments should regulate competition to prevent monopolies from forming gave way – among American apostles such as Milton Friedman – to the belief that monopoly power could be seen as a reward for efficiency.
Something else happened during this transition: the movement lost its name. In 1951, Friedman was happy to describe himself as a neoliberal. But soon after that, the term began to disappear. Stranger still, even as the ideology became crisper and the movement more coherent, the lost name was not replaced by any common alternative.
by George Monbiot, The Guardian | Read more:
Image: Naomi Klein by Anya Chibis
When Bitcoin Grows Up
It’s impossible to discuss new developments in money without thinking for a moment about what money is. The best place to start thinking about that is with money itself. Consider the UK’s most common paper money, the English five or ten or twenty quid note. On one side we have a famous dead person: Elizabeth Fry or Charles Dickens or Adam Smith, depending on whether it’s a five or ten or twenty. On the other we have a picture of the queen, and just above that the words ‘I promise to pay the bearer on demand the sum of’, and then the value of the note, and the signature of the cashier of the Bank of England.
It’s worth thinking about that promise to ‘pay the bearer on demand the sum of ten pounds’. When we parse it, it’s not clear what it means. Ten pounds of what? We’ve already got ten pounds. That’s exactly what we’re holding in our hand. It doesn’t mean, pay the bearer on demand ten pounds’ worth of gold: the link between currency and gold was ended in 1971, and anyway, Gordon Brown sold off the Bank of England’s gold reserves in the 1990s.
The fact is, there’s no answer to the question, ten pounds of what? The ten pound note is worth what it claims it is because the state, in the form of the Bank of England, says so, and we choose to believe it. This is what students of currency call ‘fiat’ money, money whose value has been willed into being by the state. The value of fiat money is an act of faith. There are quirks to this. In the case of the pound coin, if we ask how much it’s worth, the answer is obvious: a pound is worth a pound. It shouldn’t be, though. According to the Royal Mint, which actually makes the stuff, 3 per cent of all pound coins in circulation are fake. Allowing for that, we should discount the price of our pound coin, and mathematically assign it a value of 97p.
In real life, there’s no need to do that, because the overwhelming probability is that you won’t have any difficulty spending your fake pound for its full nominal value. (That’s unless you’re caught out by a coin slot which rejects your money. Most people attribute the annoying frequency with which this happens to a problem with coin slots; mostly, though, it’s a problem with the currency. The other time you’ll have trouble with your fake coin is when you get one of the mutant squishy ones which look like partially chewed fruit pastilles and are so badly forged they verge on the endearing.) They’re worth what they claim because we choose to believe in them. Your mathematically determined 97p of coin is worth a quid because we believe it’s worth a quid. We trust it. That’s the first main point about money. Its value rests on our belief in its value, underwritten by the authority of the state.
For the second main point about the nature of money, we need to travel to the Pacific Ocean. In Micronesia, about 1800 miles north of the eastern corner of Australia, there’s a group of islands called Yap. It has a population of 11,000 and is largely unvisited except by divers, but it’s a very popular place with economists talking about the nature of money, starting with a fascinating paper by Milton Friedman, ‘The Island of Stone Money’, published in 1991. There’s a particularly good retelling of the story by Felix Martin in his 2013 book Money: The Unauthorised Biography.
Yap has no metal. There’s nothing to make into coins. What the Yapese do instead is sail 250 miles to an island called Palau, where there’s a particular kind of limestone not available on their home island. They quarry the limestone, and then shape it into circular wheel-like forms with a hole in the middle, called fei. Some of these fei stones are absolutely huge, fully 12 feet across. Then they sail the fei back to Yap, where they’re used as money.
The great advantage of the fei being made from this particular stone is that they’re impossible to counterfeit, because there’s none of the limestone on Yap. The fei are rare and difficult to get by definition, so they hold their value well. You can’t fake a fei. Just as you have to work to get money in a developed economy – so the money constitutes a record of labour – the fei are an unfakeable record of the labour that went into their creation. In addition, the big ones have the advantage that they’re impossible to steal. By the same token, though, they’re impossible to move, so what happens is that if you want to spend some of the money, you just agree that somebody else now owns the coin. A coin sitting outside somebody’s house can be transferred backwards and forwards as part of a series of transactions, and all that actually happens is that people change their minds about who now owns it. Everyone agrees that the money has been transferred. The real money isn’t the fei, but the idea of who owns the fei. The register of ownership, held in the community memory, is the money.
It has sometimes happened to the Yapese that their boats are hit by stormy weather on the way back from Palau, and to save their own lives, the men have to chuck the big stones overboard. But when they get back to Yap they report what happened, and everyone accepts it, and the ownership of the stone is assigned to whoever quarried it, and the stone can still be used as a valid form of money because ownership can be exchanged even though the actual stone is five miles down at the bottom of the Pacific.
That example seems bizarre, because the details are so vivid and exotic, but our money functions in the same way. The register is the money. This is the second main point about the nature of money. We think of money as being the stuff in our wallets and purses; but most money isn’t that. It’s not notes and coins. In 2006, for instance, the total amount of money in the world in terms of value was $473 trillion. That’s a number so big it’s very difficult to get your head round: about £45,000 per head for all seven billion people on the planet. Of that $473 trillion, less than a tenth, about $46 trillion, was cash in the form of banknotes and coins. More than 90 per cent of money isn’t money in a physical sense. That number is even bigger in the UK, where only about 4 per cent of money is in the form of cash. What it is instead is entries on a ledger. It’s numbers on your bank balance, the electronic records of debits and credits that are created every time we spend money.
When we say we spend money, what we’re mainly doing is making entries on registers. Your work results in a weekly or monthly credit from your employer’s account to your account, maybe with another transfer of PAYE tax to the government, also your pension contribution if you make one, any forms of insurance, then a chunk automatically going off to your landlord or mortgage provider – all heading to different parts of the financial system, all of them nothing other than movement between and among all these various ledgers and registers. This is what almost all of what we call money mainly is: numbers moving on registers. It’s the same system they have on Yap. (...)
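Lanchester's point that spending is mostly bookkeeping can be sketched as a toy register in a few lines of Python. Everything here is invented for illustration (the names, the amounts, the single shared ledger); real payment systems reconcile across many institutions' ledgers, but the principle is the same: a transfer is nothing more than rewriting entries.

```python
# A toy register: money as entries on a ledger, as on Yap.
# All names and balances are invented for illustration.

ledger = {"employer": 10_000, "worker": 0, "landlord": 0, "hmrc": 0}

def transfer(register, payer, payee, amount):
    """Move value by rewriting the register -- no physical money involved."""
    if register[payer] < amount:
        raise ValueError("insufficient funds")
    register[payer] -= amount
    register[payee] += amount

# Payday: salary in, then tax and rent straight back out again.
transfer(ledger, "employer", "worker", 2_000)   # monthly salary
transfer(ledger, "worker", "hmrc", 400)         # PAYE tax
transfer(ledger, "worker", "landlord", 900)     # rent

print(ledger["worker"])  # 700
```

Nothing "moves" in any physical sense: three numbers go down, three go up, and everyone agrees the register is authoritative, exactly as with the fei.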
Bitcoin is a new form of electronic money, launched in a paper published on 31 October 2008 by a pseudonymous person or persons calling himself, herself or themselves Satoshi Nakamoto. Note the date: this was shortly after the collapse of Lehman Brothers on 15 September, and the near death of the global financial system. Just as the Civil War was the prompt for the United States to end private money, and the crisis of Kenyan democracy led to the explosive growth of M-Pesa, the global financial crisis seems to have been a crucial spur, if not to the development of bitcoin, then certainly to the timing of its launch.
Bitcoin’s central and most exciting piece of technology is something called the blockchain. This is a register of all the bitcoin transactions that have ever happened. Every time something is bought or sold using bitcoin – remember, that means every time something moves from one place in the register to somewhere else – the new transaction is added to the blockchain and authenticated by a network of computers. The techniques are cryptographic. It’s impossible to fake a new addition to the chain, but it’s relatively easy (by relatively easy, I mean relatively easy for a huge assembled array of computing power) to verify a legitimate transaction. So: impossible to fake but simple to verify. The entities transferring the money are anonymous, and at the same time completely transparent: anyone can see the bitcoin addresses involved, but nobody necessarily knows to whom they belong.
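The hash-chaining idea behind the blockchain, where each entry commits to everything before it, so additions can't be faked but any link is easy to verify, can be sketched as follows. This is a bare illustration of the chaining principle only, not Bitcoin's actual data structures, which also involve proof-of-work, Merkle trees and a peer-to-peer network of verifiers; the transactions shown are made up.

```python
import hashlib
import json

def block_hash(block):
    """Hash a block's contents (which include the previous block's hash)."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def add_block(chain, transactions):
    """Append a block that commits to the entire chain before it."""
    prev = block_hash(chain[-1]) if chain else "0" * 64
    chain.append({"prev_hash": prev, "transactions": transactions})

def verify(chain):
    """Check every link: tampering with any past block breaks the chain."""
    for i in range(1, len(chain)):
        if chain[i]["prev_hash"] != block_hash(chain[i - 1]):
            return False
    return True

chain = []
add_block(chain, [{"from": "alice", "to": "bob", "amount": 5}])
add_block(chain, [{"from": "bob", "to": "carol", "amount": 2}])
print(verify(chain))  # True

# Rewriting history invalidates every later link.
chain[0]["transactions"][0]["amount"] = 500
print(verify(chain))  # False
```

Verification is cheap (recompute one hash per link); forging a new history means rebuilding every subsequent hash, which is what Bitcoin's proof-of-work makes prohibitively expensive. Hence "impossible to fake but simple to verify".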
This combination of features has extraordinary power. It means that you can trust the blockchain, while knowing nothing about anyone else attached to it. Bitcoin is in effect a register like the one kept in people’s memory on Yap, but it’s a register that anyone can see and to which everyone assents. For the first time in human history, we have a register that does not need to be underwritten by some form of authority or state power, other than itself – and, as I’ve argued, that register isn’t some glossy add-on to the nature of money, it actually is how money works. A decentralised, anonymous, self-verifying and completely reliable register of this sort is the biggest potential change to the money system since the Medici. It’s banking without banks, and money without money.
by John Lanchester, LRB | Read more:
Image: via:
Labels: Business, Economics, Politics, Security, Technology
Monica Lewinsky: ‘The Shame Sticks To You Like Tar’
One night in London in 2005, a woman said a surprisingly eerie thing to Monica Lewinsky. Lewinsky had moved from New York a few days earlier to take a master’s in social psychology at the London School of Economics. On her first weekend, she went drinking with a woman she thought might become a friend. “But she suddenly said she knew really high-powered people,” Lewinsky says, “and I shouldn’t have come to London because I wasn’t wanted there.”
Lewinsky is telling me this story at a table in a quiet corner of a West Hollywood hotel. We had to pay extra for the table to be curtained off. It was my idea. If we hadn’t done it, passersby would probably have stared. Lewinsky would have noticed the stares and would have clammed up a little. “I’m hyper-aware of how other people may be perceiving me,” she says.
She’s tired and dressed in black. She just flew in from India and hasn’t had breakfast yet. We’ll talk for two hours, after which there’s only time for a quick teacake before she hurries to the airport to give a talk in Phoenix, Arizona, and spend the weekend with her father.
“Why did that woman in London say that to you?” I ask her.
“Oh, she’d had too much to drink,” Lewinsky replies. “It’s such a shame, because 99.9% of my experiences in England were positive, and she was an anomaly. I loved being in London, then and now. I was welcomed and accepted at LSE, by my professors and classmates. But when something hits a core trauma – I actually got really retriggered. After that I couldn’t go more than three days without thinking about the FBI sting that happened in ’98.”
Seven years earlier, on 16 January 1998, Lewinsky’s friend – an older work colleague called Linda Tripp – invited her for lunch at a mall in Washington DC. Lewinsky was 24. They’d been working together at the Pentagon for nearly two years, during which time Lewinsky had confided in her that she’d had an affair with President Bill Clinton. Unbeknown to Lewinsky, Tripp had been secretly recording their telephone conversations – more than 20 hours of them. The lunch was a trap. When Tripp arrived, she motioned behind her and two federal agents suddenly appeared. “You’re in trouble,” they told Lewinsky. (...)
Lewinsky doesn’t like thinking about her past. It was hard to get her to agree to this interview. She rarely gives them and she nearly cancelled this one. I approached her on several previous occasions, when I was writing a book on public shaming, and she kept saying no.
It’s not because she’s difficult. She isn’t. She’s very likable and smart. But it feels as if I’m sitting with two Lewinskys. There’s the open, friendly one. This is, I suspect, the actual Lewinsky. In a parallel world where nothing cataclysmic happened in the 1990s, I imagine this would be the entire Lewinsky. But then there’s the nervy one who sometimes suddenly stops mid-sentence and says, “I’m hesitating because I have to think through the consequences of saying this. I still have to manage a lot of trauma to do what I’m doing, even to come here. Any time I put myself in the hands of other people…”
“What’s your nightmare scenario?” I ask her.
“The truth is I’m exhausted,” she says. “So I’m worried I may misspeak, and that thing will become the headline and the cycle will start all over again.”
The reason why she finally agreed to meet me, despite her anxieties, is that the Guardian is highlighting the issue of online harassment through its series The web we want – an endeavour she approves of. “Destigmatising the shame around online harassment is the first step,” she says. “Well, the first step is recognising there’s a problem.”
Lewinsky was once among the 20th century’s most humiliated people, ridiculed across the world. Now she’s a respected and perceptive anti-bullying advocate. She gives talks at Facebook, and at business conferences, on how to make the internet more compassionate. She helps out at anti-bullying organisations like Bystander Revolution, a site that offers video advice on what to do if you’re afraid to go to school, or if you’re a victim of cyberbullying.
A year ago she gave a TED talk about being the object of the first great internet shaming: “Overnight, I went from being a completely private figure to a publicly humiliated one worldwide. Granted, it was before social media, but people could still comment online, email stories, and, of course, email cruel jokes. I was branded as a tramp, tart, slut, whore, bimbo, and, of course, ‘that woman’. It was easy to forget that ‘that woman’ was dimensional, had a soul, and was once unbroken.” Lewinsky’s talk was dazzling and now gets taught in schools alongside Nathaniel Hawthorne’s The Scarlet Letter. I can think of nobody I’d rather talk to about the minutiae of online bullying – who does it and why, the turmoil it can spark, and how to make things better. (...)
Back then, the world basically saw Lewinsky as the predator. Late-night talkshow hosts routinely made misogynistic jokes, with Jay Leno among the cruellest: “Monica Lewinsky has gained back all the weight she lost last year. [She’s] considering having her jaw wired shut but then, nah, she didn’t want to give up her sex life.” And so on.
In February 1998, the feminist writer Nancy Friday was asked by the New York Observer to speculate on Lewinsky’s future. “She can rent out her mouth,” she replied.
I hope those mainstream voices wouldn’t treat Lewinsky quite this badly if the scandal broke today. Nowadays most people understand those jokes to be slut-shaming, punching down, don’t they?
“I hope so,” Lewinsky says. “I don’t know.”
Either way, misogyny is still thriving. When the Guardian began researching the online harassment of its own writers, they discovered something bleak: of the 10 contributors who receive the most abuse in the comment threads, eight are women – five white, three non-white – and the other two are black men. Overall, women Guardian writers get more abuse than men, regardless of what they write about, but especially when they write about rape and feminism. I noticed something similar during my two years interviewing publicly shamed people. When a man is shamed, it’s usually, “I’m going to get you fired.” When a woman is shamed it’s, “I’m going to rape you and get you fired.”
With statistics like these, it’s no surprise that many consider this an ideological issue – that the focus should be on combatting the misogynistic, racist abuse committed by men. But Lewinsky doesn’t see it that way. “A lot of vicious things that happen online to women and minorities do happen at the hands of men,” she says, “but they also happen at the hands of women. Women are not immune to misogyny.”
“That happened to you,” I say. “With people like Nancy Friday. You found yourself being attacked by ideologues.”
“Yes,” Lewinsky says. “I think it’s fair to say that whatever mistakes I made, I was hung out to dry by a lot of people – by a lot of the feminists who had loud voices. I wish it had been handled differently. It was very scary and very confusing to be a young woman thrust on to the world stage and not belonging to any group. I didn’t belong to anybody.”
Lewinsky is telling me this story at a table in a quiet corner of a West Hollywood hotel. We had to pay extra for the table to be curtained off. It was my idea. If we hadn’t done it, passersby would probably have stared. Lewinsky would have noticed the stares and would have clammed up a little. “I’m hyper-aware of how other people may be perceiving me,” she says.
She’s tired and dressed in black. She just flew in from India and hasn’t had breakfast yet. We’ll talk for two hours, after which there’s only time for a quick teacake before she hurries to the airport to give a talk in Phoenix, Arizona, and spend the weekend with her father.
“Why did that woman in London say that to you?” I ask her.
“Oh, she’d had too much to drink,” Lewinsky replies. “It’s such a shame, because 99.9% of my experiences in England were positive, and she was an anomaly. I loved being in London, then and now. I was welcomed and accepted at LSE, by my professors and classmates. But when something hits a core trauma – I actually got really retriggered. After that I couldn’t go more than three days without thinking about the FBI sting that happened in ’98.”
by Jon Ronson, The Guardian | Read more:
Image: Steve Schofield
Saturday, April 16, 2016
In the Future, We Will Photograph Everything and Look at Nothing
“Today everything exists to end in a photograph,” Susan Sontag wrote in her seminal 1977 book “On Photography.” This was something I thought about when I recently read that Google was making its one-hundred-and-forty-nine-dollar photo-editing suite, the Google Nik Collection, free. This photo-editing software is as beloved among photographers as, say, Katz’s Deli is among those who dream of pastrami sandwiches.
Before Google bought it, in 2012, the collection cost five hundred dollars. It is made up of seven pieces of specialized software that, when used in combination with other photo-editing software, such as Adobe Photoshop or Adobe Lightroom, give photographers a level of control akin to that once found in the darkroom. They can mimic old film stock, add analog photo effects, or turn color shots into black-and-white photos. The suite can transform modestly good photos into magical ones. Collectively, Nik’s intellectual sophistication is that of a chess grand master. I don’t mind paying for the software, and neither do thousands of photographers and enthusiasts. So, like many, I wondered, why would Google make it free?
My guess is that it wants to kill the software, but it doesn’t want the P.R. nightmare that would follow. Remember the outcry over its decision to shut down its tool for R.S.S. feeds, Google Reader? Nik loyalists are even more rabid. By making the software free, the company can both ignore the product and avoid a backlash. But make no mistake: it is only a matter of time before Nik goes the way of the film camera—into the dustbin of technological history.
“The giveaway is bad news, as it means the software they paid for has almost [certainly] reached the end of the line in terms of updates,” wrote PC World. And, as Google explained in the blog post announcing the news, the company will “focus our long-term investments in building incredible photo editing tools for mobile.” That means Google Photos, the company’s tool for storing and sorting, and Nik’s own Snapseed app for mobile phones.
Google’s comments—disheartening as they might be—reflect the reality of our shifting technologies. Sure, we all like listening to music on vinyl, but that doesn’t mean streaming music on Spotify is bad. Streaming just fits today’s world better. I love my paper and ink, but I see the benefits of the iPad and Apple Pencil. Digital photography is going through a similar change, and Google is smart to refocus.
To understand Google’s decision, one needs to understand how our relationship with photographs has changed. From analog film cameras to digital cameras to iPhone cameras, it has become progressively easier to take and store photographs. Today we don’t even think twice about snapping a shot. About two years ago, Peter Neubauer, the co-founder of the Swedish database company Neo Technology, pointed out to me that photography has seen the value shift from “the stand-alone individual aesthetic of the artist to the collaborative and social aesthetic of services like Facebook and Instagram.” In the future, he said, the “real value creation will come from stitching together photos as a fabric, extracting information and then providing that cumulative information as a totally different package.”
His comments make sense: we have come to a point in society where we are all taking too many photos and spending very little time looking at them.
“The definition of photography is changing, too, and becoming more of a language,” the Brooklyn-based artist and professional photographer Joshua Allen Harris told me. “We’re attaching imagery to tweets or text messages, almost like a period at the end of a sentence. It’s enhancing our communication in a whole new way.”
In other words, “the term ‘photographer’ is changing,” he said. As a result, photos are less markers of memories than they are Web-browser bookmarks for our lives. And, just as with bookmarks, after a few months it becomes hard to find photos or even to navigate back to the points worth remembering. Google made hoarding bookmarks futile. Today we think of something, and then we Google it. Photos are evolving along the same path as well.
by Om Malik, New Yorker | Read more:
Image: Jordan Strauss/Invision/AP
Sequencing the North by Northwest Crop Dusting Scene
The image above of the crop dusting plane chasing down Cary Grant in Alfred Hitchcock’s North by Northwest remains one of the most iconic in all of moviedom. That this is so more than 50 years after its theatrical release only goes to show the visionary power and mastery of craft that Alfred Hitchcock brought to film making. (You can see a 4:23 long sequence at YouTube; but they do not allow embedding)
Some time ago, I went to an exhibit at the Block Museum of Art at Northwestern University, near Chicago. It was filled with original notes, drawings, and other artifacts from Hitchcock’s work. I was reminded of this when thumbing through my copy of “Casting a Shadow: Creating the Alfred Hitchcock Film” by Will Schmenner and Corinne Granof, which accompanied that show.
The film is a classic take on mistaken identity, with Grant playing a New York advertising executive mistaken for a government agent by foreign spies. The famous Crop Dusting sequence discussed up top is where we learn how far the spies are willing to go to get rid of Grant, but we also see that he has more survival skills than they bargained for.
The book is a cinephile’s delight, filled with all manner of insider info on how Hitchcock actually made movies.
One of my favorite pieces of Hitchcock lore from the book is the cinematographer’s shot list for the crop dusting sequence. Each of its 61 numbered points represents a specific camera angle, a specific shot, as detailed below:
CONTINUITY FOR CROP DUSTING SEQUENCE, SCENE 115,
1. High Shot – Bus arriving – Man out.
2. Lonely figure (Sketch 3)
(Shot Monday, Slate 211)
3. Waist Shot – Thornhill looks about him in four directions.
a. Process plate for all Thornhill’s Close Ups.
4. a. P.O.V.
Through wide fence onto plowed field.
(Shot Monday, Slate 203X)
b. P.O.V.
Empty road from where bus came
(Shot Monday, slate 201)
c. P.O.V.
Wast Brush
(Shot Monday, Slate 202X)
d. P.O.V.
Corn Field
(Shot Monday, Slate 204X)
e. P.O.V.
Empty road ahead
(Shot Monday, Slate 210X)
5. Closer Shot – Thornhill glances at west with satisfaction and then looks up road expectantly.
by Barry Ritholtz, The Big Picture | Read more:
Image: North by Northwest
Friday, April 15, 2016
Clinton Campaign Accuses Sanders of Trying to Win Nomination
The war of words between the two Democratic camps heated up over the weekend, as the Clinton campaign accused Vermont Senator Bernie Sanders of “blatantly attempting to win the Democratic nomination for President.”
Appearing on NBC’s “Meet the Press,” the Clinton campaign spokesman Harland Dorrinson said that Sanders’s actions in the past few weeks “left little doubt as to what his true intentions are—namely, to be the Party’s nominee.”
“He’s been raising money, he’s been running in primaries, and, yes, he’s been winning caucuses,” the Clinton aide said. “It’s time for Bernie Sanders to come clean with the American people and admit what he’s really up to.”
“It’s deeply troubling that what appeared at first to be a purely symbolic candidacy has turned into something else entirely,” he said.
In an interview on CNN, Secretary Clinton said that she would not “take the bait” when she was asked whether she thought Sanders was trying to win the nomination, but she stopped short of disavowing the accusation.
by Andy Borowitz, New Yorker | Read more:
Image: Jabin Botsford/Washington Post/via Getty
When Dating Algorithms Can Watch You Blush
“Let’s get the basics over with,” W said to M when they met on a 4-minute speed date. “What are you studying?”
“Uh, I’m studying econ and poli sci. How about you?”
“I’m journalism and English literature.”
“OK, cool.”
“Yeah.”
They talked about where they were from (she hailed from Iowa, he from New Jersey), life in a small town, and the transition to college. An eavesdropper would have been hard-pressed to detect a romantic spark in this banal back-and-forth. Yet when researchers, who had recorded the exchange, ran it through a language-analysis program, it revealed what W and M confirmed to be true: They were hitting it off.
The researchers weren’t interested in what the daters discussed, or even whether they seemed to share personality traits, backgrounds, or interests. Instead, they were searching for subtle similarities in how they structured their sentences—specifically, how often they used function words such as it, that, but, about, never, and lots. This synchronicity, known as “language style matching,” or LSM, happens unconsciously. But the researchers found it to be a good predictor of mutual affection: An analysis of conversations involving 80 speed daters showed that couples with high LSM scores were three times as likely as those with low scores to want to see each other again.
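The mechanics behind LSM are simple enough to sketch. Below is a toy Python version of the idea, assuming the published formula (per-category similarity of function-word usage rates, averaged across categories); the tiny word lists here are illustrative stand-ins for the full LIWC lexicon the researchers actually use, so the numbers are not comparable to the study’s.

```python
# Toy sketch of language style matching (LSM). Real studies use the full
# LIWC function-word categories; these small sets are illustrative only.

FUNCTION_WORD_CATEGORIES = {
    "articles": {"a", "an", "the"},
    "pronouns": {"i", "you", "it", "we", "they", "that", "this"},
    "prepositions": {"about", "in", "on", "of", "to", "with"},
    "negations": {"no", "not", "never"},
    "quantifiers": {"lots", "few", "much", "many"},
    "conjunctions": {"and", "but", "or"},
}

def usage_rates(text):
    """Fraction of a speaker's words falling in each function-word category."""
    words = [w.strip(".,?!'\"") for w in text.lower().split()]
    total = max(len(words), 1)
    return {
        cat: sum(w in vocab for w in words) / total
        for cat, vocab in FUNCTION_WORD_CATEGORIES.items()
    }

def lsm_score(text_a, text_b):
    """Per-category similarity 1 - |p1-p2|/(p1+p2), averaged; ranges 0..1."""
    a, b = usage_rates(text_a), usage_rates(text_b)
    sims = [1 - abs(a[c] - b[c]) / (a[c] + b[c] + 1e-4) for c in a]
    return sum(sims) / len(sims)

# The opening exchange from the speed date above, scored for style matching.
score = lsm_score(
    "Let's get the basics over with. What are you studying?",
    "Uh, I'm studying econ and poli sci. How about you?",
)
print(round(score, 2))  # prints a similarity somewhere in [0, 1]
```

In the actual studies, each speaker's rates are computed over their whole side of the conversation, and the threshold separating high from low matching is set relative to the sample, not an absolute cutoff.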
It’s not just speech patterns that can encode chemistry. Other studies suggest that when two people unknowingly coordinate nonverbal cues, such as hand gestures, eye gaze, and posture, they’re more apt to like and understand each other. These findings raise a tantalizing question: Could a computer know whom we’re falling for before we do?
Picture this: You’re home from work for the evening. You curl up on the couch, steel your nerves, maybe pour yourself a glass of wine, and open the dating app on your phone. Then for 30 minutes or so, you commit to a succession of brief video dates with other users who satisfy a basic set of criteria, such as gender, age, and location. Meanwhile, using speech- and image-recognition technologies, the app tracks both your and your dates’ words, gestures, expressions, even heartbeats.
Afterward, you rate your dates. And so does the app’s artificial intelligence, which can recognize signs of compatibility (or incompatibility) that you might have missed. At the end of the night, the app tells you which prospects are worth a second look. Over time, the AI might even learn (via follow-up experiments) which combination of signals predicts the happiest relationships, or the most enduring.
Welcome to the vision of Eli Finkel. A professor of psychology and management at Northwestern University and a co-author of the LSM study, Finkel is a prominent critic of popular dating sites such as eHarmony and Chemistry, which claim to possess a formula that can connect you with your soul mate. Finkel’s beef with these sites, he says, isn’t that they “use math to get you dates,” as OKCupid puts it. It’s that they go about it all wrong. As a result, Finkel argues, their matching algorithms likely foretell love no better than chance.
The problem, he explains, is that they rely on information about individuals who have never met—namely, self-reported personality traits and preferences. Decades of relationship research show that romantic success hinges more on how two people interact than on who they are or what they believe they want in a partner. Attraction, scientists tell us, is created and kindled in the glances we exchange, the laughs we share, and the other myriad ways our brains and bodies respond to one another.
Which is why, according to Finkel, we’ll never predict love simply by browsing photographs and curated profiles, or by answering questionnaires. “So the question is: Is there a new way to leverage the Internet to enhance matchmaking, so that when you get face to face with a person, the odds that you’ll be compatible with that person are higher than they would be otherwise?”
by Julia M. Klein, Nautilus | Read more:
Image: Jesse Chan-Norris / Flickr
Clinton and Goldman: Why It Matters
[ed. See also: Why the Goldman Sachs Settlement Is a $5 Billion Sham]
The Clintons’ connections to Goldman Sachs can be traced back to their beginnings in national politics, in December 1991, when Robert Rubin, then co-chairman and co-senior partner of the bank, met Bill Clinton at a Manhattan dinner party and was so impressed by him that he signed on as an economic adviser to Clinton’s campaign for the 1992 Democratic nomination. According to a November 2015 survey of Clinton donors by The Washington Post, Rubin and other Goldman partners “mobilized their networks to raise money for the upstart candidate.”
As Bill Clinton’s secretary of the treasury from January 1995 until July 1999, Rubin was an architect of the financial deregulation that left financial derivatives such as Collateralized Debt Obligations (CDOs) largely free of controls. This paved the way for the large-scale, unregulated speculation in financial derivatives by Wall Street banks beginning in the early 2000s. (Goldman itself continued to enjoy special access to Washington during the George W. Bush administration, with former Goldman chief executive Hank Paulson serving as Treasury Secretary from 2006 to 2009.)
These long-running ties with Goldman have paid off for the Clintons. According to a July 2014 analysis in the Wall Street Journal, from 1992 to the present Goldman has been the Clintons’ number one Wall Street contributor, based on speaking fees, charitable donations, and campaign contributions, the three pillars of what I’ve called the Clinton System. As early as 2000, Goldman was the second most generous funder—after Citigroup—of Hillary Clinton’s 2000 Senate campaign, with a contribution of $711,000. In the early 2000s, Bill Clinton was also a Goldman beneficiary, receiving $650,000 from Goldman for four speeches delivered between December 2004 and June 2005. (The transcripts of these speeches do not appear to be currently available.)
By the winter of 2006–2007, however, Goldman and its CEO Lloyd Blankfein were becoming deeply involved in the collapsing housing bubble—and engaging in the practices that have since resulted in years of investigations and lawsuits. Data gathered mostly from the Corporate Research Project, a public interest website, show that on thirteen occasions between 2009 and 2016, Goldman was penalized by US courts or government agencies for fraudulent or deceptive practices that were committed mostly between 2006 and 2009. Four of these penalties amounted to $300 million or more.
In July 2010 the Securities and Exchange Commission fined Goldman $550 million for the fraudulent marketing of its Abacus CDO; the bank had allowed its client John Paulson to stuff the CDO with toxic ingredients, mostly in the form of mortgage-backed securities (MBSs), and then to bet against the CDO when it was marketed by taking a short position. Paulson earned around $1 billion when the CDO lost value as it was designed to do. In August 2011 the Federal Housing Finance Agency sued Goldman for “negligent misrepresentation, securities laws violations and common fraud” in its dealings with the semi-public mortgage banks Fannie Mae and Freddie Mac. In August 2014 Goldman agreed as restitution to buy back $3.15 billion worth of securities it had sold to the two banks for $1.2 billion more than they were currently worth.
In July 2012 Goldman agreed to pay $25.6 million to settle a suit brought by the Public Employees Retirement System of Mississippi accusing the bank of defrauding investors in a 2006 offering of MBSs. In January 2013, the Federal Reserve announced that Goldman would pay $330 million to settle allegations of foreclosure abuse by its mortgage loan servicing operations. Finally, in January of this year, Goldman announced that it would pay $5 billion to settle multiple lawsuits brought by official agencies against the bank, mainly for fraudulent marketing of CDOs; the final terms of the settlement were released on April 11. (Through the availability of tax credits and allowances, Goldman may end up paying less.) Among the plaintiffs were the Department of Justice, the New York and Illinois Attorneys General, the National Credit Union Administration, and the Federal Home Loan Banks of Chicago and Seattle.
These are summary descriptions of Goldman transgressions, which do no more than point to a pattern of deceptive and often fraudulent trading in derivatives. To get a more detailed sense of what exactly the bank was doing with these trades, we have to look at Goldman’s own record of its behavior during the crash. This record, which is now in the public domain, provides a stark backdrop to Clinton’s recent dealings with Goldman. The story begins on December 14, 2006 when David Viniar, Chief Financial Officer at Goldman and thus number four in the Goldman hierarchy, convened a meeting on the thirtieth floor of Goldman’s Manhattan headquarters. This was the seat of power where, along with Viniar, the bank’s Big Three had their offices: CEO Lloyd Blankfein, and co-vice presidents Gary Cohn and Jon Winkelried.
At the meeting Viniar called for an in-depth review of Goldman’s holdings of mortgage backed securities because its “position in subprime mortgage related assets was too long, and its risk exposure was too great.” The next day Viniar emailed a subordinate about the deteriorating housing markets, its effect on mortgage-backed securities, and not only the risks but also the opportunities this opened up. In his email Viniar alerted his subordinates to the possibility that the bank could profit from the deterioration of its own assets: “My basic message was let’s be aggressive distributing things because there will be very good opportunities as the markets [go] into what is likely to be even greater distress and we want to be in a position to take advantage of them.”
By February 11, 2007, Goldman CEO Lloyd Blankfein himself was urging Goldman’s mortgage department to get rid of deteriorating assets: “Could/should we have cleaned up these books before and are we doing enough right now to sell off cats and dogs in other books throughout the division?” But this was no easy task. In 2006 and 2007 Goldman created 120 complex financial derivatives, relying heavily on subprime mortgages grouped together in MBSs and CDOs with a total value of around $100 billion.
The problem was that the derivatives Goldman was busy creating were clogged with assets that were rapidly losing value. Faced with what it saw as a collapsing market, especially in MBSs, Goldman abandoned some derivatives that were still “under construction” while liquidating others that were fully formed, selling off their components in the markets. If this is all Goldman had done—anticipating where the market was going and unloading its bad assets—it would not have been the target of multiple lawsuits, and Blankfein might rightly be esteemed on Wall Street as the great survivor of the crash.
But Goldman also persisted with the creation of new CDOs and MBSs and continued marketing its existing ones so that they too could become part of new CDOs. Although the trading strategies involved in these maneuvers were sometimes highly complex, the motive underlying them was not. The bank’s executives believed that they could make more money by repackaging their collapsing assets into new CDOs and MBSs, which they could still market to clients as investment opportunities. Crucially, in doing this, Goldman was not simply acting as a “market maker,” an institutional trader that simply buys and sells securities at publicly posted prices. By marketing these new financial products, Goldman was also acting as underwriter and placement agent for them, and as such was subject to additional rules on fair disclosure.
On multiple occasions, as the Federal Housing Finance Agency’s 2011 lawsuit alleged, Goldman failed to disclose to its own clients how risky many of its derivatives were and that the bank itself was betting against them by taking the short position.
Thursday, April 14, 2016
Ugg: the Look That Refused to Die
In December of last year, Kitson, a small chain of boutiques on the west coast of America, announced it was going out of business. The first Kitson store had opened back in 2000 on Robinson Boulevard, just on the edge of Beverly Hills; it was the kind of shop where you could impulse-buy a cupcake-printed tote bag or, during a crucial Hollywood breakup, “Team Aniston” and “Team Jolie” T-shirts. The biggest tabloid stars of the early millennium – Paris Hilton, Lindsay Lohan, Britney Spears – flocked to Kitson, and were often photographed by paparazzi as they walked out with the store’s signature baby blue shopping bags draped on their arms. Kitson was an ideal place to pick up the unofficial uniform of that era’s celebrity set: a candy-coloured Juicy Couture velour tracksuit and a pair of sheepskin-lined Ugg boots.
When Kitson, so emblematic of a certain pre-financial crisis excess, announced that it was closing its doors for good, it felt like the death knell to a ditzy and much-derided era. Many of the stars of that time – Lohan, for example – have lost their lustre, and leggings have replaced velour tracksuits as the modern woman’s errand-running outfit of choice. (The hot pink Juicy Couture sweats are now literally museum pieces: they will be on display at the V&A later this spring.) As a result, Uggs have come to embody a particularly repellent cultural moment that everyone is glad to be over with. In 2012, while filming The Bling Ring – based on the true story of a gang of southern California teenagers who burgled the homes of celebrities (including Paris Hilton) in 2008 and 2009 – Emma Watson tweeted a picture of herself in character as Nicki, wearing a short-sleeved pink Juicy Couture tracksuit and a pair of Uggs. “Nicki likes Lip Gloss, Purses, Yoga, Pole Dancing, Uggs, Louboutins, Juice Cleanses, Iced coffee and Tattoos.”
Uggs are certainly ugly, or at least inelegant. They look like something Frankenstein’s monster would wear if he were an elf. The shapeless, unstructured boots, pulled on in a hurry, can make anyone look like a slob, which has made them the target of special scorn. For as long as Uggs have been popular, it hasn’t been hard to find someone furiously denouncing them. “Ugg boots are not sexy,” the Independent declared in 2003, “unless you’re Mrs Bigfoot on a lone mission across Antarctica to find Mr Bigfoot.” When wearing the boots, a writer at the online beauty magazine The Gloss complained, “there’s nothing to indicate that you don’t have square, hideous shoe boxes in place of human feet”. In 2015, one coffee shop on Brick Lane in east London even banned Ugg-wearers from its premises – calling the boots “slag wellies”.
And yet, over the years, plenty of odd and unflattering shoes – pool sliders, clogs, tall platforms – have met with the approval of the fashion establishment. The problem with Uggs wasn’t that they were ugly; it’s that they were common.
But a funny thing happened on the way to fashion’s graveyard of regrettable fads: the ubiquitous Ugg has not gone anywhere. Uggs have quietly lingered on since their heyday, unnoticed but omnipresent – once you start paying attention, you’ll be shocked to discover how many people are still wearing them. Walk down any high street and focus on footwear, and you will see an army of sheepskin boots coming at you. They are worn by mothers running errands in town and in the country, paired with denim cut-off shorts at rock festivals, worn by teenagers on Saturday shopping trips.
In the reception area at Ugg corporate headquarters in Southern California, there is a bound album filled with snapshots of celebrities wearing the company’s products. It is arranged in alphabetical order, with separate sections for women and men, and is the size of the September issue of a fashion magazine, or maybe a small phone book. Many of the photographs are from the brand’s peak cultural moment in the mid-2000s, including six different pictures of Blake Lively and four of Leighton Meester, wearing Uggs between takes on the set of Gossip Girl. But there are enough photos from the past few years to make it clear that Uggs remain a perennial off-duty uniform for the famous: Ariana Grande wearing classic boots at an airport, paired with a massive Louis Vuitton bag; Charlize Theron wearing the Cardy boot, whose knitted exterior is meant to resemble a buttoned cardigan; Emma Watson (again) shopping in a white pair; Rosie Huntington-Whiteley crossing the street wearing Coquettes (Ugg slippers shaped like a flat clog or a boot with the top sliced off, which can be worn indoors or out); Hugh Jackman and the designer Valentino (separately) wearing the Butte snow boot. Last winter, I spotted Grace Coddington, the revered creative director-at-large of American Vogue, striding into work in a pair of short black Ugg boots, paired with a Céline bag.
The message of all these images – and perhaps the secret of Ugg’s apparently unstoppable success – is that if there is a dividing line between public glamour and private style, it might be a pair of cosy shearling boots. They are undeniably comfortable – soft and squishy and warm, as if your feet were in the embrace of someone who really loves you. The look and feel telegraphs a message of “I’m worth it” but also “this is me, off-duty”. At £150 a pair, they are neither cheap nor entirely out of range. They reside in the overlap of a Venn diagram for casual and indulgent.
Somehow Uggs, the boots that so many people loved to hate, have managed to defy the cruel logic of the fashion cycle and carry on – whether you approve of them or not.
Ugg has sold so many products – mostly footwear, but also clothing and home goods – that there are 3.7 items for every woman in America; 3.0 for every woman in the UK; 2.1 for Japan. (This doesn’t include the 2.5 million pairs of counterfeit Uggs that have been seized since 2007.) After a brief dip earlier this decade – when the haters proclaimed the long-overdue death of the Ugg – sales are climbing again: in 2014-15, Ugg sales were up 12.6% on the previous year, to $1.49bn, according to the most recent earnings report from Deckers Brands, the California-based footwear company that has owned Ugg since 1995.
by Marisa Meltzer, The Guardian | Read more:
Image: The Guardian
The Food Industrial Complex
In 2011, during a debate over the nutritional guidelines for school lunches, Congress decided that pizza counts as a vegetable. And not for the first time.
The American government first proposed that an unhealthy food—if it contains trace amounts of a healthy ingredient—could count as a vegetable in 1981. Looking for ways to cut the school lunch budget, the Reagan Administration suggested that cafeterias count condiments like pickle relish and ketchup toward nutritional requirements.
This was not good politics. Democrats and the press had a field day saying that Reagan had just classified ketchup as a vegetable. “This is one of the most ridiculous regulations I ever heard of,” Republican Senator John Heinz, owner of Heinz, told the press, “and I suppose I need not add that I know something about ketchup and relish.”
The Reagan Administration dropped the proposal, but it soon became law anyway. When the Obama Administration directed the Department of Agriculture to revise school lunch policies in 2011, experts took aim at the rule that allowed the tiny amount of tomato paste in pizza sauce to count toward the vegetable requirements of each meal.
Any changes made by the Department of Agriculture could jeopardize huge contracts for companies that supply food for school children’s lunches, so the food industry responded with a $5.6 million lobbying campaign. According to Margo Wootan, director of the Center for Science in the Public Interest, two multibillion dollar companies spent the most: Schwan and ConAgra, which each had large contracts for pizzas and fries used in school lunches.
Before the U.S. Department of Agriculture (USDA) could make any recommendations, Congress ensured that the push for healthier lunches did not hurt the manufacturers of unhealthy foods. Congress passed an agriculture appropriations bill that would deny the USDA funding to enforce any policies that prevented the potatoes in french fries or the tomato paste in pizza from counting as nutritional elements.
The press again enjoyed declaring that Congress had classified pizza as a vegetable. Cynics shrugged at yet another example of the government prioritizing the bottom line of businesses that manufacture sugary and salty processed foods over public health.
Yet the one-sided nature of the food industry’s lobbying is puzzling. Where were the broccoli, spinach, and carrot lobbies? Why didn’t a member of Congress take to the floor with a set of talking points provided by the leafy green vegetable lobby? Why can’t American farmers, who enjoy huge government subsidies, stand up to the processed food lobby?
Part of the answer lies in the economics of the food industry: the profit margins and scale of processed food makers give them a heft that growers of healthy foods can’t match.
But it is also because “Big Ag” is not in the healthy food business. American farms with lobbying power don’t grow brussels sprouts; they grow grains used to make the high fructose corn syrup in Coke, the starches in processed foods, and the oil in deep fryers.
This is somewhat inevitable, but it is also a self-inflicted wound: the result of misguided government policy that subsidizes Big Macs and Big Gulps.
The Poor Margins of Broccoli Farmers
The words “food lobby” have become synonymous with unhealthy food.
In 2015, according to the Center for Responsive Politics, processed food manufacturers spent $32 million on lobbying while the fruit and vegetable industry spent a mere $3.7 million. Moreover, top fruit and vegetable contributors include the National Potato Council, which protects potato farmers’ interests in french fries, and a company that grows tomatoes for fast food chains.
To understand why the food lobby is dominated by companies pushing unhealthy foods, a good place to start is the huge imbalance between the amount of fruits and vegetables we should eat and the relative size of the fruits and vegetables market.
According to nutritional guidelines published by the USDA and the Harvard School of Public Health, fruits and vegetables should make up 50% of a healthy diet. But the financial value of the fruit and vegetable market is nowhere near 50% of the food industry. In 2015, American farmers earned under $50 billion in revenue from fruits and vegetables. In contrast, processed food manufacturers like ConAgra, General Mills, and Kellogg each make around $15 billion in yearly revenue.
The meat and carb heavy American diet partially explains these disparities. The Department of Agriculture estimates that Americans eat roughly 50% fewer fruits and vegetables and over 20% more grains and meat than recommended by its nutrition guidelines.
But it is the economics of the food industry that really explain why the food lobby pushes unhealthy fare.
Processed foods have high profit margins that fund advertising campaigns and lobbying budgets. The importance of branding also leads to consolidation that supports special interest lobbying.
by Alex Mayyasi, Priceonomics | Read more:
Image: Till Krech
The American government first proposed that an unhealthy food—if it contains trace amounts of a healthy ingredient—could count as a vegetable in 1981. Looking for ways to cut the school lunch budget, the Reagan Administration suggested that cafeterias include ingredients in condiments like pickle relish and ketchup toward nutritional requirements.
This was not good politics. Democrats and the press had a field day saying that Reagan had just classified ketchup as a vegetable. “This is one of the most ridiculous regulations I ever heard of,” Republican Senator John Heinz, owner of Heinz, told the press, “and I suppose I need not add that I know something about ketchup and relish."The Reagan Administration dropped the proposal, but it soon became law anyway. When the Obama Administration directed the Department of Agriculture to revise school lunch policies in 2011, experts took aim at the rule that allowed the tiny amount of tomato paste in pizza sauce to count toward the vegetable requirements of each meal.
Any changes made by the Department of Agriculture could jeopardize huge contracts for companies that supply food for school children’s lunches, so the food industry responded with a $5.6 million lobbying campaign. According to Margo Wootan, director of the Center for Science in the Public Interest, two multibillion dollar companies spent the most: Schwan and ConAgra, which each had large contracts for pizzas and fries used in school lunches.
Before the U.S. Department of Agriculture (USDA) could make any recommendations, Congress ensured that the push for healthier lunches did not hurt the manufacturers of unhealthy foods. Congress passed an agriculture appropriations bill that would deny the USDA funding to enforce any policies that prevented the potatoes in french fries or the tomato paste in pizza from counting as nutritional elements.
The press again enjoyed declaring that Congress had classified pizza as a vegetable. Cynics shrugged at yet another example of the government prioritizing the bottom line of businesses that manufacture sugary and salty processed foods over public health.
Yet the one-sided nature of the food industry’s lobbying is puzzling. Where were the broccoli, spinach, and carrot lobbies? Why didn’t a member of Congress take to the floor with a set of talking points provided by the leafy green vegetable lobby? Why can’t American farmers, who enjoy huge government subsidies, stand up to the processed food lobby?
Part of the answer lies in the economics of the food industry: the profit margins and scale of processed food makers give them a heft that growers of healthy foods can’t match.
But it is also because “Big Ag” is not in the healthy food business. American farms with lobbying power don’t grow Brussels sprouts; they grow grains used to make the high-fructose corn syrup in Coke, the starches in processed foods, and the oil in deep fryers.
This is somewhat inevitable, but it is also a self-inflicted wound: the result of misguided government policy that subsidizes Big Macs and Big Gulps.
The Poor Margins of Broccoli Farmers
The words “food lobby” have become synonymous with unhealthy food.
In 2015, according to the Center for Responsive Politics, processed food manufacturers spent $32 million on lobbying while the fruit and vegetable industry spent a mere $3.7 million. Moreover, top fruit and vegetable contributors include the National Potato Council, which protects potato farmers’ interests in french fries, and a company that grows tomatoes for fast food chains.
To understand why the food lobby is dominated by companies pushing unhealthy foods, a good place to start is the huge imbalance between the amount of fruits and vegetables we should eat and the relative size of the fruits and vegetables market.
According to nutritional guidelines published by the USDA and the Harvard School of Public Health, fruits and vegetables should make up 50% of a healthy diet. But the financial value of the fruit and vegetable market is nowhere near 50% of the food industry. In 2015, American farmers earned under $50 billion in revenue from fruits and vegetables. In contrast, individual processed food manufacturers like ConAgra, General Mills, and Kellogg each make around $15 billion in yearly revenue.
The meat- and carb-heavy American diet partially explains these disparities. The Department of Agriculture estimates that Americans eat roughly 50% fewer fruits and vegetables and over 20% more grains and meat than its nutrition guidelines recommend.
But it is the economics of the food industry that really explain why the food lobby pushes unhealthy fare.
Processed foods have high profit margins that fund advertising campaigns and lobbying budgets. The importance of branding also leads to consolidation that supports special interest lobbying.
by Alex Mayyasi, Priceonomics | Read more:
Image: Till Krech