Monday, February 1, 2016
Uninstalling Facebook App Saves Up to 20% of Android Battery Life
Facebook does not have the greatest track record with its Android app. Users have long complained about performance issues and excessive battery drain, and last year Facebook’s chief product officer, Chris Cox, took the unusual step of making his staff ditch their iPhones and move to Android until they sorted out the issues.
But the problems have remained, and recently they led the Android blogger Russell Holly to dump the app, starting a chain reaction which revealed something rather interesting about the app’s performance. Prompted by Holly’s revelation that life on Android was better without Facebook’s app, Reddit user pbrandes_eth tested the app’s impact on the performance of an LG G4.
They found that when the Facebook and Facebook Messenger apps were uninstalled, other apps on the smartphone launched 15% faster. They tested 15 separate apps and documented the findings, which led other Reddit users to test other devices. They found similar results when testing app-loading performance.
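If you want to run a similar comparison yourself, the sketch below shows one way to time cold app launches over adb and average the results. It is only a rough illustration, assuming a connected Android device with USB debugging enabled and adb on your PATH; the package/activity names in APPS are placeholders, not the 15 apps used in the Reddit tests.

```python
import re
import subprocess
import time

# Placeholder app components; replace these with the apps you want to measure.
APPS = [
    "com.android.chrome/com.google.android.apps.chrome.Main",
]

def adb(*args):
    """Run an adb command and return its stdout as text."""
    return subprocess.run(["adb", *args], capture_output=True, text=True, check=True).stdout

def cold_launch_ms(component):
    """Force-stop the app, relaunch it, and parse the TotalTime the activity manager reports."""
    package = component.split("/")[0]
    adb("shell", "am", "force-stop", package)  # ensure a cold start
    time.sleep(2)
    out = adb("shell", "am", "start", "-W", "-n", component)
    match = re.search(r"TotalTime:\s*(\d+)", out)
    return int(match.group(1)) if match else None

if __name__ == "__main__":
    for component in APPS:
        samples = [t for t in (cold_launch_ms(component) for _ in range(5)) if t is not None]
        if samples:
            print(f"{component}: {sum(samples) / len(samples):.0f} ms average over {len(samples)} launches")
```

Run it once with the Facebook apps installed and again after uninstalling them; comparing the averages is essentially what the Reddit tests did.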
After reading Holly’s piece, I had also decided to explore other options for accessing Facebook, to see if, rather than app loading, I could improve my smartphone’s battery life.
I left the Facebook Messenger app installed, but swapped the Facebook app for an app called Metal, which acts as a wrapper for Facebook’s mobile site. Over the course of a day my Huawei Nexus 6P had 20% more battery left, and that held on average for every day of the week I tried it.
In Metal I was using the same notifications and accessing the same features as I had just a week earlier through the Facebook app, so why the difference?
Despite the Facebook app not showing up as using a significant amount of power within Android’s built-in battery statistics, it was evidently consuming more power in the background than it needed to.
It turned out that other Android services, including Android System and Android OS, showed reduced battery consumption once the Facebook app was uninstalled. Those services act as a buffer between many apps and the outside world when those apps run in the background. So while Facebook didn’t look like it was using that much power, its consumption was simply being attributed elsewhere in Android’s statistics.
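One rough way to see this attribution effect for yourself is to snapshot Android’s raw battery statistics over comparable periods with and without the app installed, then diff the dumps by hand. The sketch below only resets and saves the raw dumpsys output; it makes no attempt to parse the per-app buckets, and it assumes adb is available and that com.facebook.katana is the Facebook app’s package name.

```python
import subprocess
import time
from datetime import datetime

def adb(*args):
    """Run an adb command and return its stdout as text."""
    return subprocess.run(["adb", *args], capture_output=True, text=True, check=True).stdout

def snapshot_battery_stats(label, hours=4.0):
    """Reset the device's battery stats, wait while the phone is used normally, then save the raw dump."""
    adb("shell", "dumpsys", "batterystats", "--reset")
    time.sleep(hours * 3600)
    dump = adb("shell", "dumpsys", "batterystats")
    path = f"batterystats_{label}_{datetime.now():%Y%m%d_%H%M}.txt"
    with open(path, "w") as f:
        f.write(dump)
    print(f"Saved battery stats to {path}")

if __name__ == "__main__":
    # Run once with the Facebook app installed ("with_facebook") and once after removing it
    # ("without_facebook"), then diff the files, looking at the system-level entries as well
    # as the com.facebook.katana sections.
    snapshot_battery_stats("with_facebook")
```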
by Samuel Gibbs, The Guardian | Read more:
Image: opopododo/Flickr
How to Raise a Creative Child. Step One: Back Off
Child prodigies rarely become adult geniuses who change the world. We assume that they must lack the social and emotional skills to function in society. When you look at the evidence, though, this explanation doesn’t suffice: Less than a quarter of gifted children suffer from social and emotional problems. A vast majority are well adjusted — as winning at a cocktail party as in the spelling bee.
What holds them back is that they don’t learn to be original. They strive to earn the approval of their parents and the admiration of their teachers. But as they perform in Carnegie Hall and become chess champions, something unexpected happens: Practice makes perfect, but it doesn’t make new.
The gifted learn to play magnificent Mozart melodies, but rarely compose their own original scores. They focus their energy on consuming existing scientific knowledge, not producing new insights. They conform to codified rules, rather than inventing their own. Research suggests that the most creative children are the least likely to become the teacher’s pet, and in response, many learn to keep their original ideas to themselves. In the language of the critic William Deresiewicz, they become the excellent sheep.
In adulthood, many prodigies become experts in their fields and leaders in their organizations. Yet “only a fraction of gifted children eventually become revolutionary adult creators,” laments the psychologist Ellen Winner. “Those who do must make a painful transition” to an adult who “ultimately remakes a domain.”
Most prodigies never make that leap. They apply their extraordinary abilities by shining in their jobs without making waves. They become doctors who heal their patients without fighting to fix the broken medical system or lawyers who defend clients on unfair charges but do not try to transform the laws themselves.
So what does it take to raise a creative child? One study compared the families of children who were rated among the most creative 5 percent in their school system with those who were not unusually creative. The parents of ordinary children had an average of six rules, like specific schedules for homework and bedtime. Parents of highly creative children had an average of fewer than one rule.
Creativity may be hard to nurture, but it’s easy to thwart. By limiting rules, parents encouraged their children to think for themselves. They tended to “place emphasis on moral values, rather than on specific rules,” the Harvard psychologist Teresa Amabile reports.
Even then, though, parents didn’t shove their values down their children’s throats. When psychologists compared America’s most creative architects with a group of highly skilled but unoriginal peers, there was something unique about the parents of the creative architects: “Emphasis was placed on the development of one’s own ethical code.”
Yes, parents encouraged their children to pursue excellence and success — but they also encouraged them to find “joy in work.” Their children had freedom to sort out their own values and discover their own interests. And that set them up to flourish as creative adults.
When the psychologist Benjamin Bloom led a study of the early roots of world-class musicians, artists, athletes and scientists, he learned that their parents didn’t dream of raising superstar kids. They weren’t drill sergeants or slave drivers. They responded to the intrinsic motivation of their children. When their children showed interest and enthusiasm in a skill, the parents supported them.
by Adam Grant, NY Times | Read more:
Image: Brian Chippendale
Sunday, January 31, 2016
The Refragmentation
One advantage of being old is that you can see change happen in your lifetime. A lot of the change I've seen is fragmentation. US politics is much more polarized than it used to be. Culturally we have ever less common ground. The creative class flocks to a handful of happy cities, abandoning the rest. And increasing economic inequality means the spread between rich and poor is growing too. I'd like to propose a hypothesis: that all these trends are instances of the same phenomenon. And moreover, that the cause is not some force that's pulling us apart, but rather the erosion of forces that had been pushing us together.
Worse still, for those who worry about these trends, the forces that were pushing us together were an anomaly, a one-time combination of circumstances that's unlikely to be repeated—and indeed, that we would not want to repeat.
The two forces were war (above all World War II), and the rise of large corporations.
The effects of World War II were both economic and social. Economically, it decreased variation in income. Like all modern armed forces, America's were socialist economically. From each according to his ability, to each according to his need. More or less. Higher ranking members of the military got more (as higher ranking members of socialist societies always do), but what they got was fixed according to their rank. And the flattening effect wasn't limited to those under arms, because the US economy was conscripted too. Between 1942 and 1945 all wages were set by the National War Labor Board. Like the military, they defaulted to flatness. And this national standardization of wages was so pervasive that its effects could still be seen years after the war ended.
Business owners weren't supposed to be making money either. FDR said "not a single war millionaire" would be permitted. To ensure that, any increase in a company's profits over prewar levels was taxed at 85%. And when what was left after corporate taxes reached individuals, it was taxed again at a marginal rate of 93%.
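To make the squeeze concrete, here is a back-of-the-envelope calculation using the two rates quoted above. It is only an illustration: it assumes the 85% excess-profits tax and the 93% top marginal personal rate applied in sequence to one extra dollar of wartime profit, and it ignores deductions, exemptions, and how profits were actually paid out.

```python
# Rough illustration using the rates quoted above; deliberately simplified.
excess_profits_tax = 0.85  # tax on any increase in profits over prewar levels
top_marginal_rate = 0.93   # top marginal rate on what reached individuals

extra_profit = 1.00
after_corporate = extra_profit * (1 - excess_profits_tax)   # $0.15 survives the excess-profits tax
after_personal = after_corporate * (1 - top_marginal_rate)  # about $0.0105 survives the personal tax

print(f"Kept after the corporate tax: ${after_corporate:.4f}")
print(f"Kept after the personal tax:  ${after_personal:.4f}")  # roughly a penny on the dollar
```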
Socially too the war tended to decrease variation. Over 16 million men and women from all sorts of different backgrounds were brought together in a way of life that was literally uniform. Service rates for men born in the early 1920s approached 80%. And working toward a common goal, often under stress, brought them still closer together.
Though strictly speaking World War II lasted less than 4 years for the US, its effects lasted longer. Wars make central governments more powerful, and World War II was an extreme case of this. In the US, as in all the other Allied countries, the federal government was slow to give up the new powers it had acquired. Indeed, in some respects the war didn't end in 1945; the enemy just switched to the Soviet Union. In tax rates, federal power, defense spending, conscription, and nationalism the decades after the war looked more like wartime than prewar peacetime. And the social effects lasted too. The kid pulled into the army from behind a mule team in West Virginia didn't simply go back to the farm afterward. Something else was waiting for him, something that looked a lot like the army.
If total war was the big political story of the 20th century, the big economic story was the rise of a new kind of company. And this too tended to produce both social and economic cohesion.
The 20th century was the century of the big, national corporation. General Electric, General Foods, General Motors. Developments in finance, communications, transportation, and manufacturing enabled a new type of company whose goal was above all scale. Version 1 of this world was low-res: a Duplo world of a few giant companies dominating each big market.
The late 19th and early 20th centuries had been a time of consolidation, led especially by J. P. Morgan. Thousands of companies run by their founders were merged into a couple hundred giant ones run by professional managers. Economies of scale ruled the day. It seemed to people at the time that this was the final state of things. John D. Rockefeller said in 1880:
“The day of combination is here to stay. Individualism has gone, never to return.”

He turned out to be mistaken, but he seemed right for the next hundred years.

The consolidation that began in the late 19th century continued for most of the 20th. By the end of World War II, as Michael Lind writes, "the major sectors of the economy were either organized as government-backed cartels or dominated by a few oligopolistic corporations."
For consumers this new world meant the same choices everywhere, but only a few of them. When I grew up there were only 2 or 3 of most things, and since they were all aiming at the middle of the market there wasn't much to differentiate them.
One of the most important instances of this phenomenon was in TV. Here there were 3 choices: NBC, CBS, and ABC. Plus public TV for eggheads and communists. The programs the 3 networks offered were indistinguishable. In fact, here there was a triple pressure toward the center. If one show did try something daring, local affiliates in conservative markets would make them stop. Plus since TVs were expensive whole families watched the same shows together, so they had to be suitable for everyone.
And not only did everyone get the same thing, they got it at the same time. It's difficult to imagine now, but every night tens of millions of families would sit down together in front of their TV set watching the same show, at the same time, as their next door neighbors. What happens now with the Super Bowl used to happen every night. We were literally in sync. (...)
It wasn't just as consumers that the big companies made us similar. They did as employers too. Within companies there were powerful forces pushing people toward a single model of how to look and act. IBM was particularly notorious for this, but they were only a little more extreme than other big companies. And the models of how to look and act varied little between companies. Meaning everyone within this world was expected to seem more or less the same. And not just those in the corporate world, but also everyone who aspired to it—which in the middle of the 20th century meant most people who weren't already in it. For most of the 20th century, working-class people tried hard to look middle class. You can see it in old photos. Few adults aspired to look dangerous in 1950.
But the rise of national corporations didn't just compress us culturally. It compressed us economically too, and on both ends.
by Paul Graham | Read more:
The Church Of The Gridiron
[ed. See also: The Collision Sport on Trial]
American football officially began in the years following the Civil War. A crude blend of soccer and rugby, the sport was brutal, with a fast-and-loose set of rules that gave it the appearance of a gang fight. In 1905, 19 players died, and another 137 were injured; the Chicago Tribune called the season a “death harvest.” President Theodore Roosevelt finally intervened, calling a group of influential sportsmen to the White House in order to help transform the game.
Reforms followed, such as legalizing the forward pass and penalizing unsportsmanlike conduct. The sport became safer, and by midcentury it had entered a golden age of players like quarterback Johnny Unitas and fullback Jim Brown. Games were televised, and in the late sixties the Super Bowl was created.
Today pro football is the unparalleled giant of the sports world. In 2014 forty-five of the fifty top-rated television broadcasts were football games. More Americans follow football than follow Major League Baseball, NBA basketball, and NASCAR racing combined. The National Football League (NFL) earns nearly $10 billion a year in profits, with an expressed goal of $25 billion. During the season, Americans spend more time watching football than going to religious services. Pro football has become the spectacle that unites people in this country more than any other.
“But it has a dark side,” says author Steve Almond.
For four decades Almond was a consummate fan, soaking up all that football offered. Then, in 2014, he did the unthinkable: he stopped. No more games. No more listening to sports talk radio. He would become football fandom’s conscientious objector.
In Against Football: One Fan’s Reluctant Manifesto he writes, “Our allegiance to football legitimizes and even fosters within us a tolerance for violence, greed, racism, and homophobia.” A New York Times bestseller, the book is an eloquent examination of America’s most popular sport — in particular, the aspects many fans tend to ignore: its astounding injury rate, its exaggeration of gender stereotypes, and its inherent violence.
Cook: What role does football play in the U.S. today?
Almond: It’s the largest shared narrative in the country: emotionally, psychologically, and maybe even financially. My sense is that more Americans — male and female, gay and straight, of all races and classes — are deeply invested in football than in any other single activity. For forty years I was a member in good standing of the Church of the Gridiron. The game can be brutal, but it’s also complex and satisfying to watch.
When Ernest Hemingway wanted to understand Spanish culture, he went to see the bullfights. Football is our bullfight: an expression of our cultural values and a profound statement about our national consciousness. It’s important to understand what it does for us and to us, what its pleasures are and its moral costs. But football means so much to so many Americans that we’re terrified of interrogating it. (...)
Football is a powerful refuge. When we watch, we get so absorbed that we forget our troubles. It’s existential relief. You are a part of some exalted event. I didn’t watch the 2014 Super Bowl, but 111 million people tuned in. We are desperate to find something that will connect us. Football is a quick and easy solution.
Yet, at a certain point, you have to step back and ask: Why is this the church I worship in? What is the nature of this religion we have created?
As a fan I did feel a connection to the people around me, especially if my team was winning, but I also felt lost inside. Watching football became a lonely experience, like feeding an addiction. It wasn’t a way for me to engage with my problems. It wasn’t enlarging my empathy or my moral imagination. It wasn’t satisfying a deeper spiritual need.
Having said that, I can’t say to other fans that the holy feeling they have when they walk into their team’s stadium isn’t real. It is. My beef is that those feelings — our devotion to athletic heroism, our sentimental loyalty to the teams we rooted for growing up, and that our dads rooted for — are being mercilessly exploited and turned into an engine of greed. Not only that, but we’re getting so sucked into the fan mind-set that we start to see everything as a competition. Think about it. We have television programs that have turned singing, dancing, cooking, traveling, and even falling in love into competitions. It’s as if the only way a person in our culture can get what he or she wants is for another person to “lose.” This mind-set is ultimately martial. It’s what novelist Cormac McCarthy is referring to when he writes, in Blood Meridian, about warfare as a natural extension of sports. What ultimately matters is whether your team — and therefore you — wins. A lot of people these days feel that way about politics and religion: it’s all about vanquishing the socialist or the heathen or whatever. Football may not be the driving force behind this cultural mind-set, but it’s the purest expression of it. (...)
Cook: Baseball used to be the “national pastime.” What happened?
Almond: Late-model capitalism. We went from an agricultural society to an industrial one. Baseball is a pastoral game. Football is more in tune with the modern American experience. The typical American worker today is trapped in an office with elaborate rules of conduct and a lot of technical jargon. You’ve got “units” of employees working on group projects and multiple levels of management. Jobs are increasingly specialized. That’s how football operates, too. There’s a giant playbook with dozens of contingencies for any given play, strategy sessions, tons of jargon, a hierarchy of coaches — all things office drones recognize from their jobs.
But here’s what makes football so alluring: When a play works, it’s not just that you got the third-quarter earnings report done. It’s Barry Sanders making a magnificent spin move to avoid a tackle and carrying the ball sixty yards to glory. That experience is ecstatic and unlike anything in our everyday lives.
Football is both a reflection of complex, brutal, and oppressive industrialization and, at the same time, a liberation from it; a return to the intuitive childhood pleasures of play.
by David Cook, Sun Magazine | Read more:
Image: Marshawn Lynch
Saturday, January 30, 2016
Fake Online Locksmiths May Be Out to Pick Your Pocket
Maybe this has happened to you.
Locked out of your car or home, you pull out your phone and type “locksmith” into Google. Up pops a list of names, the most promising of which appear beneath the paid ads, in space reserved for local service companies.
You might assume that the search engine’s algorithm has instantly sifted through the possibilities and presented those that are near you and that have earned good customer reviews. Some listings will certainly fit that description. But odds are good that your results include locksmiths that are not locksmiths at all.
They are call centers — often out of state, sometimes in a different country — that use a high-tech ruse to trick Google into presenting them as physical stores in your neighborhood. These operations, known as lead generators, or lead gens for short, keep a group of poorly trained subcontractors on call. After your details are forwarded, usually via text, one of those subcontractors jumps in a car and heads to your vehicle or home. That is when the trouble starts.
The goal of lead gens is to wrest as much money as possible from every customer, according to lawsuits. The typical approach is for a phone representative to offer an estimate in the range of $35 to $90. On site, the subcontractor demands three or four times that sum, often claiming that the work was more complicated than expected. Most consumers simply blanch and pay up, in part because they are eager to get into their homes or cars.
“It was very late, and it was very cold,” said Anna Pietro, recalling an evening last January when she called Allen Emergency, the nearest locksmith to her home in a Dallas suburb, according to a Google Maps search on her iPhone. “This guy shows up and says he needs to drill my door lock, which will cost $350, about seven times the estimate I’d been given on the phone. And he demanded cash.”
The phone number at Allen Emergency is now disconnected.
It is a classic bait-and-switch. And it has quietly become an epidemic in America, among the fastest-growing sources of consumer complaints, according to the Consumer Federation of America.
Lead gens have their deepest roots in locksmithing, but the model has migrated to an array of services, including garage door repair, carpet cleaning, moving and home security. Basically, they surface in any business where consumers need someone in the vicinity to swing by and clean, fix, relocate or install something.
“I’m not exaggerating when I say these guys have people in every large and midsize city in the United States,” said John Ware, an assistant United States attorney in St. Louis, speaking of lead-gen locksmiths. (...)
The Ghosts on Google
The flaws in the Google machine are well known to Avi, an Israeli-born locksmith, who asked that his last name be omitted from this story, citing threats by competitors. (“One told me there is a bounty on my head,” he said.) Avi has been at war with lead-gen operators for eight years. It’s like guerrilla combat, because the companies are forever expanding and always innovating, he said.
To demonstrate, he searched for “locksmith” in Google one afternoon in November, as we sat in his living room in a suburb of Phoenix. One of the companies in the results was called Locksmith Force.
The company’s website at the time listed six physical locations, including a pinkish, two-story building at 10275 West Santa Fe Drive, Sun City, Ariz. When Avi looked up that address in Google Maps, he saw in the bottom left-hand corner a street-view image of the same pinkish building at the end of a retail strip.
There seemed no reason to doubt that a pinkish building stood at 10275 West Santa Fe Drive.
Avi was skeptical. “That’s about a five-minute drive from here,” he said.
We jumped in his car. It wasn’t long before the voice in his GPS announced, “You have arrived.”
“That’s the address,” he said. He was pointing to a low white-brick wall that ran beside a highway. There was no pinkish building and no stores. Other than a large, featureless warehouse on the other side of the street, there was little in sight.
“This is what I’m dealing with,” Avi said. “Ghosts.”
These ghosts don’t just game search results. They dominate AdWords, Google’s paid advertising platform. Nearly all of those ads promise “$19 service,” or thereabouts, a suspiciously low sum, given that “locksmith”-related ads cost about $30 or so per click, depending on the area.
(Yes, Google makes money every time a person clicks on an AdWords ad, and yes, in the case of locksmiths, the cost can be $30 for every click — even more in some cities. If you’ve ever wondered how Google gives away services and is still among the most profitable companies in the world, wonder no more. People clicking AdWords generated $60 billion last year.)
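A quick back-of-the-envelope calculation shows why a genuine “$19 service” cannot be what those ads are really selling. The advertised price and the rough cost per click come from the figures above; the conversion rate is an invented assumption, included only to make the arithmetic concrete.

```python
# Lead-gen economics sketch: two figures from the article plus one explicitly assumed number.
advertised_price = 19.0   # the "$19 service" promised in the ad
cost_per_click = 30.0     # approximate AdWords cost per click for locksmith-related terms
conversion_rate = 0.10    # ASSUMPTION: fraction of clicks that turn into booked jobs (illustrative only)

cost_per_job = cost_per_click / conversion_rate   # ad spend needed to land one customer
shortfall = cost_per_job - advertised_price       # what the advertised price fails to cover

print(f"Ad spend per booked job: ${cost_per_job:.0f}")
print(f"Shortfall at the advertised price: ${shortfall:.0f}")
```

On those assumptions the operator is roughly $280 in the hole before anyone touches a lock, which is consistent with the on-site charges of three or four times the estimate described above.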
by David Segal, NY Times | Read more:
Image: Caitlin O’Hara
The Real Legacy of Steve Jobs
Partway through Alex Gibney’s earnest documentary Steve Jobs: The Man in the Machine, an early Apple Computer collaborator named Daniel Kottke asks the question that appears to animate Danny Boyle’s recent film about Jobs: “How much of an asshole do you have to be to be successful?” Boyle’s Steve Jobs is a factious, melodramatic fugue that cycles through the themes and variations of Jobs’s life in three acts—the theatrical, stage-managed product launches of the Macintosh computer (1984), the NeXT computer (1988), and the iMac computer (1998). For Boyle (and his screenwriter Aaron Sorkin) the answer appears to be “a really, really big one.”
Gibney, for his part, has assembled a chorus of former friends, lovers, and employees who back up that assessment, and he is perplexed about it. By the time Jobs died in 2011, his cruelty, arrogance, mercurial temper, bullying, and other childish behavior were well known. So, too, were the inhumane conditions in Apple’s production facilities in China—where there had been dozens of suicides—as well as Jobs’s halfhearted response to them. Apple’s various tax avoidance schemes were also widely known. So why, Gibney wonders as his film opens—with thousands of people all over the world leaving flowers and notes “to Steve” outside Apple Stores the day he died, and fans recording weepy, impassioned webcam eulogies, and mourners holding up images of flickering candles on their iPads as they congregate around makeshift shrines—did Jobs’s death engender such planetary regret?
The simple answer is voiced by one of the bereaved, a young boy who looks to be nine or ten, swiveling back and forth in a desk chair in front of his computer: “The thing I’m using now, an iMac, he made,” the boy says. “He made the iMac. He made the Macbook. He made the Macbook Pro. He made the Macbook Air. He made the iPhone. He made the iPod. He’s made the iPod Touch. He’s made everything.”
Yet if the making of popular consumer goods was driving this outpouring of grief, then why hadn’t it happened before? Why didn’t people sob in the streets when George Eastman or Thomas Edison or Alexander Graham Bell died—especially since these men, unlike Steve Jobs, actually invented the cameras, electric lights, and telephones that became the ubiquitous and essential artifacts of modern life?* The difference, suggests the MIT sociologist Sherry Turkle, is that people’s feelings about Steve Jobs had less to do with the man, and less to do with the products themselves, and everything to do with the relationship between those products and their owners, a relationship so immediate and elemental that it elided the boundaries between them. “Jobs was making the computer an extension of yourself,” Turkle tells Gibney. “It wasn’t just for you, it was you.”
In Gibney’s film, Andy Grignon, the iPhone senior manager from 2005 to 2007, observes that
Apple is a business. And we’ve somehow attached this emotion [of love, devotion, and a sense of higher purpose] to a business which is just there to make money for its shareholders. That’s all it is, nothing more. Creating that association is probably one of Steve’s greatest accomplishments.

Jobs was a consummate showman. It’s no accident that Sorkin tells his story of Jobs through product launches. These were theatrical events—performances—where Jobs made sure to put himself on display as much as he did whatever new thing he was touting. “Steve was P.T. Barnum incarnate,” says Lee Clow, the advertising executive with whom he collaborated closely. “He loved the ta-da! He was always like, ‘I want you to see the Smallest Man in the World!’ He loved pulling the black velvet cloth off a new product, everything about the showbiz, the marketing, the communications.”

People are drawn to magic. Steve Jobs knew this, and it was one reason why he insisted on secrecy until the moment of unveiling. But Jobs’s obsession with secrecy went beyond his desire to preserve the “a-ha!” moment. Is Steve Jobs “the most successful paranoid in business history?,” The Economist asked in 2005, a year that saw Apple sue, among others, a Harvard freshman running a site on the Internet that traded in gossip about Apple and other products that might be in the pipeline. Gibney tells the story of Jason Chen, a Silicon Valley journalist whose home was raided in 2010 by the California Rapid Enforcement Allied Computer Team (REACT), a multi-agency SWAT force, after he published details of an iPhone model then in development. A prototype of the phone had been left in a bar by an Apple employee and then sold to Chen’s employer, the website Gizmodo, for $5,000. Chen had returned the phone to Apple four days before REACT broke down his door and seized computers and other property. Though REACT is a public entity, Apple sits on its steering committee, leaving many wondering if law enforcement was doing Apple’s bidding.
Whether to protect trade secrets, or sustain the magic, or both, Jobs was adamant that Apple products be closed systems that discouraged or prevented tinkering. This was the rationale behind Apple’s lawsuit against people who “jail-broke” their devices in order to use non-Apple, third-party apps—a lawsuit Apple eventually lost. And it can be seen in Jobs’s insistence, from the beginning, on making computers that integrated both software and hardware—unlike, for example, Microsoft, whose software can be found on any number of different kinds of PCs; this has kept Apple computer prices high and clones at bay. An early exchange in Boyle’s movie has Steve Wozniak arguing for a personal computer that could be altered by its owner, against Steve Jobs, who believed passionately in end-to-end control. “Computers aren’t paintings,” Wozniak says, but that is exactly what Jobs considered them to be. The inside of the original Macintosh bears the signatures of its creators.
The magic Jobs was selling went beyond the products his company made: it infused the story he told about himself. Even as a multimillionaire, and then a billionaire, even after selling out friends and collaborators, even after being caught back-dating stock options, even after sending most of Apple’s cash offshore to avoid paying taxes, Jobs sold himself as an outsider, a principled rebel who had taken a stand against the dominant (what he saw as mindless, crass, imperfect) culture. You could, too, he suggested, if you allied yourself with Apple. It was this sleight of hand that allowed consumers to believe that to buy a consumer good was to do good—that it was a way to change the world. “The myths surrounding Apple is for a company that makes phones,” the journalist Joe Nocera tells Gibney. “A phone is not a mythical device. It makes you wonder less about Apple than about us.”

The simple answer is voiced by one of the bereaved, a young boy who looks to be nine or ten, swiveling back and forth in a desk chair in front of his computer: “The thing I’m using now, an iMac, he made,” the boy says. “He made the iMac. He made the Macbook. He made the Macbook Pro. He made the Macbook Air. He made the iPhone. He made the iPod. He’s made the iPod Touch. He’s made everything.”
Yet if the making of popular consumer goods was driving this outpouring of grief, then why hadn’t it happened before? Why didn’t people sob in the streets when George Eastman or Thomas Edison or Alexander Graham Bell died—especially since these men, unlike Steve Jobs, actually invented the cameras, electric lights, and telephones that became the ubiquitous and essential artifacts of modern life?* The difference, suggests the MIT sociologist Sherry Turkle, is that people’s feelings about Steve Jobs had less to do with the man, and less to do with the products themselves, and everything to do with the relationship between those products and their owners, a relationship so immediate and elemental that it elided the boundaries between them. “Jobs was making the computer an extension of yourself,” Turkle tells Gibney. “It wasn’t just for you, it was you.”
In Gibney’s film, Andy Grignon, the iPhone senior manager from 2005 to 2007, observes that
Apple is a business. And we’ve somehow attached this emotion [of love, devotion, and a sense of higher purpose] to a business which is just there to make money for its shareholders. That’s all it is, nothing more. Creating that association is probably one of Steve’s greatest accomplishments.Jobs was a consummate showman. It’s no accident that Sorkin tells his story of Jobs through product launches. These were theatrical events—performances—where Jobs made sure to put himself on display as much as he did whatever new thing he was touting. “Steve was P.T. Barnum incarnate,” says Lee Clow, the advertising executive with whom he collaborated closely. “He loved the ta-da! He was always like, ‘I want you to see the Smallest Man in the World!’ He loved pulling the black velvet cloth off a new product, everything about the showbiz, the marketing, the communications.”
People are drawn to magic. Steve Jobs knew this, and it was one reason why he insisted on secrecy until the moment of unveiling. But Jobs’s obsession with secrecy went beyond his desire to preserve the “a-ha!” moment. Is Steve Jobs “the most successful paranoid in business history?,” The Economist asked in 2005, a year that saw Apple sue, among others, a Harvard freshman running a site on the Internet that traded in gossip about Apple and other products that might be in the pipeline. Gibney tells the story of Jason Chen, a Silicon Valley journalist whose home was raided in 2010 by the California Rapid Enforcement Allied Computer Team (REACT), a multi-agency SWAT force, after he published details of an iPhone model then in development. A prototype of the phone had been left in a bar by an Apple employee and then sold to Chen’s employer, the website Gizmodo, for $5,000. Chen had returned the phone to Apple four days before REACT broke down his door and seized computers and other property. Though REACT is a public entity, Apple sits on its steering committee, leaving many wondering if law enforcement was doing Apple’s bidding.
Whether to protect trade secrets, or sustain the magic, or both, Jobs was adamant that Apple products be closed systems that discouraged or prevented tinkering. This was the rationale behind Apple’s lawsuit against people who “jail-broke” their devices in order to use non-Apple, third-party apps—a lawsuit Apple eventually lost. And it can be seen in Jobs’s insistence, from the beginning, on making computers that integrated both software and hardware—unlike, for example, Microsoft, whose software can be found on any number of different kinds of PCs; this has kept Apple computer prices high and clones at bay. An early exchange in Boyle’s movie has Steve Wozniak arguing for a personal computer that could be altered by its owner, against Steve Jobs, who believed passionately in end-to-end control. “Computers aren’t paintings,” Wozniak says, but that is exactly what Jobs considered them to be. The inside of the original Macintosh bears the signatures of its creators.
The magic Jobs was selling went beyond the products his company made: it infused the story he told about himself. Even as a multimillionaire, and then a billionaire, even after selling out friends and collaborators, even after being caught back-dating stock options, even after sending most of Apple’s cash offshore to avoid paying taxes, Jobs sold himself as an outsider, a principled rebel who had taken a stand against the dominant (what he saw as mindless, crass, imperfect) culture. You could, too, he suggested, if you allied yourself with Apple. It was this sleight of hand that allowed consumers to believe that to buy a consumer good was to do good—that it was a way to change the world. “The myths surrounding Apple is for a company that makes phones,” the journalist Joe Nocera tells Gibney. “A phone is not a mythical device. It makes you wonder less about Apple than about us.”
by Sue Halpern, NY Review of Books | Read more:
Image: Philippe Huguen/AFP/Getty Images
Friday, January 29, 2016
Why The World Is Obsessed With Midcentury Modern Design
Today, more than ever, the midcentury modern look is everywhere. DVRs are set to capture Mad Men's final season playing out on AMC. Flip through the April issue of Elle Décor, and you'll find that more than half of the featured homes prominently include midcentury furniture pieces. Turn on The Daily Show and you'll see the guests sitting in classic Knoll office chairs. If you dine in a contemporary restaurant tonight, there's a good chance you'll be seated in a chair that was designed in the 1950s—whether it is an Eames, Bertoia, Cherner, or Saarinen. A few years back, you could stamp your mail with an Eames postage stamp.
Meanwhile, type the words "midcentury" and "modern" into any furniture retailer's search pane, and you'll likely come up with dozens of pieces labeled with these design-world buzzwords—despite the fact that there is nothing "midcentury" about the items they describe. Over the past two decades, a term describing a specific period of design has become the marketing descriptor du jour.
"Midcentury modern" itself is a difficult term to define. It broadly describes architecture, furniture, and graphic design from the middle of the 20th century (roughly 1933 to 1965, though some would argue the period is specifically limited to 1947 to 1957). The timeframe is a modifier for the larger modernist movement, which has roots in the Industrial Revolution at the end of the 19th century and also in the post-World War I period.
Author Cara Greenberg coined the phrase "midcentury modern" as the title for her 1984 book, Midcentury Modern: Furniture of the 1950s. In 1983, Greenberg had written a piece for Metropolitan Home about 1950s furniture, and an editor at Crown urged her to write a book on the topic. As for the phrase "midcentury modern," Greenberg "just made that up as the book's title," she says. A New York Times review of the book acknowledged that Greenberg's tome hit on a trend. "Some love it and others simply can't stand it, but there is no denying that the 50's are back in vogue again. Cara Greenberg, the author of 'Mid-Century Modern: Furniture of the 1950's' ($30, Harmony Books) manages to convey the verve, imagination and the occasional pure zaniness of the period." The book was an immediate hit, selling more than 100,000 copies, and once "midcentury modern" entered the lexicon, the phrase was quickly adopted by both the design world and the mainstream.
The popularity of midcentury modern design today has roots at the time of Greenberg's book. Most of the designs of the midcentury had gone out of fashion by the late 60s, but in the early- to mid-eighties, interest in the period began to return. Within a decade, vintage midcentury designs were increasingly popular, and several events helped to boost midcentury modern's appeal from a niche group of design enthusiasts into the mainstream.
by Laura Fenton, Curbed | Read more:
Image: Herman Miller

"Midcentury modern" itself is a difficult term to define. It broadly describes architecture, furniture, and graphic design from the middle of the 20th century (roughly 1933 to 1965, though some would argue the period is specifically limited to 1947 to 1957). The timeframe is a modifier for the larger modernist movement, which has roots in the Industrial Revolution at the end of the 19th century and also in the post-World War I period.
Author Cara Greenberg coined the phrase "midcentury modern" as the title for her 1984 book, Midcentury Modern: Furniture of the 1950s. In 1983, Greenberg had written a piece for Metropolitan Home about 1950s furniture, and an editor at Crown urged her to write a book on the topic. As for the phrase "midcentury modern," Greenberg "just made that up as the book's title," she says. A New York Times review of the book acknowledged that Greenberg's tome hit on a trend. "Some love it and others simply can't stand it, but there is no denying that the 50's are back in vogue again. Cara Greenberg, the author of 'Mid- Century Modern: Furniture of the 1950's' ($30, Harmony Books) manages to convey the verve, imagination and the occasional pure zaniness of the period." The book was an immediate hit, selling more than 100,000 copies, and once "midcentury modern" entered the lexicon, the phrase was quickly adopted by both the design world and the mainstream.
The popularity of midcentury modern design today has roots at the time of Greenberg's book. Most of the designs of the midcentury had gone out of fashion by the late 60s, but in the early- to mid-eighties, interest in the period began to return. Within a decade, vintage midcentury designs were increasingly popular, and several events helped to boost midcentury modern's appeal from a niche group of design enthusiasts into the mainstream.
by Laura Fenton, Curbed | Read more:
Image: Herman Miller
Thursday, January 28, 2016
The Disposable Rocket
Inhabiting a male body is like having a bank account; as long as it’s healthy, you don’t think much about it. Compared to the female body, it is a low-maintenance proposition: a shower now and then, trim the fingernails every ten days, a haircut once a month. Oh yes, shaving—scraping or buzzing away at your face every morning. Byron, in Don Juan, thought the repeated nuisance of shaving balanced out the periodic agony, for females, of childbirth. Women are, his lines tell us,
Condemn’d to child-bed, as men for their sins
Have shaving too entail’d upon their chins,—
A daily plague, which in the aggregate
May average on the whole with parturition.

From the standpoint of reproduction, the male body is a delivery system, as the female is a mazy device for retention. Once the delivery is made, men feel a faint but distinct falling-off of interest. Yet against the enduring female heroics of birth and nurture should be set the male’s superhuman frenzy to deliver his goods: he vaults walls, skips sleep, risks wallet, health, and his political future all to ram home his seed into the gut of the chosen woman. The sense of the chase lives in him as the key to life. His body is, like a delivery rocket that falls away in space, a disposable means. Men put their bodies at risk to experience the release from gravity.

When my tenancy of a male body was fairly new—of six or so years’ duration—I used to jump and fall just for the joy of it. Falling—backwards, or downstairs—became a specialty of mine, an attention-getting stunt I was still practicing into my thirties, at suburban parties. Falling is, after all, a kind of flying, though of briefer duration than would be ideal. My impulse to hurl myself from high windows and the edges of cliffs belongs to my body, not my mind, which resists the siren call of the chasm with all its might; the interior struggle knocks the wind from my lungs and tightens my scrotum and gives any trip to Europe, with its Alps, castle parapets, and gargoyled cathedral lookouts, a flavor of nightmare. Falling, strangely, no longer figures in my dreams, as it often did when I was a boy and my subconscious was more honest with me. An airplane, that necessary evil, turns the earth into a map so quickly the brain turns aloof and calm; still, I marvel that there is no end of young men willing to become jet pilots.
Any accounting of male-female differences must include the male’s superior recklessness, a drive not, I think, toward death, as the darkest feminist cosmogonies would have it, but to test the limits, to see what the traffic will bear—a kind of mechanic’s curiosity. The number of men who do lasting damage to their young bodies is striking; war and car accidents aside, secondary-school sports, with the approval of parents and the encouragement of brutish coaches, take a fearful toll of skulls and knees. We were made for combat, back in the postsimian, East-African days, and the bumping, the whacking, the breathlessness, the pain-smothering adrenaline rush form a cumbersome and unfashionable bliss, but bliss nevertheless. Take your body to the edge, and see if it flies.
by John Updike, Brown University | Read more: (pdf)
Image: via:
Interview With Noam Chomsky: Is European Integration Unraveling?
Europe is in turmoil. The migration and refugee crisis is threatening to unravel the entire European integration project. Unwilling to absorb the waves of people fleeing their homes in the Middle East and North Africa, many European Union (EU) member states have begun imposing border controls.
But it is not only people from Syria and Iraq, as mainstream media narratives would suggest, who are trying to reach Europe these days. Refugees come from Pakistan and Afghanistan and from nations in sub-Saharan Africa. The numbers are staggering, and they seem to be growing with the passing of every month. In the meantime, anti-immigration sentiment is spreading like wildfire throughout Europe, giving rise to extremist voices that threaten the very foundation of the EU and its vision of an "open, democratic" society.
In light of these challenges, EU officials are pulling out all the stops in their effort to deal with the migration and refugee crisis, offering both technical and economic assistance to member states in hopes that they will do their part in averting the unraveling of the European integration project. Whether they will succeed or fail remains to be seen. What is beyond doubt, however, is that Europe's migration and refugee crisis will intensify as more than 4 million more migrants and refugees are expected to reach Europe in the next two years.
Noam Chomsky, one of the world's leading critical intellectuals, offered his insights to Truthout on Europe's migration and refugee crisis and other current European developments - including the ongoing financial crisis in Greece - in an exclusive interview with C.J. Polychroniou.
C.J. Polychroniou: Noam, thanks for doing this interview on current developments in Europe. I would like to start by asking you this question: Why do you think Europe's refugee crisis is happening now?
Noam Chomsky: The crisis has been building up for a long time. It is hitting Europe now because it has burst the bounds, from the Middle East and from Africa. Two Western sledgehammer blows had a dramatic effect. The first was the US-UK invasion of Iraq, which dealt a nearly lethal blow to a country that had already been devastated by a massive military attack 20 years earlier followed by virtually genocidal US-UK sanctions. Apart from the slaughter and destruction, the brutal occupation ignited a sectarian conflict that is now tearing the country and the entire region apart. The invasion displaced millions of people, many of whom fled and were absorbed in the neighboring countries, poor countries that are left to deal somehow with the detritus of our crimes.
One outgrowth of the invasion is the ISIS/Daesh monstrosity, which is contributing to the horrifying Syrian catastrophe. Again, the neighboring countries have been absorbing the flow of refugees. Turkey alone has over 2 million Syrian refugees. At the same time it is contributing to the flow by its policies in Syria: supporting the extremist al-Nusra Front and other radical Islamists and attacking the Kurds who are the main ground force opposing ISIS - which has also benefited from not-so-tacit Turkish support. But the flood can no longer be contained within the region.
The second sledgehammer blow destroyed Libya, now a chaos of warring groups, an ISIS base, a rich source of jihadis and weapons from West Africa to the Middle East, and a funnel for the flow of refugees from Africa. That at once brings up longer-term factors. For centuries, Europe has been torturing Africa - or, to put it more mildly - exploiting Africa for Europe's own development, to adopt the recommendation of the top US planner George Kennan after World War II.
The history, which should be familiar, is beyond grotesque. To take just a single case, consider Belgium, now groaning under a refugee crisis. Its wealth derived in no small measure from "exploiting" the Congo with brutality that exceeded even its European competitors. Congo finally won its freedom in 1960. It could have become a rich and advanced country once freed from Belgium's clutches, spurring Africa's development as well. There were real prospects, under the leadership of Patrice Lumumba, one of the most promising figures in Africa. He was targeted for assassination by the CIA, but the Belgians got there first. His body was cut to pieces and dissolved in sulfuric acid. The US and its allies supported the murderous kleptomaniac Mobutu. By now Eastern Congo is the scene of the world's worst slaughters, assisted by US favorite Rwanda while warring militias feed the craving of Western multinationals for minerals for cell phones and other high-tech wonders. The picture generalizes to much of Africa, exacerbated by innumerable crimes. For Europe, all of this becomes a refugee crisis.
Do the waves of immigrants (obviously many of them are immigrants, not simply refugees from war-torn regions) penetrating the heart of Europe represent some kind of a "natural disaster," or is it purely the result of politics?
There is an element of natural disaster. The terrible drought in Syria that shattered the society was presumably the effect of global warming, which is not exactly natural. The Darfur crisis was in part the result of desertification that drove nomadic populations to settled areas. The awful Central African famines today may also be in part due to the assault on the environment during the "Anthropocene," the new geological era when human activities, mainly industrialization, have been destroying the prospects for decent survival, and will do so, unless curbed.
European Union officials are having an exceedingly difficult time coping with the refugee crisis because many EU member states are unwilling to do their part and accept anything more than just a handful of refugees. What does this say about EU governance and the values of many European societies?
EU governance works very efficiently to impose harsh austerity measures that devastate poorer countries and benefit Northern banks. But it has broken down almost completely when addressing a human catastrophe that is in substantial part the result of Western crimes. The burden has fallen on the few who were willing, at least temporarily, to do more than lift a finger, like Sweden and Germany. Many others have just closed their borders. Europe is trying to induce Turkey to keep the miserable wrecks away from its borders, just as the US is doing, pressuring Mexico to prevent those trying to escape the ruins of US crimes in Central America from reaching US borders. This is even described as a humane policy that reduces "illegal immigration."
What does all of this tell us about prevailing values? It is hard even to use the word "values," let alone to comment. That's particularly when writing in the United States, probably the safest country in the world, now consumed by a debate over whether to allow Syrians in at all because one might be a terrorist pretending to be a doctor, or at the extremes, which unfortunately is in the US mainstream, whether to allow any Muslims in at all, while a huge wall protects us from immigrants fleeing from the wreckage south of the border.
by C.J. Polychroniou and Noam Chomsky, Truthout | Read more:
Image: Andrew Rusk
Why Tokyo is the World’s Best Food City
[ed. I think Anthony Bourdain said the same thing.]
It’s pointless to engage in any debate about which city has the best food without mentioning Tokyo.
Tokyo is the answer I give when friends and I kick around the question, Where would you live for the rest of your life solely for the food? Why? Because Japan as a country is devoted to food, and in Tokyo that fixation is exponentially multiplied. It’s a city of places built on top of each other, a mass complex of restaurants.
Let me rattle off the reasons why Tokyo beats all other cities:
It has more Michelin stars than any other city in the world, should you choose to eat that kind of food. I’d argue that some of the best French food and some of the best Italian food is in Tokyo. All the great French chefs have outposts there. If I want to eat at L’Astrance, I can go to Tokyo and eat it with Japanese ingredients. The Japanese have been sending their best cooks to train in Europe for almost sixty years. If you look at the top kitchens around the world, there is at least one Japanese cook in nearly every one.
Japan has taken from everywhere, because that’s what Japanese culture does: they take and they polish and shine and they make it better. The rest of the world’s food cultures could disappear, and as long as Tokyo remains, everything will be okay. It’s the GenBank for food. Everything that is good in the world is there.
If I want to have sushi, there’s no better place on the planet. All of the best fish in the world is flown to Tokyo so the chefs there can have first pick of it—whether it’s Hokkaido sea urchin or bluefin tuna caught off of Long Island, it all moves through Tsukiji fish market before jokers in any other city get a crack at it.
If I want to have kaiseki, there are top Kyoto guys who have spots in Tokyo, and they’re pretty fucking good. If I want to visit places dedicated to singular food items, from tempura to tonkatsu to yakitori, they’ve got it all. They have street food, yakisoba, ramen. They have the best steakhouses in the world. They have the best fucking patisseries in the world. The best Pierre Hermé is in Tokyo, not in fucking Paris. You know why? Because of the fucking Japanese cooks. I can eat the best food in subways, I can eat the best food in the train station, I can eat the best food in the airport. It’s the one place in the world where I have to seek out bad food. It’s hard to find.
They have no stupid importation laws; they get the best shit. Europe exports their best shit to Japan, because they know the Japanese have better palates than dumb Americans. It’s true. Go to the local department stores and buy cheese. It’s amazing. (...)
I can craft a great meal from convenience stores. A fantastic meal. From properly made bento boxes, to a variety of instant ramen, to onigiri, to salads, to sandwiches, it’s all really good. The egg-salad sandwiches at all the convenience stores are amazing. All the fried chicken, delicious. The chain restaurants, amazing. KFC, Pizza Hut, TGI Fridays, Tony Roma’s, you name it. I’ve been to all of them. Guess what? They’re all awesome. You know why? They care a little bit more. That’s it. They just make better fucking food than anywhere else. It’s awesome.
Now let’s keep it interesting by switching and going over the cons. There really are only a few.
by David Chang, Lucky Peach | Read more:
Image: uncredited
Wednesday, January 27, 2016
The People's Critics
[ed. I still think this is my favorite Pete Wells review.]
For one of his last meals as the chief restaurant critic of the New York Times, Sam Sifton ate at “the best restaurant in New York City: Per Se, in the Time Warner Center, just up the escalator from the mall, a jewel amid the zirconia.” He (re-)awarded it the Times’ highest rating, four stars, and was so moved that he savored one dish as one “might have a massage or a sunset.” And of course he did: No one would have expected any less for Thomas Keller, long considered one of America’s greatest living chefs.
That was five years ago. Earlier this month, Sifton’s replacement, Pete Wells, declared that “the perception of Per Se as one of the country’s great restaurants, which I shared after visits in the past, appear[s] out of date” and stripped the restaurant of two of its stars. Even though it had been anticipated, it’s hard to overstate the magnitude of Wells’ review in the restaurant world: It’s maybe sort of like if people still cared what music critics said about albums and the most important one of all wrote that, like, Radiohead’s new album is not that good and certainly not great but especially not perfect?
Anyways, Wells’ takedown was received with rapt and thunderous applause: It became one of his most-read reviews in his more than four years as the Times’ chief restaurant critic and sucked the sage-scented air out of almost every other conversation in the dining world, at least for a moment. And why not? People love to watch falling stars, especially when the crash is this spectacular: The greatest restaurant in New York from one of the greatest chefs in the country is in fact a smoldering garbage fire, and has been for a year, or maybe even longer. (...)
But there is something that distinguishes Pete Wells’ run as critic, and it’s not just his deep awareness that his potential audience is both larger and different than his predecessors—a savvy on full display in his atomic obliteration of Guy Fieri’s American Kitchen & Bar or four-star crown for Sushi Nakazawa (whose chef is mildly famous for being the apprentice who cried when he made the egg sushi correctly in Jiro Dreams of Sushi). It would be hard to overstate how profoundly high-end dining has changed since Per Se opened in 2004, during a decade or so that has been largely marked by the democratization of high-end cooking: Or, in a picture, carefully grown and obsessively sourced food, radically composed and meticulously prepared, then dropped onto your cramped table with deeply uncomfortable seats by a cranky, tattooed and taciturn waiter for tens of dollars a head. What might have seemed like sorcery in 2004, “hunt[ing] down superior ingredients—turning to Elysian Fields Farm for lamb, Snake River Farms for Kobe beef—and let[ting] them express themselves as clearly as possible” through “cooking as diligence and even perfectionism”—amount to mere table stakes for any remotely hyped restaurant in gentrified Brooklyn (or Manhattan or any major city) in 2016. What was praise from Bruni in 2004 reads like a recipe for inducing nausea today, in a world where the kind of diner who would save up for a meal at Per Se probably dreams of eating a single scallop off of a bed of smoking moss and juniper branch at Fäviken:
Sybaritic to the core, Per Se is big on truffles, and it is big on foie gras, which it prepares in many ways, depending on the night. I relished it most when it was poached sous vide, in a tightly sealed plastic pouch, with Sauternes and vanilla. The vanilla was a perfect accent, used in perfect proportion.

Leaving aside the dismal execution that Wells experienced, part of Per Se’s problem, in other words, is that it is no longer elite enough even in a city host to merely the fifth-greatest restaurant in the world. (Eleven Madison Park, which Pete Wells loved, by the way, is now more inaccessible than ever, with a starting price of $295 a head for dinner.)
by Matt Buchanan, The Awl | Read more:
Image: John