Wednesday, June 29, 2016
Beyond the Brexit Debate
Hostility to the EU, not just in Britain, but throughout Europe, has been driven by frustrations about democracy and resentment about immigration. The Remain (pro-EU) campaign, recognizing that it has few answers, has largely avoided both issues, focusing almost entirely on economic arguments. Leave (anti-EU) campaigners have been equally opportunistic in the way they have addressed questions of democracy and immigration.
Many EU supporters dismiss the charge that the EU is undemocratic, pointing to the existence of the European parliament whose members are elected by all EU citizens. This is not only to overstate the influence of MEPs on policy making, it is also to miss the point about popular resentment. The reason that people see the EU as undemocratic is not because they don't think they can vote in EU elections. It is because they feel that despite their vote, they have little say in the major decisions that shape their lives. (...)
But while the EU is a fundamentally undemocratic institution, leaving the EU would not, in itself, bridge the democratic deficit. There exists today a much more profound disenchantment with mainstream political institutions, a disenchantment that is evident at national, as well as at European, level and which, throughout Europe, has led to an upsurge in support for populist parties.
The background to this disenchantment is the narrowing of the ideological divides that once characterized politics. Politics has become more about technocratic management than social transformation. One way in which people have felt this change is as a crisis of political representation, as a growing sense of being denied a voice, and of political institutions as being remote and corrupt. The sense of being politically abandoned has been most acute within the traditional working class, whose feelings of isolation have increased as social democratic parties have cut their links with their old constituencies. As mainstream parties have discarded both their ideological attachments and their long-established constituencies, so the public has become increasingly disengaged from the political process. The gap between voters and the elite has widened, fostering disenchantment with the very idea of politics. The EU has come to be an institutional embodiment of this new political landscape.
The main political faultline, throughout Europe today, is not between left and right, between social democracy and conservatism, but between those who feel at home in – or at least are willing to accommodate themselves to – the new globalised, technocratic world, and those who feel left out, dispossessed and voiceless. These kinds of divisions have always existed, of course. In the past, however, that sense of dispossession and voicelessness could be expressed politically, particularly through the organizations of the left and of the labour movement. No longer. It is the erosion of such mechanisms that is leading to the remaking of Europe's political landscape. (...)
The real issue is not control of borders but having a democratic mandate for any immigration policy. There is no iron law that the public must be irrevocably hostile to immigration. Large sections of the public have become hostile because they have come to associate immigration with unacceptable change. And yet, while immigration has become the most potent symbol of a world out of control, and of ordinary people having little say in the policies that affect their lives, it is not the reason for the social and political grievances that many have to endure.
Economic and social changes – the decline of manufacturing industry, the crumbling of the welfare state, the coming of austerity, the atomization of society, the growth of inequality – have combined with political shifts, such as the erosion of trade union power and the transformation of social democratic parties, to create in sections of the electorate a sense of rage.
by Kenan Malik, Eurozine | Read more:
Image: uncredited
Can You Get Over an Addiction?
I shot heroin and cocaine while attending Columbia in the 1980s, sometimes injecting many times a day and leaving scars that are still visible. I kept using, even after I was suspended from school, after I overdosed and even after I was arrested for dealing, despite knowing that this could reduce my chances of staying out of prison.
My parents were devastated: They couldn’t understand what had happened to their “gifted” child who had always excelled academically. They kept hoping I would just somehow stop, even though every time I tried to quit, I relapsed within months.
There are, speaking broadly, two schools of thought on addiction: The first holds that my brain had been chemically “hijacked” by drugs, leaving me no control over a chronic, progressive disease. The second, which much of the public still seems to believe, is simply that I was a selfish criminal with little regard for others. (When it’s our own loved ones who become addicted, we tend to favor the first explanation; when it’s someone else’s, we favor the second.)
We are long overdue for a new perspective — both because our understanding of the neuroscience underlying addiction has changed and because so many existing treatments simply don’t work.
Addiction is indeed a brain problem, but it’s not a degenerative pathology like Alzheimer’s disease or cancer, nor is it evidence of a criminal mind. Instead, it’s a learning disorder, a difference in the wiring of the brain that affects the way we process information about motivation, reward and punishment. And, as with many learning disorders, addictive behavior is shaped by genetic and environmental influences over the course of development.
Scientists have documented the connection between learning processes and addiction for decades. Now, through both animal research and imaging studies, neuroscientists are starting to recognize which brain regions are involved in addiction and how.
The studies show that addiction alters the interactions between midbrain regions like the ventral tegmentum and the nucleus accumbens, which are involved with motivation and pleasure, and parts of the prefrontal cortex that mediate decisions and help set priorities. Acting in concert, these networks determine what we value in order to ensure that we attain critical biological goals: namely, survival and reproduction.
In essence, addiction occurs when these brain systems are focused on the wrong objects: a drug or self-destructive behavior like excessive gambling instead of a new sexual partner or a baby. Once that happens, it can cause serious trouble.
If, like me, you grew up with a hyper-reactive nervous system that constantly made you feel overwhelmed, alienated and unlovable, finding a substance that eases social stress becomes a blessed escape. For me, heroin provided a sense of comfort, safety and love that I couldn’t get from other people (the key agent of addiction in these regions is the same for many pleasurable experiences: dopamine). Once I’d experienced the relief heroin gave me, I felt as though I couldn’t survive without it.
Understanding addiction from this neurodevelopmental perspective offers a great deal of hope. First, like other learning disorders, for example, attention-deficit hyperactivity disorder or dyslexia, addiction doesn’t affect overall intelligence. Second, this view suggests that addiction skews choice — but doesn’t completely eliminate free will: after all, no one injects drugs in front of the police. This means that addicts can learn to take actions to improve our health, like using clean syringes, as I did. Research overwhelmingly shows such programs not only reduce H.I.V., but also aid recovery.
The learning perspective also explains why the compulsion for alcohol or drugs can be so strong and why people with addiction continue even when the damage far outweighs the pleasure they receive and why they can appear to be acting irrationally: If you believe that something is essential to your survival, your priorities won’t make sense to others.
Learning that drives urges like love and reproduction is quite different from learning dry facts. Unlike memorizing your sevens and nines, deep, emotional learning completely alters the way you determine what matters most, which is why you remember your high school crush better than high school math.
Recognizing addiction as a learning disorder can also help end the argument over whether addiction should be treated as a progressive illness, as experts contend, or as a moral problem, a belief that is reflected in our continuing criminalization of certain drugs. It is neither: you’ve simply learned a maladaptive way of coping.
Moreover, if addiction resides in the parts of the brain involved in love, then recovery is more like getting over a breakup than it is like facing a lifelong illness. Healing a broken heart is difficult and often involves relapses into obsessive behavior, but it’s not brain damage.
The implications for treatment here are profound. If addiction is like misguided love, then compassion is a far better approach than punishment. (...)
This makes sense because the circuitry that normally connects us to one another socially has been channeled instead into drug seeking. To return our brains to normal, then, we need more love, not more pain.
by Maia Szalavitz, NY Times | Read more:
Image: James Gallagher
Tuesday, June 28, 2016
Sorry for the interruption. Still traveling.
Saturday, June 25, 2016
Pulp Friction
Even by the standards of the ailing book publishing industry, the past year has been a bad one for Barnes & Noble. After the company spun off its profitable college textbook division, its stock plunged nearly 40 percent. Its long-term debt tripled, to $192 million, and its cash reserves dwindled. Leonard Riggio, who turned the company into a behemoth, has announced he will step down this summer after more than 40 years as chairman. At the rate it’s going, Barnes & Noble won’t be known as a bookseller at all—either because most of its floor space will be given over to games and gadgets, or, more ominously, because it won’t even exist.
There’s more than a little irony to the impending collapse of Barnes & Noble. The mega-retailer that drove many small, independent booksellers out of business is now being done in by the rise of Amazon. But while many book lovers may be tempted to gloat, the death of Barnes & Noble would be catastrophic—not just for publishing houses and the writers they publish, but for American culture as a whole.
If Barnes & Noble were to shut its doors, Amazon, independent bookstores, and big-box retailers like Target and Walmart would pick up some of the slack. But not all of it. Part of the reason is that book sales are driven by “showrooming,” the idea that most people don’t buy a book, either in print or electronically, unless they’ve seen it somewhere else—on a friend’s shelf, say, or in a bookstore. Even on the brink of closing, Barnes & Noble still accounts for as much as 30 percent of all sales for some publishing houses.
But the focus on sales masks the deeper degree to which the publishing industry relies on Barnes & Noble. The retailer provides much of the up-front cash publishers need to survive, in the form of initial orders. Most independent bookstores can’t afford to buy many books in advance; a single carton of 24 books would represent a large order. Amazon also buys few books in advance, preferring to let supplies run down so as to prompt online shoppers to “add to cart” because there are “only five left in stock.”
Barnes & Noble, by contrast, often takes very large initial orders. For books it believes will fly off the shelves, initials can reach the mid-five figures—hundreds of thousands of dollars that go to the publisher before a single book is even sold. That money, in turn, allows publishers to run ads in magazines and on Facebook, send authors on book tours, and pay for publicists. Without Barnes & Noble, it would become much harder for publishers to turn books into best-sellers. (...)
Big-name authors, like Malcolm Gladwell or James Patterson, will probably be fine. So too will writers who specialize in romance, science fiction, manga, and commercial fiction—genres with devoted audiences, who have already gravitated to Amazon’s low prices. But Barnes & Noble is essential to publishers of literary fiction—the so-called “serious” works that get nominated for Pulitzers and National Book Awards. Without the initial orders Barnes & Noble places, and the visibility its shelves provide, breakout hits by relative unknowns—books like Anthony Doerr’s All the Light We Cannot See or Emily St. John Mandel’s Station Eleven—will suffer.
In a world without Barnes & Noble, risk-averse publishers will double down on celebrity authors and surefire hits. Literary writers without proven sales records will have difficulty getting published, as will young, debut novelists. The most literary of novels will be shunted to smaller publishers. Some will probably never be published at all. And rigorous nonfiction books, which often require extensive research and travel, will have a tough time finding a publisher with the capital to fund such efforts.
by Alex Shepard, TNR | Read more:
Image: David Plunkert
Thursday, June 23, 2016
The Surprising Relevance of the Baltic Dry Index
On January 11th of this year, online financial circles lit up with dire news. “Commerce between Europe and North America has literally come to a halt,” one blogger wrote. “For the first time in known history, not one cargo ship is in-transit in the North Atlantic between Europe and North America. . . . It is a horrific economic sign; proof that commerce is literally stopped.” Although the Web site that first broadcast this information is prone to hysteria—there are, in fact, many cargo ships on the world’s oceans, in plain sight—more pessimistic market experts, such as Zero Hedge and the Dollar Vigilante, eagerly quoted it for their millions of readers. The story fit neatly into a narrative: the global economy, despite outward signs that it has clawed its way back from recession, is a small step away from an enormous crash.
But if sober-minded, mainstream economists were tempted to dismiss this ostensible trade calamity outright, they found that they couldn’t. The index that inspired these warnings, known as the Baltic Dry Index, was until recently viewed as a credible, if obscure, source—one that has accurately signalled prior systemic failures, and one that economists of all stripes have routinely consulted as a trusted proxy for trade activity. Based in London, this gauge reflects the rates that freight carriers charge to haul basic, solid raw materials, such as iron ore, coal, cement, and grain. As a daily composite of the tonnage fees on popular seagoing routes, the B.D.I. essentially mirrors supply and demand at the most elementary level. A decrease usually means that shipping prices and commodities sales are dropping (the latter because shippers are competing over fewer consignments). Shipping is a direct indicator of whether people want goods, and softness in shipping prices is therefore a sign of weakness in manufacturing and construction.
In January, when the B.D.I. surfaced as a heated topic in certain geeky economic corners of the Internet, it had fallen to a record low of 429, an eighty-per-cent decline from December, 2013, and far below its record high, in May, 2008, of 11,793. It continued to plunge for another month, hitting a nadir of 291 on February 12th. The index has rebounded a little since then, but not enough to dampen some concerns raised by its descent. While the catastrophic scenarios offered by the pessimists aren’t quite plausible, the B.D.I.’s dramatic plunge does appear to have indicated a genuinely alarming economic trend about the strength of global trade, with implications for jobs and corporate profits, that many economists had overlooked. Which raises the question: Why did economists color their judgment by discounting the B.D.I. in the first place? (...)
It was at St. Mary Axe that the Baltic Dry Index débuted, in 1985, in response to technological advances that allowed the exchange to conduct its dealings electronically. The index was conceived as a way of standardizing global cargo prices, by taking a daily canvass of Baltic’s network of sea-based freight brokers and compiling the costs of booking shipments of raw materials. The price list that resulted would allow freighter transactions to be agreed upon with little haggling or human intervention, no matter where buyers and sellers were located. “We created the index to lubricate shipping, especially as shipping agreements are increasingly consummated over long distances and electronically,” Jeremy Penn, the C.E.O. of the exchange, told me. “We didn’t create it to be used for any other purpose.”
But economists saw things differently. To them, the Baltic Dry Index was a rare gem: a coincident indicator. You might be familiar with lagging indicators, such as the prime rate, which reflect the economy as it was months earlier. And you may also have come into contact with leading indicators, such as initial claims for unemployment insurance and the S. & P. 500, which supposedly presage the short- to medium-term future state of the economy. But their predictive track record is poor, because any number of exigencies—from a terrorist attack to a devastating natural disaster to an overnight rate change in Europe—could intervene to detour the economy.
By contrast, a coincident indicator, like non-farm payroll and industrial production, is an immediate and frequent snapshot of some aspect of the economy. Absent extenuating circumstances, any sharp and continuous movement in a coincident indicator will more than likely foreshadow a significant change, even if its implications aren’t immediately obvious. Indeed, the simplicity of the Baltic Dry’s message is what made it so attractive to global financial analysts.
by Jeffrey Rothfeder, New Yorker | Read more:
But if sober-minded, mainstream economists were tempted to dismiss this ostensible trade calamity outright, they found that they couldn’t. The index that inspired these warnings, known as the Baltic Dry Index, was until recently viewed as a credible, if obscure, source—one that has accurately signalled prior systemic failures, and one that economists of all stripes have routinely consulted as a trusted proxy for trade activity. Based in London, this gauge reflects the rates that freight carriers charge to haul basic, solid raw materials, such as iron ore, coal, cement, and grain. As a daily composite of the tonnage fees on popular seagoing routes, the B.D.I. essentially mirrors supply and demand at the most elementary level. A decrease usually means that shipping prices and commodities sales are dropping (the latter because shippers are competing over fewer consignments). Shipping is a direct indicator of whether people want goods, and softness in shipping prices is therefore a sign of weakness in manufacturing and construction.In January, when the B.D.I. surfaced as a heated topic in certain geeky economic corners of the Internet, it had fallen to a record low of 429, an eighty-per-cent decline from December, 2013, and far below its record high, in May, 2008, of 11,793. It continued to plunge for another month, hitting a nadir of 291 on February 12th. The index has rebounded a little since then, but not enough to dampen some concerns raised by its descent. While the catastrophic scenarios offered by the pessimists aren’t quite plausible, the B.D.I.’s dramatic plunge does appear to have indicated a genuinely alarming economic trend about the strength of global trade, with implications for jobs and corporate profits, that many economists had overlooked. Which raises the question: Why did economists color their judgment by discounting the B.D.I. in the first place? (...)
It was at St. Mary Axe that the Baltic Dry Index débuted, in 1985, in response to technological advances that allowed the exchange to conduct its dealings electronically. The index was conceived as a way of standardizing global cargo prices, by taking a daily canvas of Baltic’s network of sea-based freight brokers and compiling the costs of booking shipments of raw materials. The price list that resulted would allow freighter transactions to be agreed upon with little haggling or human intervention, no matter where buyers and sellers were located. “We created the index to lubricate shipping, especially as shipping agreements are increasingly consummated over long distances and electronically,” Jeremy Penn, the C.E.O. of the exchange, told me. “We didn’t create it to be used for any other purpose.”
But economists saw things differently. To them, the Baltic Dry Index was a rare gem: a coincident indicator. You might be familiar with lagging indicators, such as the prime rate, which reflect the economy as it was months earlier. And you may also have come into contact with leading indicators, such as initial claims for unemployment insurance and the S. & P. 500, which supposedly presage the short- to medium-term future state of the economy. But their predictive track record is poor, because any number of exigencies—from a terrorist attack to a devastating natural disaster to an overnight rate change in Europe—could intervene to detour the economy.
By contrast, a coincident indicator, like non-farm payroll and industrial production, is an immediate and frequent snapshot of some aspect of the economy. Absent extenuating circumstances, any sharp and continuous movement in a coincident indicator will more than likely foreshadow a significant change, even if its implications aren’t immediately obvious. Indeed, the simplicity of the Baltic Dry’s message is what made it so attractive to global financial analysts.
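The article describes the B.D.I. as a daily composite of tonnage fees across popular shipping routes. A minimal sketch of what such a composite might look like, assuming a simple traffic-weighted average of brokers' quoted rates — the route names, rates, and weights below are invented for illustration, not the Baltic Exchange's actual methodology:

```python
# Hypothetical illustration of a daily composite freight index:
# average each route's broker quotes, then combine routes weighted
# by an assumed share of traffic. All figures here are invented.

def composite_index(route_quotes, route_weights):
    """Combine per-route broker quotes into a single weighted index value."""
    total = 0.0
    for route, quotes in route_quotes.items():
        avg_rate = sum(quotes) / len(quotes)      # mean of brokers' quotes
        total += route_weights[route] * avg_rate  # weight by traffic share
    return total

# Invented example: two routes, quotes in dollars per tonne.
quotes = {
    "Brazil-China (iron ore)": [7.1, 7.3, 7.0],
    "US Gulf-Japan (grain)": [21.0, 20.5],
}
weights = {"Brazil-China (iron ore)": 0.6, "US Gulf-Japan (grain)": 0.4}
print(round(composite_index(quotes, weights), 2))  # prints 12.58
```

A drop in the composite from one day to the next would reflect either falling quotes (cheaper shipping) or, indirectly, brokers competing over fewer consignments — which is why movements in the index are read as a proxy for trade volume.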
by Jeffrey Rothfeder, New Yorker | Read more:
Image: Carlos Jasso, Reuters
Jacques Majorelle (French, 1886 - 1962). Young woman under banana trees (Jeune femme sous les bananiers), N/D
via:
Seattle’s Cat Fox on Three Decades of Repairing Guitars
At the Weiser Fiddle Festival in Idaho, a seed was planted in the imagination of Cat Fox. Some guy was repairing guitars out of a tricked-out, converted sheep wagon he towed behind his truck from festival to festival. That brief vision held court in her young mind. Years later, Fox is driving around in her VW van, lost and bummed out, having recently dropped out of the University of Puget Sound with a great scholarship opportunity. When it came time to mouth the inescapable question, ‘What do I want to do with my life?’ she knew a few things. She liked woodworking, she liked to hang out with musicians, and she loved to party. And here again is the guy with the sheep wagon flashing front and center in her memory. Fox was going to be a luthier.
But women don’t build guitars! At the time, few women were playing guitars, let alone making them, but Fox’s gutsy resolve had taken root.
Upon graduation from the luthier program at Minnesota Technical College in Red Wing, with completed guitar firmly in hand, she packed everything into her VW van and drove to Massachusetts. She arrived at the door of Bill Cumpiano, world-renowned luthier and author of the definitive bible of the craft, Guitar Making Tradition and Technology. Fox recalls Cumpiano’s stern and uncompromising tone on the phone as he gives her an opening, “No guarantee I will take you on but I will observe your work.” Fox thinks to herself, “Of course he’ll take me on!” He does, of course, take her on as his apprentice for two years followed by another four years as his repair technician. To this day, the best compliment she ever received was from Cumpiano, “If you decide to build guitars, I hope you go far, far away.” So she did…all the way to Seattle.
The jerky cadence of raindrops can be heard on the corrugated metal roof of Fox’s Fremont studio, Sound Guitar Repair, in a space she shares with guitar builder and husband Rick Davis. Fox works in a studio within a studio, a cozy oasis among band saws and jigs of all sizes and dimensions. Fox has never had to advertise to fill her busy schedule. Her clients range from the rich and famous to the parlor picker to the street musician. Word of mouth is all she has needed to stay busy.
Her clients include musicians the likes of Bill Frisell and Danny O’Keefe, but her favorite story is when the Everly Brothers came knocking at her door. Out of the blue she gets a call from a guy saying he’s with the Everly Brothers and can she fix one of their guitars? Apparently a roadie had tripped and badly damaged one of their iconic black Steinegger signature Ike Everly guitars. Incredulous…and convinced her friend Doug was up to his old tricks, her eyes widened when a big truck with the words “Everly Brothers” on the side rolled up to her shop. She repaired two big splits from the block and the next day, when asked about the bill, she sheepishly said, “$500.00? Is that too much?” to which the stagehand replied, “Honey, this is the Everly Brothers.”
How does a woman gain the type of respect that Cat Fox has earned in a profession in which America’s women luthiers can be counted on one hand? “I care very deeply,” are her precise words. That, and her unflagging “I can do that!” attitude. Fox sees women as inherently well suited to the work of building and repairing guitars. Precise work, attention to detail, and the ability to listen and interpret what your client needs even if it’s not what they’re saying – these are all skills that come naturally to women.
by Sarah Gardner, Fretboard Journal | Read more:
Image: uncredited
The Death of a Study
In the spring of 2009, researchers started showing up in the neighborhoods of Montgomery County, Pennsylvania, clipboards in hand, to enroll expecting mothers and their unborn children in a huge environmental and health study that was going to last for decades. They asked probing questions whenever a woman answered the door: Was she between the ages of 18 and 49? Was she pregnant, and if yes, how far along? If she wasn’t, could the researchers stay in touch with her until she knew she was having a baby?
The National Children’s Study (NCS), as it was called, had set out to enroll and follow 100,000 children from conception until the age of 21 in an effort to unlock some of our most enduring medical mysteries — from the prevalence of asthma and attention-deficit disorder to the rise of autism. Montgomery County, a bedroom community northwest of Philadelphia, was one of its test sites, and the women targeted for recruitment came from painstakingly selected households. They would answer dozens of questions about their own health, family medical histories, jobs, and personal habits. They would provide clippings of their hair and fingernails, and dust from their houses. When they went into labor, hospital staff would be on hand to sample cord blood, placenta, the infant’s first bowel movement, and other biological specimens — each a window into the prenatal chemical milieu.
Scientists, of course, knew that developing babies and young children are exquisitely sensitive to their environments. They just needed more data, more evidence, to connect early exposures to diseases and disorders later in life — and that’s precisely what the NCS, administered under the auspices of the National Institutes of Health, was going to provide: an unprecedented epidemiological portrait of the typical American home.
“We were tapping into a data goldmine,” said Jennifer Culhane, an epidemiologist at Children’s Hospital of Philadelphia who was directing the study’s local efforts. “There was a sense that we were engaging in something special, and doing more to advance the science of pediatric disease than anyone else in the world.”
Those aspirations came to naught when the NCS was canceled in December 2014, after a 14-year history during which it burned through $1.3 billion in taxpayer dollars without generating much in the way of useful information. The study’s collapse barely registered with the national media, and unlike other major taxpayer-funded failures that have become political bludgeons on Capitol Hill, reaction in Washington has been muted. But the study’s collapse left a bitter legacy of anger and frustration among those who worked on the NCS for years, only to see their efforts wasted on a bungled enterprise that critics say went nowhere and accomplished nothing. Sources interviewed for this story gave a range of reasons for the study’s demise: deep scientific divisions over how it should have been carried out, partisan bickering, and even charges of deliberate dissembling over just how much the study would ultimately cost, to name just a few.
“There is no single factor that led to the failure of the NCS to achieve its goals, but rather a persistent set of challenges facing the design, management, and costs of the study,” said Francis Collins, director of the NIH, in an email message defending his decision to terminate the program.
Still, many critics say those challenges were a result of dysfunctional management, an ever-shifting set of objectives, and even lack of support at the highest levels of the National Institutes of Health.
“Too many people loaded this study with their own desires and wishes for what they wanted it to be without thinking enough about what it could actually achieve,” said Ellen Silbergeld, a former member of an NCS federal advisory committee and a professor of environmental science, epidemiology, and health policy at Johns Hopkins University School of Public Health, in Baltimore, Maryland.
Many of the scientists who worked on the National Children’s Study have since moved on. But virtually all of the participants interviewed suggested that the American public lost a groundbreaking opportunity to answer questions about pediatric disease. Those questions remain as vexing now as when the study was launched nearly two decades ago — and Silbergeld was among many sources interviewed who lamented all the lost time and wasted money.
“This was a scientific humiliation for the United States,” she said.
by Charles Schmidt, Undark | Read more:
Image: uncredited
Dutch Prototype Clean-Up Boom Brings Pacific Plastics Solution a Step Closer
A bid to clear the Pacific of its plastic debris has moved a step closer with the launch of the biggest prototype clean-up boom yet by the Dutch environment minister at a port in The Hague.
On Thursday the 100m-long barrier will be towed 20km out to sea for a year of sensor-monitored tests, before being scaled up for real-life trials off the Japanese coast at the end of next year.
If all goes well, full-scale deployment of a 100km-long version will take place in the “great Pacific garbage patch” between California and Hawaii in 2020. (...)

The snake-like ocean barrier is made out of vulcanised rubber and works by harnessing sea currents to passively funnel trash in surface waters – often just millimetres in diameter – into a V-shaped cone.
A cable sub-system will anchor the structure at depths of up to 4.5km – almost twice as far down as has ever been done before – keeping it in place so it can trap the rubbish for periodic collection by boats.
A fully scaled-up barrier would be the most ambitious ocean cleansing project yet, capturing around half of the plastic soup that circles the Pacific gyre within a decade. That at least is the plan.
by Arthur Nesland, The Guardian | Read more:
Image: The Ocean Cleanup
Wednesday, June 22, 2016
The Dystopian Future In Which Everyone Is The Boss
Across a number of professions, bosses have been vanishing. Last year, Tony Hsieh, CEO of the online shoe mega-retailer Zappos, announced that the company would implement Holacracy, a hierarchy-free office model with which Hsieh had become enamored after attending a conference talk by its creators. Under Holacracy—which bills itself as a system that “removes power from a management hierarchy and distributes it across clear roles”—Zappos employees would design their own job descriptions and work with colleagues in autonomous “circles” free from the hovering interference of “people managers.” (Former people managers were to find new roles in the company or accept buyouts.)
Hsieh hasn’t been the only boss to institute a bossless office in recent years. Somewhere between rigid corporate hierarchy and the approximately three hundred worker cooperatives that exist in the US today lies an expanding realm of manager-free workplaces. Most are white-collar and many, like Zappos, are the sorts of tech firms that have been famously predisposed to collaborative work arrangements, casual dress codes, beanbags, and other anti-corporate trappings since the beginning. But there are also industrial operations like Morning Star, the world’s largest tomato processing plant, where over 2,000 employees annually sign “Colleague Letters of Understanding” that lay out each worker’s job description and output goals, in lieu of managers to oversee production. In a 2013 overview for New York Magazine on the rise of bossless workplaces, Matthew Schaer reported that even Morning Star’s internal conflicts were resolved without hierarchy: instead of management or HR handling clashes between employees, anywhere from one to ten of the feuding parties’ colleagues would be enlisted to mediate the spat.
Does the bossless office signal progress for workers? The majority of Americans still answer to supervisors, and there are scant few who haven’t grumbled—if not seethed—over incompetent, abusive, or overly controlling managers. A number of studies have unsurprisingly confirmed that bad bosses create undue amounts of stress for workers. Thus, it stands to reason that removing such meddlesome disciplinarians, as companies like Zappos and Morning Star have done, has the potential to improve worker morale vastly. “It’s a beautiful way of structuring a workplace,” Ben & Jerry’s co-founder Ben Cohen told Inc. magazine last year. “Management is not nearly as necessary as it thinks it is.”
Growing evidence suggests that the disappearance of management bureaucracy also makes offices more productive. In an interview with Current, Martha Little, a senior producer at Audible, praised the company’s collaborative work structure and explained that top-down management was quickly becoming an obsolete way of organizing workplaces. “In this very fast-paced, very tech-oriented media-delivery-service world, I don’t think the hierarchies can really keep up with the fast pace of change, flexibility and input of ideas that you need to compete,” she said. In the Wall Street Journal, Tim Clem, an employee at the tech outfit GitHub [ed. no relation], similarly noted of his company’s bossless setup, “It makes you want to do more.”
But if employees at bossless offices often report good spirits and high productivity, outside of true worker cooperatives there is a hard limit to the workplace democracy, and it usually takes the form of the company’s purse strings. As Schaer noted of Morning Star, “The company is privately held, and no employee, no matter how hard-performing, is entitled to a share of the profits.” And different pay grades exist at all of the aforementioned “flattened” companies, no matter whose or how many voices are “heard” at company meetings.
Not only does the bossless office camouflage longstanding monetary inequalities, it also outsources the tasks once assigned to managers to an increasing number of workers. Employees at bossless companies who have supposedly been liberated from their manager overlords are generally compelled to absorb the duties of the now-nonexistent management in addition to whatever roles they might otherwise perform. At the software company Menlo Innovations—which prides itself on its boss-free, non-hierarchical work environment—committees of employees must reach consensus on most HR matters including hiring, firing, and determining employees’ pay. The absence of management, in other words, tends merely to displace “traditional” boss responsibilities onto a new group of people rather than eliminate them entirely.
Media theorist Alexander Galloway has challenged the assumption that horizontal arrangements are inherently egalitarian. According to Galloway, over the last few decades, labor and culture alike have been increasingly organized as networks—evident in the rise of “flexible” workplaces and cultural phenomena like the rise of social media. While plenty of academics and activists alike continue to believe that the dissolution of official hierarchy (the boss, the state) is synonymous with the dissolution of power, Galloway argues that such processes may only reflect the changing nature of a post-Fordist world. He further cautions, “Centralized verticality is only one form of organization. The distributed network is simply a different form of organization, one with its own special brand of management and control.”
by J.C. Pan, Literary Hub | Read more:
Image: uncredited
The Secret of Taste: Why We Like What We Like
If you had asked me, when I was 10, to forecast my life as an adult, I would probably have sketched out something like this: I would be driving a Trans Am, a Corvette, or some other muscle car. My house would boast a mammoth collection of pinball machines. I would sip sophisticated drinks (like Baileys Irish Cream), read Robert Ludlum novels, and blast Van Halen while sitting in an easy chair wearing sunglasses. Now that I am at a point to actually be able to realise every one of these feverishly envisioned tastes, they hold zero interest (well, perhaps the pinball machines in a weak moment).
It was not just that my 10-year-old self could not predict whom I would become but that I was incapable of imagining that my tastes could undergo such wholesale change. How could I know what I would want if I did not know who I would be?
One problem is that we do not anticipate the effect of experiencing things. We may instinctively realise we will tire of our favourite food if we eat too much of it, but we might underestimate how much more we could like something if only we ate it more often. Another issue is psychological “salience”, or the things we pay attention to. In the moment we buy a consumer good that offers cashback, the offer is claiming our attention; it might even have influenced the purchase. By the time we get home, the salience fades; the cashback goes unclaimed. When I was 10, what mattered in a car to me was that it be “cool” and fast. What did not matter to me were monthly payments, side-impact crash protection, being able to fit a stroller in the back, and wanting to avoid the appearance of being in a midlife crisis.
Even when we look back and see how much our tastes have changed, the idea that we will change equally in the future seems to confound us. It is what keeps tattoo removal practitioners in business. The psychologist Timothy Wilson and colleagues have identified the illusion that for many, the present is a “watershed moment at which they have finally become the person they will be for the rest of their lives”.
It was not just that my 10-year-old self could not predict whom I would become but that I was incapable of imagining that my tastes could undergo such wholesale change. How could I know what I would want if I did not know who I would be?
One problem is that we do not anticipate the effect of experiencing things. We may instinctively realise we will tire of our favourite food if we eat too much of it, but we might underestimate how much more we could like something if only we ate it more often. Another issue is psychological “salience”, or the things we pay attention to. In the moment we buy a consumer good that offers cashback, the offer is claiming our attention; it might even have influenced the purchase. By the time we get home, the salience fades; the cashback goes unclaimed. When I was 10, what mattered in a car to me was that it be “cool” and fast. What did not matter to me were monthly payments, side-impact crash protection, being able to fit a stroller in the back, and wanting to avoid the appearance of being in a midlife crisis.

Even when we look back and see how much our tastes have changed, the idea that we will change equally in the future seems to confound us. It is what keeps tattoo removal practitioners in business. The psychologist Timothy Wilson and colleagues have identified the illusion that for many, the present is a “watershed moment at which they have finally become the person they will be for the rest of their lives”.
In one experiment, they found that people were willing to pay more money to see their favourite band perform 10 years from now than they were willing to pay to see their favourite band from 10 years ago play now. It is reminiscent of the moment, looking through an old photo album, when you see an earlier picture of yourself and exclaim, “Oh my God, that hair!” Or “Those corduroys!” Just as pictures of ourselves can look jarring because we do not normally see ourselves as others see us, our previous tastes, viewed from “outside”, from the perspective of what looks good now, come as a surprise. Your hairstyle per se was probably not good or bad, simply a reflection of contemporary taste. We say, with condescension, “I can’t believe people actually dressed like that,” without realising we ourselves are currently wearing what will be considered bad taste in the future.
One of the reasons we cannot predict our future preferences is one of the things that makes those very preferences change: novelty. In the science of taste and preferences, novelty is a rather elusive phenomenon. On the one hand, we crave novelty, which defines a field such as fashion (“a field of ugliness so absolutely unbearable,” quipped Oscar Wilde, “that we have to alter it every six months”). As Ronald Frasch, the dapper president of Saks Fifth Avenue, once told me, on the women’s designer floor of the flagship store: “The first thing the customer asks when they come into the store is, ‘What’s new?’ They don’t want to know what was; they want to know what is.” How strong is this impulse? “We will sell 60% of what we’re going to sell the first four weeks the goods are on the floor.”
But we also adore familiarity. There are many who believe we like what we are used to. And yet if this were strictly true, nothing would ever change. There would be no new art styles, no new musical genres, no new products. The economist Joseph Schumpeter argued that capitalism’s role lay in teaching people to want (and buy) new things. Producers drive economic change, he wrote, and consumers “are taught to want new things, or things which differ in some respect or other from those which they have been in the habit of using”.
“A lot of times, people don’t know what they want until you show it to them,” as Steve Jobs put it. And even then, they still might not want it. Apple’s ill-fated Newton PDA device, as quaint as it now looks in this age of smartphone as human prosthesis, was arguably too new at the time of its release, anticipating needs and behaviours that were not yet fully realised. As Wired described it, it was “a completely new category of device running an entirely new architecture housed in a form factor that represented a completely new and bold design language”.
So, novelty or familiarity? As is often the case, the answer lies somewhere in between, on the midway point of some optimal U-shaped curve plotting the new and the known. The noted industrial designer Raymond Loewy sensed this optimum in what he termed the “MAYA stage”, for “most advanced, yet acceptable”. This was the moment in a product design cycle when, Loewy argued, “resistance to the unfamiliar reaches the threshold of a shock-zone and resistance to buying sets in”. We like the new as long as it reminds us in some way of the old.
Anticipating how much our tastes will change is hard because we cannot see past our inherent resistance to the unfamiliar, or foresee how much we will change once we do, and how each change will open the door to the next. We forget just how fleeting even the most jarring novelty can be. When you had your first sip of beer (or whisky), you probably did not slap your knee and exclaim, “Where has this been all my life?” It was, “People like this?”
We come to like beer, but it is arguably wrong to call beer an “acquired taste”, as the philosopher Daniel Dennett argues, because it is not that first taste that people are coming to like. “If beer went on tasting to me the way the first sip tasted,” he writes, “I would never have gone on drinking beer.” Part of the problem is that alcohol is a shock to the system: it tastes like nothing that has come before, or at least nothing pleasant. New music or art can have the same effect.
by Tom Vanderbilt, The Guardian | Read more:
Image: Aart-Jan Venema

Tuesday, June 21, 2016
The Second Amendment Doesn’t Give You the Right to Own a Gun
Can we please stop pretending that the Second Amendment contains an unfettered right for everyone to buy a gun? It doesn’t, and it never has. The claims made by a small number of extremists, before and after the Orlando, Fla., massacre, are based on a deliberate lie.
The Second Amendment of the U.S. Constitution doesn’t just say Congress shall not infringe the right to “keep and bear arms.” It specifically says that right exists in order to maintain “a well-regulated militia.” Even the late conservative Supreme Court Associate Justice Antonin Scalia admitted those words weren’t in there by accident. Oh, and the Constitution doesn’t just say a “militia.” It says a “well-regulated” militia.
What did the Founding Fathers mean by that? We don’t have to guess because they told us. In Federalist No. 29 of the Federalist Papers, Alexander Hamilton explained at great length precisely what a “well-regulated militia” was, why the Founding Fathers thought we needed one, and why they wanted to protect it from being disarmed by the federal government.
And there’s a reason absolutely no gun extremist will ever direct you to that 1788 essay: it blows their baloney into a million pieces.
A “well-regulated militia” didn’t mean guys who read Soldier of Fortune magazine running around in the woods with AK-47s and warpaint on their faces. It basically meant what today we call the National Guard.
It should be a properly constituted, ordered and drilled (“well-regulated”) military force, organized state by state, explained Hamilton. Each state militia should be a “select corps,” “well-trained” and able to perform all the “operations of an army.” The militia needed “uniformity in … organization and discipline,” wrote Hamilton, so that it could operate like a proper army “in camp and field,” and so that it could gain the “essential … degree of proficiency in military functions.” And although it was organized state by state, it needed to be under the explicit control of the national government. The “well-regulated militia” was under the command of the president. It was “the military arm” of the government.
The one big difference between this militia and a professional army? It shouldn’t be made up of full-time professional soldiers, said the Founding Fathers. Such soldiers could be used against the people as King George had used his mercenary Redcoats. Instead, the American republic should make up its military force from part-time volunteers drawn from regular citizens. Such men would be less likely to turn on the population.
And the creation of this “well-regulated militia,” aka the National Guard, would help safeguard the freedom of the new republic because it would make the creation of a professional, mercenary army “unnecessary,” wrote Hamilton. “This appears to me the only substitute that can be devised for a standing army, and the best possible security against it,” he wrote.
That was the point. And that was why they wanted to make sure it couldn’t be disarmed by the federal government: So a future “tyrant” couldn’t disarm the National Guard, and then use a mercenary army to impose martial law.
by Brett Arends, Marketwatch | Read more:
Image: Alexander Hamilton, uncredited
Monday, June 20, 2016
The Art of Disclosure: Fashion’s Influence Economy and the FTC
This September, lifestyle guru Aimee Song’s first book, "Capture Your Style: Transform Your Instagram Images, Showcase Your Life and Build the Ultimate Platform," will hit retailers. And if the size of her 3.6 million-strong Instagram following is any indication, it’s sure to be a commercial success. A mere mention in one of Song’s Instagram posts is powerful marketing, attracting tens of thousands of likes and hundreds of comments. It’s little wonder, then, that companies from Laura Mercier to Dior have paid her to market their brands and products to her followers.
Song is something of a poster child for fashion’s lucrative influencer economy from which top digital stars generate hundreds of thousands — and, in some cases, millions — of dollars each year in income, not to mention perks like free product, travel and meals. Indeed, Song’s business is so good — she is thought to earn into the six figures for long-term projects — that she has written an instructional manual about how to achieve her level of success.
For some fashion influencers, amassing a following on social media is simply a hobby, or no more than an exercise in personal brand building. But as fashion businesses move their marketing dollars online and the number of native advertising deals grows, it’s becoming more difficult to discern between organic commentary and paid sponsorship.
Are consumers being deceived?
In 2009, the Federal Trade Commission — an independent agency of the US government tasked with consumer protection — issued a list of guidelines regarding “dot com” disclosures for sponsored content. The guidelines were updated in 2013 and once again in 2015 to account for newer forms of social media.
The rules unequivocally require that paid marketing posts are disclosed as such, but do not stipulate specific language. “There is some vagueness,” says Susan Scafidi, professor of fashion law at Fordham Law School and founder of the Fashion Law Institute. “There are some questions in the guidelines as to whether a simple ‘#ad’ is enough.”
The guidelines do clearly delineate when and where disclosures should take place, however. “Required disclosures must be clear and conspicuous. In evaluating whether a disclosure is likely to be clear and conspicuous, advertisers should consider its placement in the ad and its proximity to the relevant claim. The closer the disclosure is to the claim to which it relates, the better,” says the latest version of the FTC rules, which continue: “Additional considerations include: the prominence of the disclosure; whether it is unavoidable; whether other parts of the ad distract attention from the disclosure; whether the disclosure needs to be repeated at different places on a website; whether disclosures in audio messages are presented in an adequate volume and cadence; whether visual disclosures appear for a sufficient duration; and whether the language of the disclosure is understandable to the intended audience.”
Are influencers complying?
A series of recent articles examining these issues, written by Julie Zerbo, a consultant and editor-in-chief of Thefashionlaw.com, has raised the question once again. In her pieces, Zerbo has taken independent influencers (including Song), websites and traditional editors and publishers to task for failing to properly disclose when they’ve received compensation from a brand in exchange for favourable coverage. But the issue may be less black-and-white than Zerbo suggests.
To be sure, the question is particularly pertinent at the moment. Although the FTC has yet to target influencers themselves, instead going after the brands that pay for undisclosed influencer marketing, the agency has recently shifted its enforcement tactics from simply shaming offending brands to charging them with violations. In the past, companies like Ann Taylor and Cole Haan were publicly taken to task for running influencer campaigns that didn’t follow the FTC guidelines. However, no legal action was taken against these companies. That changed in the past year when American department store Lord & Taylor was formally accused by the FTC of violating its guidelines. The wrongdoing: 50 influencers were paid between $1,000 and $4,000 to post Instagram photos of themselves wearing a specific paisley dress sold at the retailer, but many failed to disclose that they were paid, or that they were given the dress for free.
by Lauren Sherman, BOF | Read more:
Image: Instagram/@lauramercier, @nicolettemason, @songofstyle, @manrepeller, @chrisellelim, @sincerelyjules