Sunday, October 28, 2018
Defensible Space
“Megafires” are now a staple of life in the Pacific Northwest, but how we talk about them illustrates the tension at the heart of the western myth itself.
In the Pacific Northwest, people are beginning to refer to the month of August as “smoke season.” For most of this past August, for example, the Methow Valley in Washington State was choked with smoke from the Crescent Mountain fire to the southwest and from the McLeod Fire to the north. The Okanogan County post offices and community centers were offering free particulate respirator masks, and fire progression maps were updated daily and posted outside the town halls. Local businesses offered 10 percent off to all firefighting personnel, who were camped in tents on the sprawling rodeo grounds outside of town. Helicopters with drop buckets of water and red fire retardant were constantly overhead. And at dinner, everyone’s cell phone rang at once with fire updates from the county.
The irony is that, when I was growing up there, August was the month that could be most relied upon for sunny weather. But in August of 2014, the massive Carlton Complex wildfire burned nearly 260,000 acres of Okanogan County and destroyed 363 homes, making it the largest single fire in state history. In August of 2015, the Okanogan Complex fires burned over 300,000 acres, killed three U.S. Forest Service firefighters, and forced the evacuation of several towns. My parents were evacuated for several days in 2015, and this summer, I helped them dust ashes from the vegetables in the garden. On particularly bad days, the sun shone red and the air smelled like campfires and hurt your lungs. Not being able to see the mountains hurt your heart.
Smoke season is not exactly new, for the forests of the West have always burned. But the scale of these huge wildfires—“megafires,” they are called—has grown, due to a complex interplay of increased human habitation in and near the forests, the multifaceted effects of climate change, and the long practice of fire suppression rather than fire management by the U.S. Forest Service. While wildfires are a constant of the forests’ ecology, the once-exceptional burns have now become routine.
So routine, in fact, that researchers now study the mental health effects of prolonged exposure to the “smoke apocalypse.” Last summer, New York Times contributing opinion writer (and, like me, a Pacific Northwesterner) Lindy West described smoke-blanketed Seattle, four hours southwest of Okanogan County, as filled with “the claustrophobia, the tension, the suffocating, ugly air,” and rightly pointed to it as a phenomenon exacerbated by climate change. “In Seattle, in a week or so, a big wind will come and give us our blue sky back,” she wrote. “Someday, though, it won’t.”
Indeed, friends of my parents are talking about moving away. Those who stay long for the smoke to clear and for the summer sky to be as blue as it once was. But this nostalgia is worth attending to, for how we talk about the wildfires is also how we talk about the West. The idea of the West—as region, ideology, national mythos—is all about desiring the authentic in a landscape of inauthenticity, about safely yearning for something never there in the first place, about obscuring violence with romance.
Since the landscape of the West is indelibly shaped by its own story, talking about land in the West always contains a moral. How we talk about the wildfires illustrates the tension at the heart of the western myth itself, one that will need to collapse from its own weight if we ever hope to see the sky for what it truly is. And each summer now, that sky is on fire.
Forest fires are an intrinsic part of our world’s carbon-rich ecology. Ecosystems such as Washington’s thick central and eastern forests are reliant on fire to help liberate nutrients in the soil, open the tree cones that need heat to release their seeds, clear out unwanted underbrush, and produce a healthily shifting mosaic of micro-ecologies on the forest floor. Fire is also one of the oldest—and perhaps the most determinative—parts of the human world, and native economies used it to transform the North American landscape well before Europeans arrived. Native peoples turned forests into grassland and savannah, cleared and carefully curated forest vegetation and fauna to better hunt and gather, and even practiced fire prevention and, when necessary, fought wildfires.
Living among wildfire smoke is also not new, especially in the Northwest, where settlements formed in the drainages and valleys of mountains where smoke tends to pool. During the big fires—in 1865, when a million acres burned from the Olympics to the Sierras, and during the Tillamook cycle, which burned from 1933 until 1951 in a series of reburns—smoke was endemic to the Pacific Northwest. In the 1880s, smoke was reportedly so thick through the summer and fall seasons that geological survey crews in the Cascades had to abandon their work.
Yet today Washington State has more homes in fire-prone wildland areas—known as the “wildland-urban interface,” or WUI—than anywhere else in the country. The number of homes in the WUI is projected to grow by 40 percent between 2001 and 2030, with no sign of such development abating, despite the megafires. New developments have no mandatory review procedures to assess wildfire risk. The Okanogan County Comprehensive Plan on managing growth, for example, released just after the Carlton Complex fires in 2014, didn’t include a single concrete guideline or requirement. Instead, it is up to each individual property owner to reduce risk on their own land.
As a result, state and federal firefighters have to actively suppress fires—not merely manage them—in order to save homes (which they do with remarkable and laudable precision). This suppression leaves forests overly dense and ready to burn while the increased presence of people also makes fires much more likely: in the dry tinderbox of southern California, for instance, 95 percent of fires are started by human activity.
The reigning ethos of development is, of course, private property: let people do what they like on their own land. There is a byzantine patchwork of environmental regulations and land usage laws at the county, state, and federal level, but these are largely geared toward managing growth rather than suppressing it. “I’m not real big on over-regulating people,” Andy Hover, one of the current Okanogan County Commissioners, said in the middle of this year’s fire season. “Rules and regulations are kind of like—well, is that really what we want?”
Whether or not “we” really want rules and regulations in the West is the historically vexed question that has driven the development of the West since colonial settlement. Despite its mythic ethos of self-reliance, independence, and rugged autonomy, a massive influx of federal funds and intervention has always been necessary for non-Native settlers to live in the West. The federal government funded decades of military campaigns and genocidal wars against indigenous people to clear the land. Federal land grants of over 100 million acres, tax incentives, and government loans all helped build the transcontinental railroads, which both opened the West to increased settlement and built the power of banks and finance on Wall Street. The Homestead Act of 1862 offered free land to white farmers if they agreed to “improve” it for five years; and the Dawes Severalty Act in 1887 broke up the grants of reservation land initially sanctioned for Native Americans. One name for this, popularized by historian Frederick Jackson Turner, is the frontier thesis; another is manifest destiny. Yet another is imperialism. Its legacy continues in the approach to western housing developments today: what was once held in common is nominally and culturally understood as the preserve of the individual yet underwritten by the federal government.
Today, more than half of the U.S. Forest Service’s budget goes to fighting wildfires and, increasingly, keeping them away from people’s private property.
So while fire season is not new, it still feels new to many of us who are used to seeing summer mountain skies where the blue was so vast it humbled even the mountains at its edge. It feels new when the hills you’ve driven through for years are lined with blackened, charred trunks, and the old and chipping Smokey Bear sign, just across the street from the tiny U.S. Forest Service office in the Methow Valley, continually points to the color-coded scale of today’s fire danger: red for EXTREME. (...)
I was reading a book about wildfires in a local bakery in Winthrop when a contractor who rents firefighting equipment to the Forest Service gamely tried to pick me up. But because this is a western story, instead of offering me his phone number, he offered me a pamphlet on how to defend my home from wildfire.
“Defensible space,” I learned, is the goal behind any wildfire preparedness campaign. It denotes the area between a house and an oncoming fire that has been managed by the homeowner to reduce wildfire risk and provide firefighters with a clear space of operations. Defensible space has become the watchword of private programs such as Firewise USA®, a partnership between a nonprofit organization and federal agencies that teaches property owners how to “adapt to living with wildfire” and prepare their homes for fire risk.
Creating defensible space involves removing excessive vegetation (shrubs, dense clusters of trees, dried grass) from around the house and replacing it with well-irrigated lawn or flowerbeds, as well as surrounding your home with nonflammable materials to deflect burning embers. Depending on your particular vegetation type and the percent of slope on which your house rests, you will need between 30 and 200 feet of defensible space surrounding your home.
The idea of defensible space strikes me as an intrinsically western one. It has taken a tremendous amount of government money, environmental engineering, and colonial violence for there to be such a thing as “private property” in the West, and for people to live out their—historically speaking—absurd fantasies of independence and self-reliance, to create their own western defensible space. And yet still, for the one third of the United States that lives in the wildland-urban interface, each house in each subdivision attempts to surround itself with its own barrier of self-created defensible space, each pretending to be self-reliant yet in need of massive federal funds for power, water, roads, and firefighting.
by Jessie Kindig, Boston Review | Read more:
Image: Ashley Siple

Who Shot the Sheriff?
Goings-on in the Tivoli Gardens: A Brief History of Seven Killings
Bob Marley had called a break during a band rehearsal at his house on the evening of 3 December 1976 when two cars pulled up and seven or more gunmen got out. One found his way to the kitchen, where Marley was eating a grapefruit, and opened fire. A bullet scraped his chest before hitting his upper arm, and four or five hit his manager, Don Taylor, who was standing between him and the doorway. The keyboard player’s girlfriend saw ‘a kid’ with his eyes squeezed shut emptying a pistol into the rehearsal area. The lead guitarist, an American session man on his first visit to Jamaica, took cover behind a flight case. The bass player and others – accounts vary as to how many – dived into a metal bathtub. Marley’s wife, Rita, was hit in the driveway while trying to get their children out and went down with a bullet fragment in her scalp. There were shouts: ‘Did you get him?’ ‘Yeah! I shot him!’ Then police arrived to investigate the gunfire and the attackers took off.
The manager had to be flown to Miami for surgery, but all the victims survived, and while each of the gunmen gets killed in A Brief History of Seven Killings, the novel restages the assault on Marley’s house with eight shooters, most of whom get given names: Josey Wales, Weeper, Bam-Bam, Demus, Heckle and Funky Chicken, plus ‘two man from Jungle, one fat, one skinny’. (‘Jungle’ is a nickname for one of the many social housing developments that sprang up in Kingston in the 1960s and 1970s.) The killings in the title of Marlon James’s novel – a novel that’s built around the attempt on Marley’s life much as Don DeLillo’s Libra (1988) and James Ellroy’s American Tabloid (1995) are built around the Kennedy assassination – turn out, after hundreds of pages, to be modelled on a massacre carried out years later in an American crack house, allegedly by Lester Coke, a Kingston gang boss who burned to death, in unexplained circumstances, in a high-security prison cell in 1992. His son and heir, Christopher ‘Dudus’ Coke, is the man the Jamaican army and police were looking for when they killed at least 73 civilians in a raid on the Tivoli Gardens estate in West Kingston in 2010. So there are more than enough killings to go around.
James begins his story with the build-up to Marley’s shooting and ends with the burning of Josey Wales, the character corresponding to Lester Coke, with a Dudus-like figure ready in the wings. (A sequel was projected early on, but I wouldn’t be surprised if it got slowed down by James’s work on a script for HBO, which bought the screen rights to the novel in April.) He has no trouble constructing a plausible narrative connecting the attack to many aspects of Jamaican history, and in outline his plot sticks closely, especially in its opening stages, to the facts and testimony and rumours gathered up by Timothy White, an American music journalist who periodically updated his 1983 biography of Marley, Catch a Fire, until his death in 2002. The characters are all freely imagined even when they’re filling the roles of real people, with the exception of Marley, who’s seen only through the eyes of a range of first-person narrators, and whose stage time is judiciously rationed. He’s referred to throughout as ‘the Singer’, though James doesn’t tie himself in knots for the sake of consistency: a character called Alex Pierce, a writer for Rolling Stone whose research seems to be a fantasticated version of White’s, urges himself at one point to ‘head back to Marley’s house’.
Marley isn’t left blank, exactly: we hear quite a lot about his under-the-table philanthropy, his physical beauty, his politico-religious worldview, and about the sniffiness with which he was viewed by the small, determinedly self-improving black middle class, which wasn’t at first thrilled by the outside world’s interest in some ‘damn nasty Rasta’, all ‘ganja smell and frowsy arm’, as an angry mother puts it. Other characters do impressions of foreign music-business types – ‘You reggae dudes are far out, man, got any gawn-ja?’ – or fulminate about Eric Clapton, who drunkenly shared his views on ‘wogs’ and ‘fucking Jamaicans’ with an audience in Birmingham in August 1976, two years after he had his first American number one with a cover of Marley’s ‘I Shot the Sheriff’. (‘He think naigger boy never going read the Melody Maker.’) But animating a pre-mythic Marley, ‘outside of him being in every frat boy’s dorm room’, as James put it in an interview last year, isn’t the first order of business. ‘The people around him, the ones who come and go,’ Alex the journalist muses, ‘might actually provide a bigger picture than me asking him why he smokes ganja. Damn if I’m not fooling myself I’m Gay Talese again.’
‘The ones who come and go’, in James’s telling, include a young woman called Nina Burgess, who’s had a one night stand with Marley; Barry Diflorio, a CIA man; and Alex. The rest are gangsters, and the bigger picture they open up is a view from the ground of the working relationship between organised crime and Jamaican parliamentary politics. Marley’s shooting is a good device for getting at that, because no one seriously disputes that it was triggered by the 1976 election campaign, then the most violent in the country’s history, contested by two sons of the light-skinned post-independence elite: Michael Manley, the leader of the social democratic People’s National Party, and Edward Seaga, the leader of the conservative Jamaica Labour Party. The Jamaican system of ‘garrisons’ – social housing estates, usually built over bulldozed shantytowns, run by ‘dons’ on behalf of one or other of the parties – was up and running by the 1970s, with Tivoli Gardens, a pet project of Seaga’s and his electoral power base, as exhibit A. The novel reimagines it as ‘Copenhagen City’, perhaps to emphasise the contrast between the name’s promise of Scandinavian sleekness and the reality of votes delivered by armed enforcers.
Marley wasn’t faking it when he sang about his memories of a similarly downtrodden ‘government yard’, and didn’t need instruction on the dons’ multiple roles as providers of stuff the state wasn’t supplying, such as arbitration and policing of sorts, on top of their function as political goons and in workaday criminal enterprises. After he’d become a national celebrity in the 1960s, he sometimes played host to Claudie Massop, the JLP gang boss of Tivoli Gardens, whom he’d known as a child. Massop’s counterpart in the novel is called Papa-Lo. James casts him as an enforcer of the old school, still capable of murdering a schoolboy when necessary but sick at heart and out of his depth in an increasingly vicious electoral struggle. Papa-Lo’s younger ally, who calls himself Josey Wales after the Clint Eastwood character (Lester Coke himself operated as ‘Jim Brown’ in tribute to the only African-American star of The Dirty Dozen), is better adapted to the shifting state of affairs. Josey is made to seem dangerous not so much because he’s irretrievably damaged by previous rounds of slum clearance, gang warfare and police brutality – so is everyone around him – as because he’s attuned to goings-on in the wider world.
The opportunities Josey sees come from the external pressures that made the 1976 election, in the eyes of many participants, a Cold War proxy conflict. Manley’s PNP government, in power since 1972, had annoyed the bauxite companies, Washington and large swathes of local elite opinion with its leftish reforms and friendliness to Cuba. Manley blamed a rise in political shootouts and some of the country’s economic setbacks on a covert destabilisation campaign, and the Americans were widely understood – thanks partly to the writings of Philip Agee, a CIA whistleblower – to be shipping arms and money to Seaga’s JLP. Seaga’s supporters countered by putting it about that Castro was training the other side’s gunmen, and portrayed the sweeping police powers introduced by Manley’s government as a step towards a one-party state. Either way, no one was badly off for guns and grievances when Manley offered himself for re-election. ‘The world,’ Papa-Lo says, ‘now feeling like the seven seals breaking one after the other. Hataclaps’ – from ‘apocalypse’ – ‘in the air.’
Marley dropped a hint about his stance towards all this in one of the less cryptic lines on Rastaman Vibration, released eight months before the election: ‘Rasta don’t work for no CIA.’ Formal politics, he felt, belonged to Babylon, the modern materialist society, and he tried to keep his distance from it. But he was suspected, with some reason, of supporting the PNP. Both party leaders took an interest in the kinds of constituency Marley spoke for, and kept an ear to the ground when it came to popular culture. Seaga, early on in his career, had produced a few ska recordings in West Kingston, some of them featuring Marley’s mentor Joe Higgs. Manley, not to be outdone, had visited Ethiopia and returned with – in White’s words – ‘an elaborate miniature walking stick’, a gift from Haile Selassie, to show Rasta voters. Back in 1971 he had also pressed Marley into joining an explicitly PNP-oriented Carnival of Stars tour to warm up his first campaign. And in 1976 his people issued Marley with a pressing invitation to play a free concert in the name of national unity. It was to take place shortly before the election with an eye to overshadowing a JLP campaign event, and it’s what Marley was rehearsing for when, two days before the concert, the shooters arrived.
by Christopher Tayler, LRB | Read more:
Image: Jonathan Player/Shutterstock via Rolling Stone
[ed. Netflix apparently has a new "docuseries" out about the 1976 attempted assassination of Bob Marley - Who Shot the Sheriff? (the subject of Marlon James' Booker Prize-winning novel A Brief History of Seven Killings... one of the most violent novels I've read since Cormac McCarthy's Blood Meridian or Bolaño's 2666). A tough read.]
“Bohemian Rhapsody” Is the Least Orgiastic Rock Bio-Pic
Extra teeth. That was the secret of Freddie Mercury, or, at any rate, of the singular sound he made. In “Bohemian Rhapsody,” a new bio-pic about him, Mercury (Rami Malek) reveals all: “I was born with four more incisors. More space in my mouth, and more range.” Basically, he’s walking around with an opera house in his head. That explains the diva-like throb of his singing, and we are left to ponder the other crowd-wooing rockers of his generation; do they, too, rely upon oral eccentricity? Is it true that Rod Stewart’s vocal cords are lined with cinders, and that Mick Jagger has a red carpet instead of a tongue? What happens inside Elton John’s mouth, Lord knows, although “Rocketman,” next year’s bio-pic about him, will presumably spill the beans.
“Bohemian Rhapsody” starts with the Live Aid concert, in 1985. That was the talent-heavy occasion on which Queen, fronted by Mercury, took complete command of Wembley Stadium and, it is generally agreed, destroyed the competition. We then flip back to 1970, and to the younger Freddie—born Farrokh Bulsara, in Zanzibar, and educated partly at a boarding school in India, but now dwelling in the London suburbs. This being a rock movie, his parents are required to be conservative and stiff, and he is required to vex them by going out at night to see bands.
If the film is to be trusted (and one instinctively feels that it isn’t), the birth of Queen was smooth and unproblematic. Mercury approaches two musicians, Roger Taylor (Ben Hardy) and Brian May (Gwilym Lee), in a parking lot, having enjoyed their gig; learns that their group’s lead singer has defected; and, then and there, launches into an impromptu audition for the job. Bingo! The resulting lineup, now graced with John Deacon (Joseph Mazzello) on bass, lets rip onstage, with Freddie tearing the microphone from its base to create the long-handled-lollipop look that will stay with him forever. Queen already sounds like Queen, and, before you know it, the boys have a manager, a contract, an album, and a cascade of wealth. It’s that easy. As for their first global tour, it is illustrated by the names of cities flashing up on the screen—“Tokyo,” “Rio,” and so forth, in one of those excitable montages which were starting to seem old-fashioned by 1940.
As a film, “Bohemian Rhapsody” is all over the place. So is “Bohemian Rhapsody” as a song, yet somehow, by dint of shameless alchemy and professional stamina, it coheres; the movie shows poor Roger Taylor doing take after take of the dreaded “Galileo!” shrieks, bravely risking a falsetto-related injury in the cause of art. Anyone hoping to be let in on Queen’s trade secrets will feel frustrated, although I liked the coins that rattled and bounced on the skin of Taylor’s drum, and it’s good to watch Deacon noodle a new bass riff—for “Another One Bites the Dust”—purely to stop the other band members squabbling. The later sections of the story, dealing with Mercury’s AIDS diagnosis, are carefully handled, but most of the film is stuffed with lumps of cheesy rock-speak (“We’re just not thinking big enough”; “I won’t compromise my vision”), and gives off the delicious aroma of parody. When Mercury tries out the plangent “Love of My Life” on the piano, it’s impossible not to recall the great Nigel Tufnel, in “This Is Spinal Tap” (1984), playing something similar in D minor, “the saddest of all keys,” and adding that it’s called “Lick My Love Pump.”
The funniest thing about the new film is that its creation was clearly more rocklike than anything to be found in the end product. Bryan Singer, who is credited as the director, was fired from the production last year and replaced by Dexter Fletcher, although some scenes appear to have been directed by no one at all, or perhaps by a pizza delivery guy who strayed onto the set. The lead role was originally assigned to Sacha Baron Cohen (a performance of which we can but dream), although Malek, mixing shyness with muscularity, and sporting a set of false teeth that would make Bela Lugosi climb back into his casket, spares nothing in his devotion to the Mercurial. The character’s carnal wants, by all accounts prodigious, are reduced to the pinching of a waiter’s backside, plus the laughable glance that Freddie receives from a bearded American truck driver at a gas station as he enters the bathroom. With its PG-13 rating, and its solemn statements of faith in the band as a family, “Bohemian Rhapsody” may be the least orgiastic tribute ever paid to the world of rock. Is this the real life? Nope. Is this just fantasy? Not entirely, for the climax, quite rightly, returns us to Live Aid—to a majestic restaging of Queen’s contribution, with Malek displaying his perfect peacock strut in front of the mob. If only for twenty minutes, Freddie Mercury is the champion of the world.
by Anthony Lane, New Yorker | Read more:
Image: Zohar Lazar
Friday, October 26, 2018
The Great Risk Shift
To many economic commentators, insecurity first reared its ugly head in the wake of the financial crisis of the late 2000s. Yet the roots of the current situation run much deeper. For at least 40 years, economic risk has been shifting from the broad shoulders of government and corporations onto the backs of American workers and their families.
This sea change has occurred in nearly every area of Americans’ finances: their jobs, their health care, their retirement pensions, their homes and savings, their investments in education and training, their strategies for balancing work and family. And it has affected Americans from all demographic groups and across the income spectrum, from the bottom of the economic ladder almost to its highest rungs.
I call this transformation “The Great Risk Shift” — the title of a book I wrote in the mid-2000s, which I’ve recently updated for a second edition. My goal in writing the book was to highlight a long-term trend toward greater insecurity, one that began well before the 2008 financial crisis but has been greatly intensified by it.
I also wanted to make clear that the Great Risk Shift wasn’t a natural occurrence — a financial hurricane beyond human control. It was the result of deliberate policy choices by political and corporate leaders, beginning in the late 1970s and accelerating in the 1980s and 1990s. These choices shredded America’s unique social contract, with its unparalleled reliance on private workplace benefits. They also left existing programs of economic protection more and more threadbare, penurious and outdated — and hence increasingly incapable of filling the resulting void.
To understand the change, we must first understand what is changing. Unique among rich democracies, the United States fostered a social contract based on stable long-term employment and widespread provision of private workplace benefits. As the figure below shows, our government framework of social protection is indeed smaller than those found in other rich countries. Yet when we take into account private health and retirement benefits — mostly voluntary, but highly subsidized through the tax code — we have an overall system that is actually larger in size than that of most other rich countries. The difference is that our system is distinctively private.
Guaranteed pensions have not been the only casualty of the Great Risk Shift. At the same time as employers have raced away from safeguarding retirement security, health insurance has become much less common in the workplace, even for college-educated workers. Indeed, coverage has risen in recent years only because more people have become eligible for Medicare and Medicaid and for subsidized plans outside the workplace under the Affordable Care Act. As late as the early 1980s, 80 percent of recent college graduates had health insurance through their job; by the late 2000s, the share had fallen to around 60 percent. And, of course, the drop has been far greater for less educated workers.
In sum, corporate retrenchment has come together with government inaction — and sometimes government retrenchment — to produce a massive transfer of economic risk from broad structures of insurance onto the fragile balance sheets of American households. Rather than enjoying the protections of insurance that pools risk broadly, Americans are increasingly facing economic risks on their own, and often at their peril.
The erosion of America’s distinctive framework of economic protection might be less worrisome if work and family were stable sources of security themselves. Unfortunately, they are not. The job market has grown more uncertain and risky, especially for those who were once best protected from its vagaries. Workers and their families now invest more in education to earn a middle-class living. Yet in today’s postindustrial economy, these costly investments are no guarantee of a high, stable, or upward-sloping path. [ed. See also: A Follow-Up on the Reasons for Prime Age Labor Force Non-Participation]
Meanwhile, the family, a sphere that was once seen as solely a refuge from economic risk, has increasingly become a source of risk of its own. Although median wages have essentially remained flat over the last generation, middle-income families have seen stronger income growth, with their real median incomes rising around 13 percent between 1979 and 2013. Yet this seemingly hopeful statistic masks the reality that the whole of this rise is because women are working many more hours outside the home than they once did. Indeed, without the increased work hours and pay of women, middle-class incomes would have fallen between 1979 and 2013.
This framework, however, is coming undone. The unions that once negotiated and defended private benefits have lost tremendous ground. Partly for this reason, employers no longer wish to shoulder the burdens they took on during more stable economic times. In an age of shorter job tenure and contingent work, as Monica Potts will describe in her forthcoming contribution to this series, employers also no longer highly value the long-term commitments to workers that these arrangements reflected and fostered.
Of course, policymakers could have responded to these changes by shoring up existing programs of economic security. Yet at the same time as the corporate world was turning away from an older model of employment, the political world was turning away from a longstanding approach to insecurity known as “social insurance.” The premise of social insurance is that widespread economic risks can be dealt with effectively only through institutions that spread their costs across rich and poor, healthy and sick, able-bodied and disabled, young and old.
Social insurance works like any other insurance program: We pay in — in this case, through taxes — and, in return, are offered a greater degree of protection against life’s risks. The idea is most associated with FDR, but, from the 1930s well into the 1970s, it was promoted by private insurance companies and unionized corporations, too. During this era of rising economic security, both public and private policymakers assumed that a dynamic capitalist economy required a basic foundation of protection against economic risks.
That changed during the economic and political turmoil of the late 1970s. With the economy becoming markedly more unequal and conservatives gaining political ground, many policy elites began to emphasize a different credo — one premised on the belief that social insurance was too costly and inefficient and that individuals should be given “more skin in the game” so they could manage and minimize risks on their own. Politicians began to call for greater “personal responsibility,” a dog whistle that would continue to sound for decades.
Instead of guaranteed pensions, these policymakers argued, workers should have tax-favored retirement accounts. Instead of generous health coverage, they should have high-deductible health plans. Instead of subsidized child care or paid family leave, they should receive tax breaks to arrange for family needs on their own. Instead of pooling risks, in short, companies and government should offload them.
The transformation of America’s retirement system tells the story in miniature. Thirty years ago, most workers at larger firms received a guaranteed pension that was protected from market risk. These plans built on Social Security, then at its peak. Today, such “defined-benefit” pensions are largely a thing of the past. Instead, private workers lucky enough to get a pension receive “defined-contribution” plans such as 401(k)s — tax-favored retirement accounts, first authorized in the early 1980s, that don’t require contributions and don’t provide guaranteed benefits. Meanwhile, Social Security has gradually declined as a source of secure retirement income for workers even as private guaranteed retirement income has been in retreat.
The results have not been pretty. We will not be able to assess the full extent of the change until today’s youngest workers retire. But according to researchers at Boston College, the share of working-age households at risk of being financially unprepared for retirement at age 65 has jumped from 31 percent in 1983 to more than 53 percent in 2010. In other words, more than half of younger workers are slated to retire without saving enough to maintain their standard of living in old age.
Guaranteed pensions have not been the only casualty of the Great Risk Shift. At the same time as employers have raced away from safeguarding retirement security, health insurance has become much less common in the workplace, even for college-educated workers. Indeed, coverage has risen in recent years only because more people have become eligible for Medicare and Medicaid and for subsidized plans outside the workplace under the Affordable Care Act. As late as the early 1980s, 80 percent of recent college graduates had health insurance through their job; by the late 2000s, the share had fallen to around 60 percent. And, of course, the drop has been far greater for less educated workers.
In sum, corporate retrenchment has come together with government inaction — and sometimes government retrenchment — to produce a massive transfer of economic risk from broad structures of insurance onto the fragile balance sheets of American households. Rather than enjoying the protections of insurance that pools risk broadly, Americans are increasingly facing economic risks on their own, and often at their peril.
by Jacob S. Hacker, TPM | Read more:
Image: Christine Frapech/TPM
Unfair Advantage
Every year Americans make more and more purchases online, many of them at Amazon.com. What shoppers don’t see when browsing the selections at Amazon are the many ways the online store is transforming the economy. Our country is losing small businesses. Jobs are becoming increasingly insecure. Inequality is rising. And Amazon plays a key role in all of these trends.
Stacy Mitchell believes Amazon is creating a new type of monopoly. She says its founder and CEO, Jeff Bezos, doesn’t want Amazon to merely dominate the market; he wants it to become the market.
Amazon is already the world’s largest online retailer, drawing so much consumer Web traffic that many other retailers can compete only by becoming “Amazon third-party sellers” and doing business through their competitor. It’s a bit like the way downtown shops once had to move to the mall to survive — except in this case Amazon owns the mall, monitors the other businesses’ transactions, and controls what shoppers see.
From early in her career Mitchell has focused on retail monopolies. During the 2000s she researched the predatory practices and negative impacts of big-box stores such as Walmart. Her 2006 book, Big-Box Swindle: The True Cost of Mega-Retailers and the Fight for America’s Independent Businesses, documented the threat these supersized chains pose to independent local businesses and community well-being. (stacymitchell.com)
Now Amazon is threatening to overtake Walmart as the biggest retailer in the world. Mitchell says she occasionally shops at Amazon herself, when there’s something she can’t find locally, but this hasn’t stopped her from being a vocal critic of the way the company uses its monopoly power to stifle competition. She’s among a growing number of advocates who are calling for more vigorous enforcement of antitrust laws. (...)
Frisch: Many consumers welcome Amazon as a wonderful innovation that makes shopping more convenient, but you say the corporation has a “stranglehold” on commerce. Why?
Mitchell: Without many of us noticing, Amazon has become one of the most powerful corporations in the U.S. It is common to talk about Amazon as though it were a retailer, and it certainly sells a lot of goods — more books than any other retailer online or off, and it will soon be the top seller of clothing, toys, and electronics. One of every two dollars Americans spend online now goes to Amazon. But to think of Amazon as a retailer is to miss the true nature of this company.
Amazon wants to control the underlying infrastructure of commerce. It’s becoming the place where many online shoppers go first. Even just a couple of years ago, most of us, when we wanted to buy something online, would type the desired product into a search engine. We might search for New Balance sneakers, for example, and get multiple results: sporting-goods stores, shoe stores, and, of course, Amazon. Today more than half of shoppers are skipping Google and going directly to Amazon to search for a product. This means that other companies, if they want access to those consumers, have to become sellers on Amazon. We’re moving toward a future in which buyers and sellers no longer meet in an open public market, but rather in a private arena that Amazon controls.
From this commanding position Amazon is extending its reach in many directions. It’s building out its shipping and package-delivery infrastructure, in a bid to supplant UPS and the U.S. Postal Service. Its Web-services division powers much of the Internet and handles data storage for entities ranging from Netflix to the CIA. Amazon is producing hit television shows and movies, publishing books, and manufacturing a growing share of the goods it sells. It’s making forays into healthcare and finance. And with the purchase of Whole Foods, it’s beginning to extend its online presence into the physical world. (...)
Frisch: We hear a lot about the power of “disruptive” ideas and technologies to transform our society. Amazon seems like the epitome of a disrupter.
Mitchell: Because Amazon grew alongside the Internet, it’s easy to imagine that the innovations and conveniences of online shopping are wedded to it. They aren’t. Jeff Bezos would prefer that we believe Amazon’s dominance is the inevitable result of innovation, and that to challenge the company’s power would mean giving up the benefits of the Internet revolution. But history tells us that when monopolies are broken up, there’s often a surge of innovation in their wake.
Frisch: You don’t think e-commerce in itself is a problem?
Mitchell: No. There’s no reason why making purchases through the Internet is inherently destructive. I do think a world without local businesses would be a bad idea, because in-person, face-to-face shopping generates significant social and civic benefits for a community. But lots of independent retailers have robust e-commerce sites, including my local bookstore, hardware store, and several clothing retailers. Being online gives customers another way to buy from them. We can even imagine a situation in which many small businesses might sell their wares on a single website to create a full-service marketplace. It wouldn’t be a problem as long as the rules that govern that website are fair, the retailers are treated equally, and power isn’t abused.
Frisch: But that’s not the case with Amazon?
Mitchell: No. As search traffic migrates to Amazon, independent businesses face a Faustian bargain: Do they continue to hang their shingle on a road that is increasingly less traveled or do they become an Amazon seller? It’s no easy decision, because once you become a third-party seller, 15 percent of your revenue typically goes to Amazon — more if you use their warehouse and fulfillment services. Amazon also uses the data that it gleans from monitoring your sales to compete against you by offering the same items. And it owns the customer relationship, particularly if you use Amazon’s fulfillment services — meaning you store your goods in its warehouses and pay it to handle the shipping. In that case, you cannot communicate with your customer except through Amazon’s system, and Amazon monitors those communications. If you go out of bounds, it can suspend you as a seller.
Frisch: What’s out of bounds? Let’s say a customer wants to know which product would be better, A or B. Can a seller tell them?
Mitchell: You’re allowed to respond to that question, but if, in the process of responding, you violate Amazon’s rules, you can be suspended from Amazon and see your livelihood disappear. An example of this is a small company that made custom-designed urns for ashes.
Frisch: For people who’ve been cremated?
Mitchell: Yes. They sold these urns through their website and also through Amazon. A customer contacted the urn maker through Amazon to ask about engraving. The company responded truthfully that there was no way to place an order for engraving through Amazon, but it could be done through the company’s website. Within twenty-four hours the urn maker got slapped down by Amazon. The rules for third-party sellers say you can never give a customer a URL, because Amazon does not want that customer going anywhere else — even in a case where Amazon can’t provide what the customer wants.
An independent retailer’s most valuable assets are its knowledge of products and ability to spot trends. Once you become a seller on Amazon, you forfeit your expertise to them. They use your sales figures to spot the latest trends. Researchers at Harvard Business School have found that when you start selling through Amazon, within a short time Amazon will have figured out what your most popular items are and begun selling them itself. Amazon is now producing thousands of products, from batteries to blouses, under its own brands. It’s copying what other companies are selling and then giving its own products top billing in its search results. For example, a company called Rain Design in San Francisco made a popular laptop stand and built a business selling it through Amazon. A couple of years ago Rain Design found that Amazon had introduced a nearly identical product. The only difference is that the company’s raindrop logo had been swapped for Amazon’s smiling arrow. (...)
Frisch: You’ve characterized Amazon as a throwback to the age of the robber barons. How so?
Mitchell: The robber barons were nineteenth-century industrialists who dominated industries like oil and steel. During the Gilded Age, toward the end of the nineteenth century, these industrialists gained control of a technology that was opening up a new way of doing business: the railroad. They used their command of the rails to disadvantage their competitors. John D. Rockefeller, who ran Standard Oil, for example, conspired with the railroad magnate Cornelius Vanderbilt to charge competing oil companies huge sums to ship their product by rail. The first antitrust laws were written in response to industrialists’ attempts to control access to the market.
It’s striking how similar this history is to what Amazon has done: a new technology comes along that gives people a novel way to bring their wares to market, but a single company gains control over it and uses that power to undermine competitors and create a monopoly.
Amazon sells nearly half of all print books and has more than 80 percent of the e-book market. That’s enough to make it a gatekeeper: if Amazon suppresses a book in its search results or removes the book’s BUY button, as it has done during disputes with certain publishers, it causes that book’s sales to plummet. That is a monopoly.
Frisch: When did the Gilded Age monopolies get broken up?
Mitchell: A turning point came in the 1930s, during Franklin D. Roosevelt’s second term as president. Roosevelt concluded that corporate concentration was impeding the economy by closing off opportunity and slowing job and wage growth. So he set about dusting off the nation’s antitrust policies and using them to go after monopolies. This aggressive approach lasted for decades. Republican and Democratic presidents alike talked about the importance of fighting monopolies.
Then in the 1970s a group of legal and economic scholars, led by Robert Bork, argued that corporate consolidation should be allowed to go unchecked as long as consumer prices stayed low. The Reagan administration embraced this view. Under Reagan the antitrust laws were left intact, but the way the antitrust agencies interpreted and enforced the laws was radically altered. Antitrust policy was stripped of its original purpose and power. Subsequent administrations, including Democrats, followed suit.
All of the concerns that used to drive antitrust enforcement have collapsed into a single concern: low prices. But we aren’t just consumers. We’re workers who need to earn a living. We’re small businesspeople. We’re innovators and inventors. As the economy has grown more consolidated, with fewer and fewer companies dominating just about every industry, one consequence is lower wages. Economic consolidation means workers have fewer options for employment. This appears to be a big reason why wages have been stagnant now for decades. We should also remember that our antitrust laws, at their heart, are about protecting democracy. Amazon shouldn’t be allowed to decide which books succeed or fail, which companies are allowed to compete. (...)
Frisch: Before you took on Amazon, you helped galvanize community opposition to Walmart. Why should people be against the big-box retailer coming to their town?
Mitchell: Walmart’s pitch to communities is always that it will offer low prices and create jobs and tax revenue. Particularly for smaller communities, this seems like a great deal. But an overwhelming majority of research has found that Walmart is much more of an extractive force. Poverty actually rises in places where Walmart opens a store.
Independent businesses, on the other hand, help communities thrive, because they buy many goods and services locally. When a small business needs an accountant, it’s likely to hire someone nearby. When it needs a website, it hires a local web designer. It banks at the local bank and advertises on the local radio station. It also tends to carry more local and regional products. An independent bookstore, for example, might feature local authors prominently.
Economic relationships often involve other types of relationships, too. When you shop at a small business, you’re dealing with your neighbors. You’re buying from someone whose kids go to school with your kids. That matters for the health of communities.
When Walmart comes in, it systematically wipes out a lot of those relationships. Instead of circulating locally, most dollars spent at the Walmart store leave the community. You’re left with fewer jobs than you had to start with, and they’re low-wage positions.
by Tracy Frisch and Stacy Mitchell, The Sun | Read more:
Image: uncredited
Tech to Blame for Ever-Growing Repair Costs
It's hard to remove a part from a new car without coming across a wire attached to it. As tech grows to occupy every spare corner of the car, many buyers might not realize that all that whiz-bang stuff is going to make collision repair an absolute bear.
Even seemingly small damage to a vehicle’s front end can cost nearly $3,000 to repair, according to new research from AAA. The study looked at three solid sellers in multiple vehicle segments, including a small SUV, a midsize sedan and a pickup truck, and estimated repair costs using original equipment list prices and an established average for technician labor rates.
Let's use AAA's examples for some relatable horror stories. Mess up your rear bumper? Well, if you have ultrasonic parking sensors or radar back there, it could cost anywhere from $500 to $2,000 to fix. Knock off a side mirror equipped with a camera as part of a surround-view system? $500 to $1,100. (...)
AAA wasn't the first group to realize how nuts these costs can get. On a recent episode of Autoline, a CEO of a nonprofit focused on collision repair education pointed out that a front-corner collision repair on a Kia K900 could cost as much as $34,000. Sure, it's a low-production luxury sedan, but is anyone truly ready to drop $34,000 on a car that starts around $50,000?
by Andrew Krok, CNET | Read more:
Image: AAA
Thursday, October 25, 2018
Nominating Oneself for the Short End of a Tradeoff
I’ve gotten a chance to discuss The Whole City Is Center with a few people now. They remain skeptical of the idea that anyone could “deserve” to have bad things happen to them, based on their personality traits or misdeeds.
These people tend to imagine the pro-desert faction as going around, actively hoping that lazy people (or criminals, or whoever) suffer. I don’t know if this passes an Intellectual Turing Test. When I think of people deserving bad things, I think of them having nominated themselves to get the short end of a tradeoff.
Let me give three examples:
1. Imagine an antidepressant that works better than existing antidepressants, one that consistently provides depressed people real relief. If taken as prescribed, there are few side effects and people do well. If ground up, snorted, and taken at ten times the prescribed dose – something nobody could do by accident, something you have to really be trying to get wrong – it acts as a passable heroin substitute, you can get addicted to it, and it will ruin your life.
The antidepressant is popular and gets prescribed a lot, but a black market springs up, and however hard the government works to control it, a lot of it gets diverted to abusers. Many people get addicted to it and their lives are ruined. So the government bans the antidepressant, and everyone has to go back to using SSRIs instead.
Let’s suppose the government is being good utilitarians here: they calculated out the benefit from the drug treating people’s depression, and the cost from the drug being abused, and they correctly determined the costs outweighed the benefits.
But let’s also suppose that nobody abuses the drug by accident. The difference between proper use and abuse is not subtle. Everybody who knows enough to know anything about the drug at all has heard the warnings. Nobody decides to take ten times the recommended dose of antidepressant, crush it, and snort it, through an innocent mistake. And nobody has just never heard the warnings that drugs are bad and can ruin your life.
Somebody is going to get the short end of the stick. If the drug is banned, depressed people will lose access to relief for their condition. If the drug is permitted, recreational users will continue having the opportunity to destroy their lives. And we’ve posited that the utilitarian calculus says that banning the antidepressant would be better. But I still feel, in some way, that the recreational users have nominated themselves to get the worse end of this tradeoff. Depressed people shouldn’t have to suffer because you see a drug that says very clearly on the bottle “DO NOT TAKE TOO MUCH OF THIS YOU WILL GET ADDICTED AND IT WILL BE TERRIBLE” and you think “I think I shall take too much of this”.
(this story is loosely based on the history of tianeptine in the US)
2. Suppose you’re in a community where some guy is sexually harassing women. You tell him not to and he keeps doing it, because that’s just the kind of guy he is, and it’s unclear if he can even stop himself. Eventually he does it so much that you kick him out of the community.
Then one of his friends comes to you and says “This guy harassed one woman per month, and not even that severely. On the other hand, kicking him out of the community costs him all of his friends, his support network, his living situation, and his job. He is a pretty screwed-up person and it’s unclear he will ever find more friends or another community. The cost to him of not being in this community is actually greater than the cost of being harassed is to a woman.”
Somebody is going to have their lives made worse. Either the harasser’s life will be worse because he’s kicked out of the community. Or women’s lives are worse because they are being harassed. Even if I completely believe the friend’s calculation that kicking him out will bring more harm on him than keeping him would bring harm to women, I am still comfortable letting him get the short end of the tradeoff.
And this is true even if we are good determinists and agree he only harasses somebody because of an impulse control problem secondary to an underdeveloped frontal lobe, or whatever the biological reason for harassing people might be.
(not going to bring up what this story is loosely based on, but it’s not completely hypothetical either)
3. Sometimes in discussions of basic income, someone expresses concern that some people’s lives might become less meaningful if they didn’t have a job to give them structure and purpose.
And I respond “Okay, so those people can work, basic income doesn’t prohibit you from working, it just means you don’t have to.”
And they object “But maybe these people will choose not to work even though work would make them happier, and they will just suffer and be miserable.”
Again, there’s a tradeoff. Somebody’s going to suffer. If we don’t grant basic income, it will be people stuck in horrible jobs with no other source of income. If we do grant basic income, it will be people who need work to have meaning in their lives, but still refuse to work. Since the latter group has a giant door saying “SOLUTION TO YOUR PROBLEMS” wide open in front of them but refuses to take it, I find myself sympathizing more with the former group. That’s true even if some utilitarian were to tell me that the latter group outnumbers them.
I find all three of these situations joining the increasingly numerous ranks of problems where my intuitions differ from utilitarianism. What should I do?
One option is to dismiss them as misfirings of the heuristic “expose people to the consequences of their actions so that they are incentivized to make the right action”. I’ve tried to avoid that escape by specifying in each example that even when they’re properly exposed and incentivized the calculus still comes out on the side of making the tradeoff in their favor. But maybe this is kind of like saying “Imagine you could silence this one incorrect person without any knock-on effects on free speech anywhere else and all the consequences would be positive, would you do it?” In the thought experiment, maybe yes; in the real world this either never happens, or never happens with 100% certainty, or never happens in a way that’s comfortably outside whatever Schelling fence you’ve built for yourself. I’m not sure I find that convincing because in real life we don’t treat “force people to bear the consequences of their action” as a 100% sacred principle that we never violate.
Another option is to dismiss them as people “revealing their true preferences”, eg if the harasser doesn’t stop harassing women, he must not want to be in the community too much. But I think this operates on a really sketchy idea of revealed preference, similar to the Caplanian one where if you abuse drugs that just means you like drugs so there’s no problem. Most of these situations feel like times when that simplified version of preferences breaks down.
A friend reframes the second situation in terms of the cost of having law at all. It’s important to be able to make rules like “don’t sexually harass people”, and adding a clause saying “…but we’ll only enforce these when utilitarianism says it’s correct” makes them less credible and creates the opportunity for a lot of corruption. I can see this as a very strong answer to the second scenario (which might be the strongest), although I’m not sure it applies much to the first or third.
I could be convinced that my desire to let people who make bad choices nominate themselves for the short end of tradeoffs is just the utilitarian justifications (about it incentivizing behavior, or it revealing people’s true preferences) crystallized into a moral principle. I’m not sure if I hold this moral principle or not. I’m reluctant to accept the ban-antidepressant, tolerate-harasser, and repeal-basic-income solutions, but I’m also not sure what justification I have for not doing so except “Here’s a totally new moral principle I’m going to tack onto the side of my existing system”.
But I hope people at least find this a more sympathetic way of understanding when people talk about “desert” than a caricatured story where some people just need to suffer because they’re bad.
by Scott Alexander, Slate Star Codex | Read more:
[ed. I don't know what Scott's been doing in psychiatry these days since moving to SF, but his blog has benefited greatly. See also: Cognitive Enhancers: Mechanisms and Tradeoffs.]
Wednesday, October 24, 2018
Uber's Secret Restaurant Empire
via: (Bloomberg)
Innovation Under Socialism
I have friends who revel in arriving in a place and immediately investigating the neighborhood’s shortcuts, jogging down paths without a destination, wandering down wayward trails just to see where they lead. For those whose thirst for adventure is complemented by a healthy dose of spatial awareness and cognition, discovery is a thrill. Personally, I cannot relate to any of this. Nothing means less to me than the orientation of the sunrise and sunset. Your cardinal points are wasted on me, for I am a person endowed with no sense of direction whatsoever. Throw in any language other than my native fluency in French and English, along with a flailing Spanish, and my demise is guaranteed. Yet, in recent years, I have felt confident enough to explore places where I had never been before without knowing the local official language. In all this, my saving grace has been my iPhone—the powerful pocket-sized computer whose mapping and translating superpowers have convinced me almost no place is out of my reach. I’ll say it: I am a socialist and I love my iPhone.
This confession is music to the ears of the “capitalism made your iPhone” club. Indeed, proponents of capitalism often brandish rapid innovation as if it were an automatic checkmate on collectivist socioeconomic ideologies. To them, modern technology proves not only that capitalism works, but that it is the best system to stimulate innovation. The subtext of their retort is that a socialist economy could never generate technology this advanced. When coupled with a defense of “thought leaders” as obscenely rich as Steve Jobs, Elon Musk, and Jeff Bezos, their argument also contends that concentrating capital and power in the hands of a few billionaires is a small price to pay for the astronomical leaps in innovation from which we all benefit.
Capitalism’s fan base is not wrong that the iPhone, first released in 2007, is a product of America’s fiercely capitalist economy. I will also concede that without the vision of Steve Jobs, Apple’s late CEO and the 110th richest person in the world at his death, there would be no iPhone as we know it (although it is worth noting that the army of engineers and developers whose labor actually produced the iPhone might have come up with an equally wonderful smartphone). Nonetheless, their perspective is deeply misguided. It manages to both underestimate how much capitalism stifles innovation and misunderstand how much the fundamentals of a socialist economy make it the better system for stimulating innovation.
Innovation describes a four-step process that creates or ameliorates a thing or way of doing things. It begins with invention, the design of a device or process that did not previously exist in this form. The invention is then developed, meaning that it is improved with an eye towards eventual scaling, exchange or introduction on a market, and external use by others. At the production stage, the invention is built or reproduced. Finally, the invention is distributed to a wider audience. In our present economy, a minority of the innovation process happens at the individual level, from lonesome inventors and modern Benjamin Franklins who are able to conjure all sorts of contraptions in their garage. The majority, however, results from research and development (R&D) paid for by private firms, and by the public through government agencies, research institutions, and other recipients of federal and state funding.
The profit motive and exclusive proprietary rights are central to capitalist innovation. By law, private firms must prioritize the interests of their shareholders, which tends to be interchangeable with making as much money as possible. Accordingly, investments in any stage of the innovative process must eventually produce profits. To maximize profit, private firms jealously guard the value of their inventions through regulations and restrictive contracts. Statutes and regulations help protect their trade secrets. The U.S. Patent and Trademark Office routinely grants them utility and design patents that “exclude others from making, using, offering for sale, or selling … or importing the invention” for twenty years after the patent is issued. They enforce licensing agreements that can limit the uses and dissemination of all or part of their inventions. To further frustrate efforts to innovate on the back of their inventions, private firms subject their former employees to non-compete agreements that can severely restrict them from using their knowledge and skills on competing projects for a period following their departure. Breaches carry dire consequences like expensive lawsuits, big money judgments, and other enormous hassles.
By contrast, the public sector innovates under an academic model instead of for profit. Certainly, earning tenure or an executive position can be lucrative. In some industries, a revolving door gives individuals the opportunity to innovate in both the private and public sectors throughout their careers. However, innovation in this area is less motivated by extracting profit, and more so by signifiers of prestige, career appointments, recognition, publication, project funding, and prizes.
The capitalist model has its perks. At present, private firms raise massive amounts of capital from the government to fund research, but also from banks, private equity, and wealthy donors. This vast amount of capital can prove lucrative for certain classes of workers. Innovative talent might accumulate wealth through generous compensation packages, which play an important role in attracting and retaining them.
Private firms also boast a terrifying nimbleness that allows them to push projects and respond to change faster than government institutions. For instance, in the absence of unions and norms against firing workers at will (beyond the standard prohibitions against discriminatory practices), firms can turn over staff quickly as their industry shifts. In other words, without the regulatory and administrative constraints that saddle publicly funded projects, private firms can move through the innovative process faster.
Another advantage of the capitalist model is that profits—potential and actual—provide some measure of how well a company is innovating. In particular, for the many private firms that sell some of their shares to the public on stock exchanges, prices serve as a form of feedback from investors and the market. Imagine that a publicly traded retailer announces the imminent launch of an affordable, solar-powered computer that boasts power and speed to rival Apple’s newest models. In the hours following the press release, the retailer’s stock value triples. A week later, while at a tech conference in the Colorado mountains, the retailer’s CEO lets it slip that the first prototype will actually retail for about four thousand dollars. Unfortunately for the CEO, he was wearing a hot mic. The quote is made public in an article titled “No debt-saddled, environmentally-conscious millennial will shell out $4,000 for a computer!” The stock value immediately plummets, erasing its 200 percent gain.
The original rise in the retailer’s share value communicates that investors believe in the product as a profitable enterprise, and that they see this type of innovation as a worthwhile pursuit. The drop, on the other hand, suggests that they believe this specific product would be more marketable and therefore more profitable if it were developed for an audience beyond high-end consumers. The turn in the stock value can embolden the retailer—through its management, Board of Directors, or shareholders—to revisit its plan to innovate. It also signals to competitors that their innovation of a similar product could be well received, especially if they can overcome the original product’s weaknesses.
But prioritizing profit is a double-edged sword that can hamper innovation. Owning the proprietary rights allows private firms to use anti-competitive tools like non-compete agreements, patents, and licenses to block the workers who put labor into the innovation process from applying their extensive technical expertise and intimate understanding of the product to improve it substantially. This becomes especially relevant once those workers leave the division in which they worked, or leave the firm altogether. Understandably, this lack of control and ownership will make some workers, however passionate they may be about a project, less willing to maximize their contribution to the innovation.
Of course, the so-called nimbleness that allows firms to make drastic changes like mass layoffs is extremely harmful to the workers. This is no fluke. The capitalist economy thrives on a reserve army of labor. Inching closer to full employment makes workers scarcer, which empowers the labor force as a whole to bargain for higher wages and better working conditions. These threaten the firm’s bottom line. So the capitalist economy is structured to tilt the balance of power toward the owners of capital. Positions that pay well (and less than well) come with the precariousness of at-will employment and disappearing union power. A constant pool of unemployed labor is maintained through layoffs and other tactics like higher interest rates, which the government deploys to slow growth and, with it, hiring. This system harms the potential for innovation, too.
The fear of losing work can dissuade workers from taking risks, experimenting, or speaking up when they identify ways to improve a given approach—all actions that foster innovation. Meanwhile, thousands of individuals who could be contributing to the innovative process are instead involuntarily unemployed. This model also encourages monopolization, as concentrating market power gives private firms the most control over how much profit they can extract. But squashing competition that could contribute fresh ideas hurts every phase of the innovation process, while leaving workers in fewer and fewer workplaces with any space to innovate.
Deferring to profit causes many areas of R&D to go unexplored. Private firms have less reason to invest in innovations likely to be made universally available for free if managers or investors do not see much upside for the firm’s bottom line. In theory, the slack in private research can be picked up by the public sector. In reality, however, decades of austerity measures threaten the public’s ability to underwrite risky and inefficient research. Both the Democratic and Republican parties increasingly adhere to a neoliberal ideology that vilifies “big government,” promotes running government like a business, pretends that government budgets should mirror household budgets or the private firm’s balance sheet, and rams through privatization under the guise of so-called public-private partnerships and private subcontractors.
In the United States, public investment in R&D has been trending downward. As documented in a 2014 report from the Information Technology & Innovation Foundation, “[f]rom 2010 to 2013, federal R&D spending fell from $158.8 to $133.2 billion … Between 2003 and 2008, state funding for university research, as a share of GDP, dropped on average by 2 percent. States such as Arizona and Utah saw decreases of 49 percent and 24 percent respectively.” Even if public investment in the least profitable aspect of research suddenly surged, in our current model, the private sector continues to be the primary driver of development, production, and distribution. Where there remains little potential for profit, private firms will be reluctant to advance to the next phases of the innovation process. Public-private projects raise similar concerns. Coordinated efforts can increase private investment by spreading some costs and risk to the public. But to attract private partners in the first place, the public sector has a greater incentive to prioritize R&D projects with more financial upsides.
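[ed. A quick back-of-envelope check of the federal figures quoted above, written as a minimal Python sketch; the dollar amounts are those cited from the ITIF report, and the calculation itself is purely illustrative.]

# Implied decline in federal R&D spending, using the figures quoted
# from the 2014 ITIF report (in billions of dollars).
spending_2010 = 158.8
spending_2013 = 133.2

drop = spending_2010 - spending_2013
pct_drop = drop / spending_2010 * 100

print(f"Federal R&D fell by ${drop:.1f} billion between 2010 and 2013, "
      f"a decline of roughly {pct_drop:.0f} percent.")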
This is how the quest for profits and a tight grip over proprietary rights, both important features of the capitalist model, discourage risk. Innovations are bound to plateau after a few years, as firms increasingly favor minor aesthetic tweaks and updates over bold ideas while preventing other avenues of innovation from blossoming. At the same time, massive amounts of capital continue to flow into the hands of a few. The price of innovating under capitalism is then both decreased innovation and decreased equality. The idea that this approach to innovation must be our best and only option is a delusion.
As I see it, four ingredients are key to kindling innovation. First, there must be problems requiring solutions (an easy one to meet). Second, there must be capital and resources available to invent, develop, produce, and distribute the innovative product. Third, there must be actual human beings available to participate in every phase of the innovation process. And fourth, at least some of these human beings must have the creativity and motivation to participate in the innovation process. The question isn’t really whether a socialist economy can provide these four ingredients at all (it can) but rather whether it can innovate better than a capitalist economy (it can).
by Vanessa Bee, Current Affairs | Read more:
Image: uncredited
Labels:
Business,
Economics,
Government,
Politics,
Technology
How Fentanyl Took over Pennsylvania
The first time Nicki Saccomanno used fentanyl, she overdosed.
It was 2016, and the 38-year-old from Kensington hadn't known that the drugs she'd bought had been cut with the deadly synthetic opioid. She just remembers injecting herself with a bag, and then waking up surrounded by paramedics frantically trying to revive her.
Saccomanno, who has been addicted to heroin for 10 years, was shaken. But, before long, there was barely anything else to take but fentanyl to stave off the intense pain of withdrawal. Every corner, it seemed, was selling it. Saccomanno and other longtime heroin users found themselves forced to adapt.
For younger users, like the twentysomethings who live in the camps off Lehigh Avenue, fentanyl is all they've ever known. Like others before them, many graduated from using legal painkillers to illicit opioids in the last few years — except when they turned to the streets to feed their addictions, they were buying a drug much more powerful than their older counterparts had started on.
Young and old are paying for it with their lives. Fentanyl was present in 84 percent of Philadelphia's 1,217 fatal overdoses last year, and in 67 percent of the state's 5,456 overdose deaths in 2017, according to a wide-ranging report on the state of the opioid crisis in Pennsylvania released this month by the U.S. Drug Enforcement Administration.
The report shows how, over the last five years, the opioid crisis ballooned into an overdose crisis — how fentanyl contaminated the state's heroin supply, overwhelmed county morgues with overdose victims, and shocked advocates, people in addiction, and law-enforcement officials alike with its sudden ubiquity.
But to all of them, the explosion of fentanyl makes a kind of terrible sense: Fentanyl is significantly cheaper to produce than heroin. It draws a significantly larger profit. It's significantly more powerful and more addictive than heroin, even Kensington's supply, which has long been known as the cheapest and purest in the country.
These days, Saccomanno uses a combination of heroin and fentanyl, even though she hates it.
"You get sicker," she said. "You need to get more fentanyl more often. It makes being able to get well and stay well even harder. But you can't find anything else."
‘A dramatic shift’
Pure economics.
That's what law-enforcement officials say is driving the rise of fentanyl in Pennsylvania.
It has a legitimate use as a drug to treat serious pain, like that of cancer patients, and has been on the illicit drug market for at least 15 years, said Pat Trainor, spokesman for the Philadelphia branch of the DEA. But it mostly turned up in unusual rashes of overdoses and would then disappear from the scene again.
"Two or three years ago, we really saw a pretty dramatic shift," Trainor said. "It was initially seen as a cut or an adulterant in low-quality heroin, and it's really shifted now that it's pretty much largely — but not completely — replaced most of the heroin supply in Philadelphia."
In Philadelphia, he said, a kilogram of heroin, or 2.2 pounds, sells for $50,000 to $80,000, and a drug trafficker can make about $500,000 in profit off it. A kilogram of fentanyl sells for $53,000 to $55,000, is 50 to 100 times stronger, and can turn a profit of up to $5 million.
"For a lot of drug-trafficking organizations, it's that simple," said Trainor.
Most of the fentanyl that ends up in Pennsylvania is manufactured in China and smuggled through Mexican drug-trafficking organizations into the United States along the same routes used to traffic heroin, according to the DEA report.
People have also tried to make it closer to home, however. Unlike heroin, which is derived from opium poppies, fentanyl and its analogues can be produced in a lab. Earlier this year, DEA agents raided what they thought was a methamphetamine lab in a hotel room in western Pennsylvania. To their surprise, it turned out that the room's occupant had been trying to make fentanyl.
Seeking out fentanyl
Earlier this year, researchers from the Philadelphia Department of Public Health, conducting a survey of opioid users at Kensington's needle exchange, posed a question to 400 people in active addiction.
They knew that most of the city's heroin supply had already been tainted with fentanyl, and wanted to know how people in addiction were reacting. And so they asked drug users what they would do if they knew that fentanyl was in the drugs they were buying.
The answers they received shocked them. Of the drug users the Health Department surveyed, 45 percent told researchers that they weren't trying to avoid fentanyl at all — that they would be more likely to use a bag of fentanyl.
"There was more acceptance — it had become part of the community in a way it hadn't been initially. It was actually something people were going for because it was an enhanced high," said Kendra Viner, manager of the department's Opioid Surveillance Program. "And people between 25 and 34 years old were significantly more likely to say they would seek out fentanyl."
by Aubrey Whelan, Philadelphia Inquirer | Read more:
Image: John Duchneskie
Eight Reasons a Financial Crisis is Coming
It's been about 10 years since the last financial crisis. FocusEconomics wants to know if another one is due.
The short answer is yes.
In the last 10 years not a single fundamental economic flaw has been fixed in the US, Europe, Japan, or China.
The Fed was behind the curve for years, contributing to the bubble. Massive rounds of QE in the US, EU, and Japan created extreme equity and junk bond bubbles.
Trump's tariffs are ill-founded, as is Congressional spending wasted on war.
Potential Catalysts
- Junk Bond Bubble Bursting
- Equity Bubble Bursting
- Italy
- Tariffs
- Brexit
- Pensions
- Housing
- China
by Mike "Mish" Shedlock, MishTalk | Read more:
Image: uncredited
[ed. See also: Smoot–Hawley Tariff Act (Prediction: you'll be hearing a lot about this in the coming few months). And: The Music Fades Out (John P. Hussman, Ph.D.)]