Friday, December 4, 2020
Living in the Present is Overrated
I’d been looking forward to the meal for weeks. I already knew what I was going to eat: the rosemary crostini starter, then the lamb with courgette fries. Or maybe the cod. I planned to arrive early and sit in the window at the cool marble counter and watch London go by. In the warm bustle of the restaurant, the condensation would mist the pane. As a treat, I would order myself a glass of white wine while I waited for my friend.
It won’t surprise you to hear that the meal never happened. Coronavirus cases started rising exponentially and eating out felt less like indulgence and more like lunacy. Then it became illegal to eat together at all. Soon it became illegal even to eat at a restaurant by yourself. Then everything shut.
The cost of these lost lunches has been totted up many times: the trains not taken, the taxis not flagged down, the desserts not eaten, the waiters not tipped. Then there is the emotional toll, too. Spirits are flagging, the lonely are getting lonelier, the world is wilting. Covid has already disrupted so much of how we live. It has altered something else, as well – time itself.
Not so long ago, we had merely months and years. Things happened in November or in December, last year or this. Some events are so big that they divide the world into before and after, into the present and an increasingly alien past. Wars do this, and the pandemic has, too. Coronavirus has cut a trench through time.
The very recent past is suddenly another country. Now, amateur archaeologists of our own existence, we sort through our possessions and stumble on small relics from “then”, that strange place we used to live: a bus pass, a lipstick, a smart watch, a pair of shoes with the heels worn down, work clothes that, after just six months in stretchy active-wear, feel as stiff and preposterous as whalebone.
News of vaccines fills us with hope. But the timing, the take-up, the roll-out to ordinary souls remain unresolved. The actual future still lies drearily in front of us, with the prospect of further lockdowns, overcrowded hospitals and ever greater financial losses. Days stretch on, each much the same as the last. One week blends into the next.
Amid these cancellations something else has also been lost. It won’t appear on any spreadsheet because it is not quantifiable. But it matters. So much of life, big and small, is about fleeting moments filled with hope. The prospect of an exciting Friday evening or Saturday afternoon used to make a dismal Tuesday morning bearable. So, too, did browsing online for your future self: the top that you’d always feel good in, the bag that would take both your laptop and book.
Hope hung everywhere in the old world, hovering in our peripheral vision – on the billboard that made us ponder our next holiday or reminded us to dig out dark glasses and sun cream; among the spices in the supermarket that conjured a conversation over curry with friends, chatting about things that didn’t feel like life and death.
Many moments of happiness are about anticipation, the joy of the imagined future – and distracting ourselves from the tedious, exhausting or difficult present. Yet even our small consumer choices or our musings about what to do this weekend now bring us back to the big, overpowering reality of the pandemic. We cannot escape it. Our daydreams have come crashing back to earth: 2020 is the year that the future was cancelled.
In recent decades the present has become rather more fashionable than the future. Living in the moment, being present in our present, is the desired mind-state of our age. There’s nothing new about the idea, of course – it forms the basis of Buddhism and there are elements of it in many religions. Long ago Horace commanded us to “carpe diem” and Seneca exhorted that the present is all we have: “All the rest of existence is not living but merely time.”
Over the past ten years the once-niche idea of “mindfulness” has gone mainstream. It has become an aspiration, an advertising opportunity and an overused adjective. You can practise not only mindful meditation but mindful breathing, mindful eating, mindful drinking, mindful walking, mindful parenting, even mindful birth. (As if childbirth were something that you might miss if you weren’t paying close enough attention.)
It isn’t always clear quite what mindfulness is. Despite its promise of mental clarity, its own origins are decidedly foggy. It seems to be a translation of a Buddhist term, sati, which itself is tricky to define – its meaning lies somewhere between memory and consciousness. The English version is neither a very good translation nor a particularly helpful word. The longer you think about it, the stranger the word “mindful” seems: that puzzling “-ful” feels odd when talking about emptying your thoughts. (And is its opposite “mindlessness”?)
If the definition of mindfulness is elusive, the practice is even more so. Its aim is to empty your mind by using your mind; to liberate it by restraining it. It is a puzzling and paradoxical thing, the mental equivalent of climbing up a ladder and removing it at the same time.
by Catherine Nixey, The Economist | Read more:
Image: Anna+Elena Balbusso
[ed. See also: anhedonia (inability to feel pleasure/anticipation); a classic feature of depression.]
Thursday, December 3, 2020
California Plans Sweeping Stay-at-Home Orders
Image: Frederic J Brown/AFP/Getty Images
Weekend Warriors: Using the Homeless to Guard Empty Houses
Wandering around Northwest Pasadena, I pressed my face against the window of a dingy pink stucco house at 265 Robinson Road. It was April, 2019, and in two blocks I had passed thirteen bungalows, duplexes, and multifamily homes that had gone through foreclosure in the past fifteen years. Twelve of them were still unoccupied. No. 265 had been in foreclosure for a year and a half, and the two small houses on the property had long sat empty. But now, inside the rear house, there was a gallon jug of water and a bag of peanuts on a Formica kitchen counter. The walls were a mangy taupe, but African-print sheets hung over the windows. As I walked away, I heard a genteel Southern accent from behind me: “Can I help you?” A Black man with perfect posture, wearing loafers and a black T-shirt tucked into belted trousers, introduced himself as Augustus Evans.
I wasn’t the first person to wonder what Evans was doing there. A few weeks earlier, two sheriffs had knocked on the door around 11 p.m. and handcuffed him. In his car’s glove compartment, they found a letter of employment and the cell-phone number of a woman named Diane Montano, who runs Weekend Warriors, a company that provides security for vacant houses. Like many of Montano’s employees, Evans was homeless when he was hired. Now he lives in properties that are being flipped, guarding them through the renovation, staging, open-house, and inspection periods. In the past seven years, he has protected more than twenty-two homes, in thirteen neighborhoods around Los Angeles, almost all historically Black and Latino communities. A McMansion in Fontana; a four-unit apartment complex in Compton; a “baby mansion on the peak of the mountain” in East L.A., which had been left to a son who, according to the neighbors, borrowed so much against the equity of the house that he lost it to foreclosure. Before leaving, he poured liquid cement down the drains. Evans guarded the property as the plumbing system was replaced.
Empty houses are a strange sight in an area that has one of the most severe housing shortages in the United States. L.A. has the highest median home prices, relative to income, and among the lowest homeownership rates of any major city, according to the U.C.L.A. Center for Neighborhood Knowledge. Renting isn’t any easier. The area has one of the lowest vacancy rates in the country, and the average rent is twenty-two hundred dollars a month. On any night, some sixty-six thousand people there sleep in cars, in shelters, or on the street, an increase of thirteen per cent since last year.
The housing shortage was caused, in part, by restrictive zoning, rampant nimbyism, and the use of California’s environmental laws to thwart urban development. In 1960, Los Angeles was zoned to house some ten million people. By 1990, decades of downzoning had reduced that number to 3.9 million, roughly the city’s current population. Then, in 2008, the subprime-mortgage crisis struck, and in the years that followed thousands of foreclosed homes were sold at auction. Because they had to be purchased in cash, many of them were bought by wealthy investors, private-equity-backed real-estate funds, and countless other real-estate companies, leaving less inventory for individual buyers. In the end, the 2008 crash made housing in California even more expensive.
No. 265, along with thousands of other homes in L.A., was acquired by Wedgewood, a real-estate company, founded in 1983, that specializes in flipping homes, managing everything from lockouts and financing to renovation and staging. In gentrifying neighborhoods, empty houses are sitting ducks, so companies like Wedgewood hire Weekend Warriors and other house-sitting services for cheap security. (...)
One morning, a customer told Evans that he supplemented his Social Security income by house-sitting for Weekend Warriors. There were two types of gigs, he explained: 7 p.m. to 7 a.m., which paid five hundred dollars a month, and 24/7, which paid eight hundred dollars. All you needed was an I.D. Evans called Diane Montano at around 10 a.m., and at 2 p.m. a van picked him up and took him to a house in Riverside.
The rules were simple: don’t leave, don’t host guests, and don’t talk to anyone—not contractors, property managers, real-estate agents, or prospective buyers. If you were working a 24/7, only short trips to the market or the laundromat were allowed. The premises had to be kept clean at all times, or pay would be docked. The driver supplied Evans with a mini-fridge, a small microwave, an inflatable mattress, and plastic floor coverings to protect the carpet.
The driver came by to check on Evans occasionally, always unannounced, photographing each room and sending the pictures to Montano, so that she could monitor Evans’s cleanliness and track the progress of the renovations. By the time Evans was living at No. 265, he had learned the rhythms of the gig. He knew that the driver wouldn’t come by at night or on Sundays. When he could, he’d steal out to Moreno Valley, an hour and twenty minutes away, to visit his sons. He kept loose change in a coffee cup in his car, and he’d give his youngest son all the coins he’d collected since his last visit. “They know Daddy has to work away from the house,” he told me. “They’re big boys now.”
by Francesca Mari, New Yorker | Read more:
Image: Ricardo Nagaoka for The New Yorker
Wednesday, December 2, 2020
New Effort to Pass Emergency Covid-19 Relief Bill
A bipartisan group of senators and members of the House unveiled a new $908 billion plan for emergency Covid-19 relief funding on Tuesday to extend unemployment benefits and small business loans.
The proposal comes after months of stalemate on stimulus talks, and during a critical time in the Covid-19 crisis. About 14 million Americans receiving unemployment benefits will see those programs expire at the end of the month unless Congress takes action, and cities and states around the country are also facing massive budget shortfalls.
This new proposal is a $908 billion package that repurposes $560 billion in unused funds from the CARES Act, the $2.2 trillion stimulus package that passed in March, meaning that this new proposal adds only $348 billion in new spending. It’s much smaller than the $2.2 trillion revised HEROES Act that House Democrats passed in October, but larger than the $500 billion Senate Republicans were proposing in October.
Two large sticking points in negotiations have been whether there should be another round of stimulus checks (a priority for Pelosi and Trump) and liability protections for businesses worried about being sued for exposing customers and workers to Covid-19 (a priority of McConnell’s). Republicans came out on top on both of these issues (at least in this initial proposal) — stimulus checks are not included in this new bipartisan proposal and the framework provided by Sen. Manchin’s office notes that the proposal will “provide short term Federal protection from Coronavirus related lawsuits with the purpose of giving states time to develop their own response.”
Both Pelosi and Trump have signaled support for another round of stimulus payments going out to working Americans.
Here’s what actually made it into the proposal:
- $160 billion for state, local, and tribal governments. For context, US cities alone are facing a $360 billion shortfall and are being forced to pursue austerity measures to balance their budgets. As Emily Stewart has reported for Vox, state budget shortfalls could exceed $500 billion. In other words, this money could be a drop in the bucket. State and local government woes have been lower on McConnell’s list of priorities — at one point he suggested that states declare bankruptcy.
- $180 billion in unemployment insurance (UI). The CARES Act gave unemployed Americans a weekly $600 lifeline on top of state unemployment insurance, a move widely regarded as staving off catastrophe for the millions of Americans who lost their jobs this year. As Dylan Matthews has reported for Vox, research has shown that “the average UI recipient is getting 134 percent of their previous salary,” and it may have temporarily lowered the poverty rate. This program expired in August, so any relief will be welcome for those still unemployed. Congress originally estimated that the UI program would cost $260 billion, which the Tax Policy Center viewed as an underestimate, so it’s likely this extension wouldn’t cover the full cost of unemployed workers’ needs. The Washington Post reported that this amount would cover an additional $300 a week for four months.
- $288 billion in support for small businesses. This support will partially come through the Paycheck Protection Program (PPP) and Economic Injury Disaster Loans. As Forbes has reported, an August 2020 survey from the US Census showed that almost 79 percent of small businesses reported being negatively affected by Covid-19.
- $25 billion in rental assistance. Notably, the new bipartisan proposal only provides for $25 billion in rental assistance even as economists are predicting that tenants could owe nearly $70 billion in back rent by year’s end. Vox has reported that policy experts and advocates have been pushing for $100 billion to be included in stimulus negotiations in order to prevent an eviction crisis that could impact as many as 40 million Americans.
What's up next for this proposal (...)
No new coronavirus aid package is going to get through the Senate without bipartisan support, so the new plan is a signal that Republicans and Democrats are indeed talking. Lawmakers supporting the plan emphasized on Tuesday that while each party is not going to get exactly what they want, their framework contains key points of agreement.
McConnell is circulating his own proposal among Senate Republicans, after he and House Minority Leader Kevin McCarthy met with White House officials on Tuesday to suss out what President Donald Trump wants to come out of a coronavirus relief deal. McConnell’s version of an emergency package is more limited, providing just a one-month extension of unemployment benefits, rather than the three-month extension in the bipartisan proposal.
by Ella Nilsen and Jerusalem Demsas, Vox | Read more:
Image: Tasos Katopodis/Getty Images
[ed. See also: The government’s failure to provide economic relief is killing people (Vox).]
What Lies Beyond Boredom: Post-Boredom
I have already started practising my small talk for Christmas. “Good, thanks. You?” I keep saying into a mirror, fully aware that in the past eight months I have more or less completely lost the ability to make conversation with humans. “What did I do with the time? Wow, the year has gone so quickly, hasn’t it?”
At this point I pause meaningfully because I know I have about two minutes of material to stretch over a five-day festive period with fewer people than usual, so I really need to make it last. “Let’s see, umm … got really into jigsaws for a bit. Rearranged the spare room into an office. Learned to make this one really good curry recipe from the BBC website. Uh … got 11 solo wins and about 24 duo wins on Fortnite.” Is that good?, they’ll ask, and I’ll have to admit that no, not particularly. “It’s a game for 12-year-olds that I play compulsively,” I’ll explain. “Every day I log in and let adolescents embarrass me in an online world that allows them to dance joyously on the remains of my corpse.” Oh, they’ll say. I think there’s something – I think there’s something happening in the other room. I really ought to…
I think it’s important to address the fact that I am bored. I am, to my bones, bored of this. I know that in the current climate, being bored is a high luxury, but it doesn’t make it any more thrilling. In fact, I am so deep into boredom that I have burrowed beneath the previously accepted boundaries of the concept, and have now emerged, apathetically, into post-boredom.
I never thought this would happen: if you had offered me, at the start of the year, the chance to sit inside for eight months chain-watching Netflix and not really going out or doing anything, and told me that being glued to my sofa would be reframed from a “sign of a life falling apart” to something I was doing “for the moral good of the country and the world as a whole”, I would have bitten your hand off for it.
I excel in inactivity. A squalid little part of me always imagined that I’d thrive in the ambient boredom of prison – not the gangs part of prison, or the crapping in a room with someone watching you part, or the shanking someone for some cigarette bit, or getting a pool cue cracked over me, but I really think I’d get some good letter-writing done. Lockdown has offered all the perks of prison (time) and none of the cons (prison), and yet what have I done with it? Watched part, but somehow still not all, of The Sopranos. That’s not really good enough.
This boredom is dangerous, because I’m not the only one experiencing it. Humans can only live in fear for so long, and I think, for a lot of us, being high-key scared of coronavirus wore off some time around June. Second lockdown has been a poor impersonation of the first one – no clapping, no supermarket queues, no Houseparty, The Undoing – but we wore through our boredom reserves and gnawed at the core of the human condition.
Though I think it’s psychologically ungreat for the biggest health threat of my lifetime to be reduced to a background hum of danger, an unseen force that just makes me swerve people in the corridors of my block of flats as I go downstairs for the post and not much else, it’s possibly even worse that we’ve worn boredom down to the bone. If we’ve worked through fear, and worked our way through boredom, what, really, is there left? Speaking only for myself – someone who mildly considered buying prescription orange-tinted glasses this week just to feel something – the answer can only be “chaos”.
by Joel Golby, The Guardian | Read more:
Image: Matthew Horwood/Getty Images
Five Lessons From Dave Chappelle
... the first lesson from Dave Chappelle’s latest release on Instagram, Unforgiven, is that one best not compete with Chappelle when it comes to story-telling; the way in which the comedian weaves together multiple stories from his childhood on up to the present to make his argument about why he should be paid for the rights to stream Chappelle’s Show is truly extraordinary.
To that end, I thought a more prosaic approach might be in order: Chappelle’s 18-minute special, which I highly suggest you watch in full, is chock-full of insights about how the Internet has transformed the entertainment industry specifically, and business broadly; my goal is to, in my own clumsy way, highlight and expand on those insights. (...)
Lesson Two: Talent in an Analog World
Chappelle may have been preternaturally gifted, but that wasn’t enough to avoid being broke in the early 2000s when he signed that contract with Comedy Central. Granted, Chappelle was almost certainly scratching out a living doing standup, but to truly make it big meant signing up with a network (or, in the case of music, a label), because they controlled distribution at scale.
That’s the big difference between stand-up and something like Chappelle’s Show: when it comes to the former your income is directly tied to your output; if you do a live show, you get paid, and if you don’t, you don’t. A TV show or record, on the other hand, only needs to be made once, at which point it can not only be shown across the country or across the world, but can also be shown again and again.
It’s the latter that is the key to getting rich as a creator, but in the analog world there were two big obstacles facing creators: first, the cost of creating a show or record was very high, and second, it was impossible to get said show or record distributed even if you managed to get it made. The networks and labels were the ones that had actual access to customers, whether that be via theaters, cable TV, record stores, or whatever physical channel existed.
Over the last two decades, though, technology has demolished both obstacles: anyone with access to a computer has access to the tools necessary to create compelling content, and, more importantly, the Internet has made distribution free. Of course the Internet did exist when Chappelle signed that contract, but there are two further differences: first, the advent of broadband, which makes far richer content accessible, and second, social networks, which provide far more reach than traditional channels, for free. Today it is far more viable for talent to not only create content and distribute it, but also promote it in a way that has tangible economic benefits.
Lesson Three: The House Wins
What is noteworthy about Chappelle’s argument is that he is quite ready to admit that everyone involved is acting legally:
From the perspective of 2020, and Chappelle’s overall point about how he feels his content was taken from him, this seems blatantly unfair. At the same time, from a network’s perspective, Chappelle’s success pays for all of the other shows that failed. It’s the same idea as the music industry: yes, record companies claim rights to your recordings forever, but for the vast majority of artists those rights are worthless. In fact, for that vast majority of artists, they represent a loss, because the money the network or label spent on making the show or record, promoting it, and distributing it, is gone forever.
There is an analogy to venture capital here, which I made five years ago in the context of Tidal:
This is why, by the way, I’m generally quite unsympathetic to artists belly-aching about how unfair their labels are. Is it unfair that all of the artists who don’t break through are not compelled to repay the labels the money that was invested in them? No one begrudges venture capitalists for profiting when a startup IPOs, because that return pays for all the other startups in the portfolio that failed.
It’s not a perfect analogy, in part because the output is very different: a founder will typically only ever have one company, so of course they retain a much more meaningful ownership stake from the beginning; an artist, on the other hand, will hopefully produce new art, which they will be in a much stronger position to monetize if their initial efforts are successful. Chappelle, for example, earns around $20 million per stand-up special on Netflix; Taylor Swift, another artist embroiled in an ongoing controversy around rights to her original work, fully owns the rights for her two most recent records.
The lesson to be learned, though, is that for many years venture capitalists, networks, and record labels could ensure that the expected value of their bets was firmly in their favor. There were more entrepreneurs that wanted to start companies, more comedians that wanted to make TV shows, and more musicians that wanted to make records than there was money to fund them, which meant the house always came out ahead: sure, money was lost on companies, comedians, and musicians that failed, but the upside earned by those that succeeded more than made up for it.
Over the last two decades venture has been flooded with new sources of capital, resulting in far more founder-friendly terms than before; comedy, meanwhile, has been a particularly notable beneficiary of the podcast boom, as more and more artists create shows that are inexpensive to produce yet extremely lucrative for the artist. Music has seen its own independent artists emerge, although the labels, thanks in part to the power of their back catalogs, have retained their power longer than many expected. Still, the inevitable outcome of Lesson Two is that Lesson Three is shakier than ever.
Lesson Four: Aggregators and the Individual
The one company that comes out looking great is Netflix:
Technically speaking, Netflix did exist when Chappelle negotiated that contract with Comedy Central, but the company was a DVD-by-mail service; the streaming iteration that Chappelle is referring to wasn’t viable back then. Indeed, the entire premise of the streaming company is that it takes advantage of the changes wrought by the Internet to achieve distribution that is not simply equivalent to a TV network, but actually superior, both in terms of reaching the entire world and also in digitizing time. On Netflix, everything is available anytime, anywhere, because of the Internet.
Netflix’s integration of distribution and production also means that they are incentivized to care more about the perspective of an individual artist than a network; that is the optimal point of modularity for the streaming company. At the same time, it is worth noting that Netflix is actually claiming even more rights for their original content than networks ever did, in exchange for larger up-front payments. This makes sense given Netflix’s model, which is even more deeply predicated on leveraging fixed cost investments in content than networks ever were, not simply to retain users but also to decrease the cost of acquiring new ones.
by Ben Thompson, Stratechery | Read more:
Image: YouTube
Tuesday, December 1, 2020
Ghost Kitchens: How Taxpayers are Picking Up the Bill for the Destruction of Local Restaurants
This past summer, Kroger, one of the nation’s largest grocery store chains, received a 15-year, 75 percent sales tax exemption for setting up two new data centers in Ohio. This is the definition of unnecessary. Kroger is not exactly poverty-stricken – it accrued profits of more than $2 billion last year. Moreover, subsidizing data centers is for suckers. Companies need to build that infrastructure anyway, and data centers don’t create all that many positions. Municipalities and state governments that subsidize them sometimes literally pay upwards of seven figures per job.
Then it got worse: Kroger is using its data to move into what’s known as the “ghost kitchen” business, something that is a terrible development for local independent restaurants. So, Ohio taxpayers are helping a massive supermarket chain put other businesses out of business, including their favorite corner eatery. That Ohio is doing this in a year when small restaurant proprietors are under all but existential threat adds insult to injury.
Ghost kitchens are as spooky as they sound. Big corporations like gig companies, supermarkets, and fast food chains use the data they collect through their various lines of business to create delivery-only food operations. But here’s the catch: They often hide and disguise the fact that they aren’t actual restaurants. They give them homey sounding names, like Seaside or Lorenzo’s, and build out web pages that make them appear to be places you could drop in on. In fact, they are randomly located in warehouses and other industrial spaces, and backed by big investors and corporations whose participation is often hidden by a web of shell companies.
The poster children for this issue are the big delivery app companies — UberEats, GrubHub, and Doordash — which use the data they collect doing deliveries for restaurants, and which they don’t subsequently share with those restaurants, to see what sort of items sell best and when. Then, much like Amazon weaponizes the data it collects from small businesses that sell on its platform to create its own products, the delivery apps use the data to create their own, delivery-only food outlets, with the aim of cutting real restaurants out of the business entirely. (Amazon, of course, won’t miss this opportunity either: It has invested in a delivery and ghost kitchen company called Deliveroo.)
This model of operating a platform and then also competing on it should just be illegal, even though it’s widespread. Whether it’s Amazon using info gained from its third-party sellers to steal products, Google using data gleaned from its advertising technology to outbid publishers, or delivery apps cutting real restaurants out of the restaurant business, the issue is the same: The corporation that runs the infrastructure has an anticompetitive advantage over all of the other participants. As Sen. Elizabeth Warren, D-MA, succinctly puts it, “You can be an umpire, or you can be a player—but you can’t be both.”
But even if authorities woke up and banned Uber from going into the ghost kitchen business, that wouldn’t stop Kroger. It’s got the data too, and it doesn’t need to trick rival businesses into turning it over. It’s the largest grocer in the U.S., and the second-largest in-person retailer after Walmart. It runs stores under its own corporate name, as well as Harris Teeter and 14 other brands. They are using info gained from their own shoppers. Kroger is partnering with an outfit called ClusterTruck that uses algorithms to remove the so-called “pain points” of ordering food, which I suppose means orders showing up cold, or something. (...)
Think of it this way: taxpayers — in this case in Ohio — are subsidizing the destruction of small, local, independent businesses in order to benefit the biggest corporations in the country. (What makes this even more offensive: Kroger is also headquartered in Ohio. It doesn’t need incentives to build new facilities in the state, since the cost of starting from scratch in some other locale would probably be higher, even in the absence of subsidies.)
by Pat Garofalo, Public Seminar | Read more:
Image: uncredited via
A Successful U.S. Missile Intercept Ends the Era of Nuclear Stability
This month, an intercontinental ballistic missile was fired in the general direction of the Hawaiian islands. During its descent a few minutes later, still outside the earth’s atmosphere, it was struck by another missile that destroyed it.
With that detonation, the world’s tenuous nuclear balance suddenly threatened to come out of kilter. The danger of atom bombs being used again was already increasing. Now it’s grown once more.
The ICBM flying over the Pacific was an American dummy designed to test a new kind of interceptor technology. As it flew, satellites spotted it and alerted an Air Force base in Colorado, which in turn communicated with a Navy destroyer positioned northeast of Hawaii. This ship, the USS John Finn, fired its own missile which, in the jargon, hit and killed the incoming one.
At first glimpse, this sort of technological wizardry would seem to be a cause for not only awe but also joy, for it promises to protect the U.S. from missile attacks by North Korea, for example. But in the weird logic of nuclear strategy, a breakthrough intended to make us safer could end up making us less safe.
That’s because the new interception technology cuts the link between offense and defense that underlies all calculations about nuclear scenarios. Since the Cold War, stability — and thus peace — has been preserved through the macabre reality of mutual assured destruction, or MAD. No nation will launch a first strike if it expects immediate retaliation in kind. A different way of describing MAD is mutual vulnerability.
If one player in this game-theory scenario suddenly gets a shield (these American systems are in fact called Aegis), this mutual vulnerability is gone. Adversaries, in this case mainly Russia but increasingly China too, must assume that their own deterrent is no longer effective because they may not be able to successfully strike back.
For this reason defensive escalation has become almost as controversial as the offensive kind. Russia has been railing against land-based American interceptor systems in places like eastern Europe and Alaska. But this month’s test was the first in which a ship did the intercepting. This twist means that before long the U.S. or another nation could protect itself from all sides.
This new uncertainty complicates a situation that was already becoming fiendishly intricate. The U.S. and Russia, which have about 90% of the world’s nukes, have ditched two arms-control treaties in as many decades. The only one remaining, called New START, is due to expire on Feb. 5, a mere 16 days after Joe Biden takes office as president. The Nuclear Non-Proliferation Treaty, which has for 50 years tried to keep nations without nukes from acquiring them, is also in deep trouble, and due to be renegotiated next year. Iran’s intentions remain unknown.
At the same time, both the U.S. and Russia are modernizing their arsenals, while China is adding to its own as fast as it can. Among the new weapons are nukes carried by hypersonic missiles, which are so fast that the leaders of the target nation only have minutes to decide what’s incoming and how to respond. They also include so-called tactical nukes, with “smaller” (in a very relative sense) payloads that make them more suitable for conventional wars, thus lowering the threshold for their use.
The risk thus keeps rising that a nuclear war starts by accident, miscalculation or false alarm, especially when factoring in scenarios that involve terrorism, rogue states or conflicts in outer or cyberspace. In a sort of global protest against this insanity, 84 countries without nukes have signed a Treaty on the Prohibition of Nuclear Weapons, which will take effect next year. But neither the nine nuclear nations nor their closest allies will ever sign it.
by Andreas Kluth, Bloomberg | Read more:
Image: U.S. Navy/Getty Images
Monday, November 30, 2020
The Logic of Pandemic Restrictions Is Falling Apart
Josh was irritated, but not because of me. If indoor dining couldn’t be made safe, he wondered, why were people being encouraged to do it? Why were temperature checks being required if they actually weren’t useful? Why make rules that don’t keep people safe?
Across America, this type of honest confusion abounds. While a misinformation-gorged segment of the population rejects the expert consensus on virus safety outright, so many other people, like Josh, are trying to do everything right, but run afoul of science without realizing it. Often, safety protocols, of all things, are what’s misleading them. In the country’s new devastating wave of infections, a perilous gap exists between the realities of transmission and the rules implemented to prevent it. “When health authorities present one rule after another without clear, science-based substantiation, their advice ends up seeming arbitrary and capricious,” the science journalist Roxanne Khamsi recently wrote in Wired. “That erodes public trust and makes it harder to implement rules that do make sense.” Experts know what has to be done to keep people safe, but confusing policies and tangled messages from some of the country’s most celebrated local leaders are setting people up to die.
Since my conversation with Josh, the internal logic of New York’s coronavirus protocols has deteriorated further. As more and more New Yorkers have become sick, officials have urged people to skip Thanksgiving, because of the danger of eating indoors with people you don’t live with. Rather than closing indoor dining, however, Cuomo has ordered all restaurants and bars simply to close by 10 p.m. This curfew also applies to gyms, which are not exactly hotbeds of late-night activity even in normal times. Meanwhile, case counts have risen enough to trigger the closure of New York City public schools, but businesses still have full discretion to require employees to come into work. (Cuomo’s office did not respond to a request for comment.)
It isn’t just New York; in states across the country, local officials have urged caution and fastidiousness. But those words can seem tenuously connected, at best, to the types of safety measures they’ve put in place. In Rhode Island, for example, residents are prohibited from gathering with even one person outside their household, even in the open air of a public park. But inside a restaurant? Well, 25 people is fine. Hire a caterer? You’re legally cleared to have up to 75 outdoors. The governor’s executive order merely notes: “The lower attendance at such events, the lower the risk.” (The Rhode Island governor’s office did not respond to a request for comment.)
Before you can dig into how cities and states are handling their coronavirus response, you have to deal with the elephant in the hospital room: Almost all of this would be simpler if the Trump administration and its allies had, at any point since January, behaved responsibly. Early federal financial-aid programs could have been renewed and expanded as the pandemic worsened. Centrally coordinated testing and contact-tracing strategies could have been implemented. Reliable, data-based federal guidelines for what kinds of local restrictions to implement and when could have been developed. The country could have had a national mask mandate. Donald Trump and his congressional allies could have governed instead of spending most of the year urging people to violate emergency orders and “liberate” their states from basic safety protocols.
But that’s not the country Americans live in. Responding to this national disaster has been left to governors, mayors, and city councils, basically since day one. “You’ve got a lot of problems if every state has to develop everything from scratch,” Tara Kirk Sell, a researcher at the Johns Hopkins Center for Health Security, told me. “First of all, it’s a lot of wasted time and money.” Instead of centralizing the development of infrastructure and methods to deal with the pandemic, states with significantly different financial resources and political climates have all built their own information environments and have total freedom to interpret their data as they please.
In the worst-case scenarios, that interpretation has privileged politics over the health of the population. Vociferously Trump-allied governors in hard-hit states such as Georgia, Florida, and South Dakota have declined to so much as implement a public mask mandate while local caseloads have soared. Sometimes, they have sparred with municipal leaders trying to do more. In hard-hit El Paso, Texas, for example, a local stay-at-home order was recently overturned by a state court, even as local officials have had to call in refrigerated trucks to serve as makeshift morgues.
Even in cities and states that have had some success controlling the pandemic, a discrepancy between rules and reality has become its own kind of problem. When places including New York, California, and Massachusetts first faced surging outbreaks, they implemented stringent safety restrictions—shelter-in-place orders, mask mandates, indoor-dining and bar closures. The strategy worked: Transmission decreased, and businesses reopened. But as people ventured out and cases began to rise again, many of those same local governments have warned residents of the need to hunker down and avoid holiday gatherings, yet haven’t reinstated the safety mandates that saved lives six months ago. The pandemic is surging virtually everywhere in America; last week alone, it infected more than 1 million people and killed more than 8,000. And yet indoor dining largely remains open, even as leaders warn of the very real perils of Thanksgiving dinner.
Across America, this type of honest confusion abounds. While a misinformation-gorged segment of the population rejects the expert consensus on virus safety outright, so many other people, like Josh, are trying to do everything right, but run afoul of science without realizing it. Often, safety protocols, of all things, are what’s misleading them. In the country’s new devastating wave of infections, a perilous gap exists between the realities of transmission and the rules implemented to prevent it. “When health authorities present one rule after another without clear, science-based substantiation, their advice ends up seeming arbitrary and capricious,” the science journalist Roxanne Khamsi recently wrote in Wired. “That erodes public trust and makes it harder to implement rules that do make sense.” Experts know what has to be done to keep people safe, but confusing policies and tangled messages from some of the country’s most celebrated local leaders are setting people up to die.
Since my conversation with Josh, the internal logic of New York’s coronavirus protocols has deteriorated further. As more and more New Yorkers have become sick, officials have urged people to skip Thanksgiving, because of the danger of eating indoors with people you don’t live with. Rather than closing indoor dining, however, Cuomo has ordered all restaurants and bars simply to close by 10 p.m. This curfew also applies to gyms, which are not exactly hotbeds of late-night activity even in normal times. Meanwhile, case counts have risen enough to trigger the closure of New York City public schools, but businesses still have full discretion to require employees to come into work. (Cuomo’s office did not respond to a request for comment.)
It isn’t just New York; in states across the country, local officials have urged caution and fastidiousness. But those words can seem tenuously connected, at best, to the types of safety measures they’ve put in place. In Rhode Island, for example, residents are prohibited from gathering with even one person outside their household, even in the open air of a public park. But inside a restaurant? Well, 25 people is fine. Hire a caterer? You’re legally cleared to have up to 75 outdoors. The governor’s executive order merely notes: “The lower attendance at such events, the lower the risk.” (The Rhode Island governor’s office did not respond to a request for comment.)
Before you can dig into how cities and states are handling their coronavirus response, you have to deal with the elephant in the hospital room: Almost all of this would be simpler if the Trump administration and its allies had, at any point since January, behaved responsibly. Early federal financial-aid programs could have been renewed and expanded as the pandemic worsened. Centrally coordinated testing and contact-tracing strategies could have been implemented. Reliable, data-based federal guidelines for what kinds of local restrictions to implement and when could have been developed. The country could have had a national mask mandate. Donald Trump and his congressional allies could have governed instead of spending most of the year urging people to violate emergency orders and “liberate” their states from basic safety protocols.
But that’s not the country Americans live in. Responding to this national disaster has been left to governors, mayors, and city councils, basically since day one. “You’ve got a lot of problems if every state has to develop everything from scratch,” Tara Kirk Sell, a researcher at the Johns Hopkins Center for Health Security, told me. “First of all, it’s a lot of wasted time and money.” Instead of centralizing the development of infrastructure and methods to deal with the pandemic, states with significantly different financial resources and political climates have all built their own information environments and have total freedom to interpret their data as they please.
In the worst-case scenarios, that interpretation has privileged politics over the health of the population. Vociferously Trump-allied governors in hard-hit states such as Georgia, Florida, and South Dakota have declined to so much as implement a public mask mandate while local caseloads have soared. Sometimes, they have sparred with municipal leaders trying to do more. In hard-hit El Paso, Texas, for example, a local stay-at-home order was recently overturned by a state court, even as local officials have had to call in refrigerated trucks to serve as makeshift morgues.
Even in cities and states that have had some success controlling the pandemic, a discrepancy between rules and reality has become its own kind of problem. When places including New York, California, and Massachusetts first faced surging outbreaks, they implemented stringent safety restrictions—shelter-in-place orders, mask mandates, indoor-dining and bar closures. The strategy worked: Transmission decreased, and businesses reopened. But as people ventured out and cases began to rise again, many of those same local governments have warned residents of the need to hunker down and avoid holiday gatherings, yet haven’t reinstated the safety mandates that saved lives six months ago. The pandemic is surging virtually everywhere in America; last week alone, it infected more than 1 million people and killed more than 8,000. And yet indoor dining largely remains open, even as leaders warn of the very real perils of Thanksgiving dinner.
by Amanda Mull, The Atlantic | Read more:
Image: Suzanne Kreiter/Boston Globe/Getty
Sunday, November 29, 2020
Alvin Lee
[ed. Special guest: George Harrison, slide guitar.]
'It Stretches the Limits of Performance': The Race to Make the World's Fastest Running Shoe
Natasha Cockram never really cared about shoes. When the Welsh runner entered her first marathon in 2017, she wore a pair of two-year-old Nike racing flats that cost her £15 at an outlet store. And she was a talented athlete: a former junior cross country and middle distance champion, she had won an athletics scholarship to the University of Tulsa in Oklahoma. She studied psychology and raced hard.
“What I’ve always loved about running is that it was so accessible,” Cockram, who is 27, says when we first speak in early September. “All you needed was a pair of trainers. It didn’t matter what they were – anyone could just do it.”
But “just doing it” didn’t seem enough for the brand that built an empire on that phrase. In the year that Cockram began her marathon journey, Nike revealed a radical new shoe during Breaking2, its unsuccessful attempt to smash the two-hour barrier in the men’s marathon with the Kenyan athlete Eliud Kipchoge. The neon Vaporfly shoes, which had thick foam soles embedded with carbon fibre plates, would shake up distance running with their outlandish looks and claim to save a runner 4% in energy expenditure – equivalent to several minutes in a marathon.
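How does a 4% energy saving become “several minutes” over a marathon? A rough back-of-envelope sketch, assuming (this rule of thumb is not from the article) that pace at a fixed effort improves by roughly two-thirds of the economy gain:

```python
# Hypothetical estimate: time saved over a marathon from a 4% improvement
# in running economy. The two-thirds conversion factor is an assumption,
# not a figure claimed by Nike or the article.

def time_saved(finish_seconds, economy_gain=0.04, speed_fraction=2 / 3):
    """Estimated seconds saved for a given baseline finish time."""
    speed_gain = economy_gain * speed_fraction   # ~2.7% faster pace
    new_time = finish_seconds / (1 + speed_gain)
    return finish_seconds - new_time

# For a 3:30 marathoner (12,600 seconds):
print(f"{time_saved(3.5 * 3600) / 60:.1f} minutes saved")
```

Under those assumptions, a 3:30 runner saves on the order of five minutes, which is consistent with the article's "several minutes in a marathon".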
The shoes soon inspired accusations of technological doping, not only challenging the purity of a great Olympic event but causing the biggest ethical schism in sports equipment since Speedo’s shark-inspired suits rocked swimming in 2008. The slippery material, versions of which other brands swiftly produced, enabled swimmers, including Michael Phelps, to glide more quickly through the water, triggering a wave of new world records, before being banned by Fina, the sport’s governing body, in 2009.
Runners said Nike’s Vaporflys offered a similar advantage; they felt as if they contained springs, and experts lined up to cry foul. Ross Tucker, a leading South African sport scientist, called them “the shoe that broke running”. Nevertheless, they rapidly sold out, contributing to a near tripling of Nike’s share price and triggering an industry arms race that is still playing out among its rivals. (...)
Only 10 years earlier, distance running had been locked in a very different arms race. As part of a “barefoot” running craze, brands had focused on featherlight shoes with barely-there soles. Nike had looked into minimalist shoes in its early research for what became Breaking2. But runners complained that they were too unforgiving; fatigue trumped any weight advantage.
Nike’s own scientists, led by Matthew Nurse, a biomechanics researcher at the brand’s Oregon HQ, had begun to look for a solution in much thicker foam. But it needed to be lighter. The breakthrough lay in Pebax, a plastic that has been used for years in dozens of applications, including catheter pipes. Produced in raw granules by Arkema, a French company, its chemical structure is a chain of alternating soft and rigid blocks, the ratio of which can be tweaked precisely. Together, the blocks offer toughness and flexibility at a very low weight, as well as a strong energy return, or bounce. “The nature of the chain is not a big secret,” says François Tanguy, a scientist and European manager for Arkema. “How we make it is a very well-kept secret.”
By turning Pebax granules into a foam, Nike got what it needed: a Boost-killer that it would market as ZoomX; an unusually soft, light sole that would return rather than absorb energy, while also reducing fatigue in the brutal last miles of a marathon. A carbon plate – a feature that Reebok and Adidas had experimented with in the 1990s – added structure and support, reducing energy-wasting flex in the toes.
by Simon Usborne, The Guardian | Read more:
Image: EPA