Wednesday, December 9, 2020

Universal’s Bob Dylan Catalog Buy Is About Survival

In the 24 hours since Bob Dylan sold his peerless songwriting catalog to Universal Music Group for a nine-figure sum, discussion has, understandably, centered on Dylan himself.

I keep hearing the same two questions: Will this affect the way fans digest his music? (No, but expect to hear his hits in more perfume commercials.) And what might have been Dylan’s motivation for selling his crown jewels now? (Music catalogs are fetching all-time-high prices, he’s nearly 80 years old, and Joe Biden may significantly hike taxes on big U.S. asset sales when he becomes president.)

What hasn’t drawn as much attention is the motivation of the buyer, Universal Music Publishing Group (UMPG), which my sources indicate paid closer to $400 million than $300 million to get Dylan’s 600 songs. Obviously, Dylan’s catalog is one of the most evergreen collections of music to ever be committed to notation. As Universal boss Sir Lucian Grainge said in an internal email yesterday: “In an instant, we have forever transformed the legacy of this company.” He hinted that UMPG won the deal against stiff industry competition because of its historical pedigree: “That this opportunity came to us was no accident,” he wrote. “When you put songwriters first, when you achieve unparalleled value for the art they create, when your track record is clear and consistent then the best of the best come to you.”

Grainge’s choice of words here is very deliberate. The “clear and consistent track record” comment is an obvious slight against newer companies — like Hipgnosis Songs Fund and Primary Wave — which have recently been nibbling into Universal’s market share. These firms have quickly acquired triple-A publishing catalogs from the likes of Bob Marley, Whitney Houston, Stevie Nicks, and Mark Ronson, using institutional investor money to pay more than traditional music companies like Universal are willing to.

Universal’s Dylan acquisition, then, is a landmark statement from the world’s biggest music rights company: We’re not going to sit back and just let the greatest music in history be auctioned off to Wall Street under our nose.

Which raises the question: Who’s this statement for? To a degree, it’s for the current investors of Universal’s publicly traded French parent Vivendi. But here’s the thing: Vivendi has confirmed Universal Music Group will be spun out for an IPO in 2022. In doing so, it’s deliberately seeded excitement amongst new would-be investors, who have seen music rights become one of the most reliable growth assets of the pandemic era.

The bear-case counterargument on Universal is that it has allowed cash-rich industry upstarts to reduce its commercial leverage; maybe, critics have said, Universal doesn’t have the fight or the funds to buy triple-A catalogs in the modern era. So — as its two-fingered salute to the financial naysayers — Universal went out and snatched up 600 Bob Dylan songs.

The Dylan buy is Universal putting a flag in the ground that reads, “We’re still Number One, and we’re staying that way.” The company wants to demonstrate its ability to survive, long-term, as king of the jungle — and, of course, to drive that future IPO price through the roof.

This is a trend amongst the major music companies, by the way — a public fightback against existential threats to their dominant position — that has really come to the fore during the pandemic. In October, Warner Music Group took the unusual step of raising $250 million in debt with the express intention of spending it on two acquisitions, at a combined cost of $338 million. My sources suggest that one of these deals, which took up the majority of the $338 million, saw Warner quietly acquire the publishing catalog of an all-time giant of music.

by Tim Ingham, Rolling Stone |  Read more:
Image: Gianni Schicchi/AP

Monday, December 7, 2020

We Had the Vaccine the Whole Time

You may be surprised to learn that of the trio of long-awaited coronavirus vaccines, the most promising, Moderna’s mRNA-1273, which reported a 94.5 percent efficacy rate on November 16, had been designed by January 13. This was just two days after the genetic sequence had been made public in an act of scientific and humanitarian generosity that resulted in China’s Yong-Zhen Zhang’s being temporarily forced out of his lab. In Massachusetts, the Moderna vaccine design took all of one weekend. It was completed before China had even acknowledged that the disease could be transmitted from human to human, more than a week before the first confirmed coronavirus case in the United States. By the time the first American death was announced a month later, the vaccine had already been manufactured and shipped to the National Institutes of Health for the beginning of its Phase I clinical trial. This is — as the country and the world are rightly celebrating — the fastest timeline of development in the history of vaccines. It also means that for the entire span of the pandemic in this country, which has already killed more than 250,000 Americans, we had the tools we needed to prevent it.

To be clear, I don’t want to suggest that Moderna should have been allowed to roll out its vaccine in February or even in May, when interim results from its Phase I trial demonstrated its basic safety. “That would be like saying we put a man on the moon and then asking the very same day, ‘What about going to Mars?’ ” says Nicholas Christakis, who directs Yale’s Human Nature Lab and whose new book, Apollo’s Arrow, sketches the way COVID-19 may shape our near-term future. Moderna’s speed was “astonishing,” Christakis says, though the design of other vaccines was nearly as fast: BioNTech with Pfizer, Johnson & Johnson, AstraZeneca.

Could things have moved faster from design to deployment? Given the grim prospects for winter, it is tempting to wonder. Perhaps, in the future, we will. But given existing vaccine infrastructure, probably not. Already, as Baylor’s Peter Hotez pointed out to me, “Operation Warp Speed” meant running clinical trials simultaneously rather than sequentially, manufacturing the vaccine at the same time, and authorizing the vaccine under “emergency use” in December based only on preliminary data that doesn’t track the long-term durability of protection or even measure the vaccine’s effect on transmission (only how much it protects against disease). And as Georgetown virologist Angela Rasmussen told me, the name itself may have needlessly risked the trust of Americans already concerned about the safety of this, or any, vaccine. Indeed, it would have been difficult in May to find a single credentialed epidemiologist, vaccine researcher, or public-health official recommending a rapid vaccine rollout — though, it’s worth noting, as early as July the MIT Technology Review reported that a group of 70 scientists in the orbit of Harvard and MIT, including “celebrity geneticist” George Church, were taking a totally DIY nasal-spray vaccine, never even intended to be tested, and developed by a personal genomics entrepreneur named Preston Estep (also the author of a self-help-slash-life-extension book called The Mindspan Diet). China began administering a vaccine to its military in June. Russia approved its version in August. And while most American scientists worried about the speed of those rollouts, and the risks they implied, our approach to the pandemic here raises questions, too, about the strange, complicated, often contradictory ways we approach matters of risk and uncertainty during a pandemic — and how, perhaps, we might think about doing things differently next time. That a vaccine was available for the entire brutal duration may be, to future generations trying to draw lessons from our death and suffering, the most tragic, and ironic, feature of this plague.

For all of modern medical history, Christakis writes in Apollo’s Arrow, vaccines and cures for infectious disease have typically arrived, if they arrive, only in the end stage of the disease, once most of the damage had already been done and the death rate had dramatically declined. For measles, for scarlet fever, for tuberculosis, for typhoid, the miracle drugs didn’t bring rampant disease to a sudden end — they shut the door for good on outbreaks that had largely died out already. This phenomenon is called the McKeown hypothesis — that medical interventions tend to play only a small role compared to public-health measures, socioeconomic advances, and the natural dynamics of the disease as it spreads through a population. The new coronavirus vaccines have arrived at what counts as warp speed, but not in time to prevent what CDC director Robert Redfield predicts will be “the most difficult time in the public-health history of this nation,” and do not necessarily represent a reversal of the McKeown hypothesis: The country may still reach herd immunity through natural disease spread, Christakis says, at roughly the same time as the rollout of vaccines is completed. Redfield believes there may be 200,000 more American deaths to come. This would mean what Christakis calls a “once-in-a-century calamity” had unfolded start-to-finish between the time the solution had been found and the time we felt comfortable administering it. Half a million American lives would have been lost in the interim. Around the world, considerably more. (...)

The treatment dilemmas facing physicians and patients in the early stages of a novel pandemic are, of course, not the same as the dilemma of rushing a new vaccine to a still-healthy population — we defer to the judgment of desperate patients, with physicians inclined to try to help them, but not to the desires of vaccine candidates, no matter how desperate. An unsafe vaccine, like the one for polio that killed ten and paralyzed 200 in 1955, could cause medical disaster and public-health backlash — though, as Balloux points out, since none of the new coronavirus vaccines use real viral material, that kind of accident, which affected one in a thousand recipients, would be impossible. (These days, one adverse impact in a million is the rule-of-thumb threshold of acceptability.) An ineffective vaccine could also give false security to those receiving it, thereby helping spread the disease by providing population-scale license to irresponsible behavior (indoor parties, say, or masklessness). But on other matters of population-level guidance, our messaging about risk has been erratic all year, too. In February and March, we were warned against the use of masks, in part on the grounds that a false sense of security would lead to irresponsible behavior — on balance, perhaps the most consequential public-health mistake in the whole horrid pandemic. In April, with schools already shut, we closed playgrounds. In May, beaches — unable or unwilling to live with even the very-close-to-zero risk of socializing outside (often shaming those who gathered there anyway). But in September, we opened bars and restaurants and gyms, inviting pandemic spread even as we knew the seasonality of the disease would make everything much riskier in the fall. The whole time, we also knew that the Moderna vaccine was essentially safe. We were just waiting to know for sure that it worked, too.

None of the scientists I spoke to for this story were at all surprised by either outcome — all said they expected the vaccines were safe and effective all along. Which has made a number of them wonder whether, in the future, at least, we might find a way to do things differently — without even thinking in terms of trade-offs. Rethinking our approach to vaccine development, they told me, could mean moving faster without moving any more recklessly. A layperson might look at the 2020 timelines and question whether, in the case of an onrushing pandemic, a lengthy Phase III trial — which tests for efficacy — is necessary. But the scientists I spoke to about the way this pandemic may reshape future vaccine development were more focused on how to accelerate or skip Phase I, which tests for safety. More precisely, they thought it would be possible to do all the research, development, preclinical testing, and Phase I trials for new viral pandemics before those new viruses had even emerged — to have those vaccines sitting on the shelf and ready to go when they did. They also thought it was possible to do this for nearly the entire universe of potential future viral pandemics — at least 90 percent of them, one of them told me, and likely more.

As Hotez explained to me, the major reason this vaccine timeline has shrunk is that much of the research and preclinical animal testing was done in the aftermath of the 2003 SARS pandemic (that is, for instance, how we knew to target the spike protein). This would be the model. Scientists have a very clear sense of which virus families have pandemic potential, and given the resemblance of those viruses, can develop not only vaccines for all of them but also ones that could easily be tweaked to respond to new variants within those families.

“We do this every year for influenza,” Rasmussen says. “We don’t know which influenza viruses are going to be circulating, so we make our best guess. And then we formulate that into a vaccine using essentially the same technology platform that all the other influenza vaccines are based on.” The whole process takes a few months, and utilizes a “platform” that we already know is basically safe. With enough funding, you could do the same for viral pandemics, and indeed conduct Phase I trials for the entire set of possible future outbreaks before any of them made themselves known to the public. In the case of a pandemic produced by a new strain in these families, you might want to do some limited additional safety testing, but because the most consequential adverse effects take place in the days right after the vaccine is given, that additional diligence could be almost immediate.

by David Wallace-Wells, Intelligencer | Read more:
Image: AP Photo/AP2009

The Problem With Hashtag Activism

In the thirteen years since Twitter’s inception, users from every political stripe have launched countless campaigns, many of which have subsequently been covered or even adopted by traditional media and become household names. In #HashtagActivism: Networks of Race and Gender Justice, authors Sarah J. Jackson, Moya Bailey, and Brooke Foucault Welles propose that Twitter has become an important tool for activists to “advocate, mobilize and communicate.” They say the platform itself has become a powerful counterpublic for marginalized groups, who use Twitter’s hashtag function to facilitate political coalitions and networks. More specifically, the book investigates one particular corner of Twitter activism, defined by a distinct political culture that is liberal, social-justice oriented, consciousness focused, identitarian, intersectionalist, minoritarian, and moralist.

Even reducing the scope of their study to this particular online culture, it would be impossible for the authors to cover their subject in thorough detail. To their credit, the book is focused and provides an honest and dutiful record of the major campaigns of social justice hashtag activism, outlining a history, a trajectory, and a digital landscape. Curiously, though, the authors’ accounts of these campaigns only serve to thoroughly — almost relentlessly — contradict the book’s techno-optimist thesis, page after page, from the very beginning. (...)

In the larger world, it is difficult to spot a victory or any lasting legacy of power among even hugely popular campaigns like #OccupyWallStreet, #ArabSpring, #BlackLivesMatter, #YesAllWomen, and #MeToo. Occupy Wall Street fizzled, the Arab Spring flopped, George Zimmerman walks free, and police murders of black people have not decreased. While it’s true that a few of the high-profile voices of #MeToo managed to punish and even lock up a few of their higher-profile predators (and publicly censure a few more harmless perverts), no meaningful legislation has been passed to protect or empower ordinary women in the workplace. The campaigns featured in #HashtagActivism have given us little more than the prosecution of Harvey Weinstein, the cancellation of a TV cooking show, and the forestalling of a few tawdry book deals.

Who Runs Twitter Town?

In the introduction to #HashtagActivism, the authors are quick to reference academic and “techno-sociologist” Zeynep Tufekci, who observes that Twitter activism “looks very different from traditional, institutional-based politics — a kind of democratic participation that is inclined toward a horizontal, identity-based movement-building that arrives out of grievances and claims.” Like the authors, I would agree with Tufekci’s characterization.

It comes as no surprise, however, that they don’t engage further with Tufekci’s work, or even mention the title of her 2017 book, Twitter and Tear Gas: The Power and Fragility of Networked Protest. As one of the first academics writing on technology and movement-building, Tufekci has been openly and consistently skeptical of social media’s “transformative” potential since at least 2014. Unlike Jackson, Bailey, and Foucault Welles, she engages with the history of progressive online activism as a series of failures that she subjects to critical analysis and comparative-historical investigation.

Tufekci does not regard social media as a poison tree capable of bearing only poison fruit, per se, but she is not naive about the digital means of production. In talks and in print, she has illustrated that governments and capital have far more power than the masses over social media, which they often use to spy, censor, and misinform with impunity. (...)

The free and easy voluntarism of posting and content creation obscures an essential fact: the internet is deceptively vulnerable to corporate manipulation — in many ways, far more so than print, radio, or television. Consider TV: the owners create content, the audience consumes it and judges its value, and the government regulates programming, sometimes ever so slightly, even if only under massive public pressure. With the internet, the audience is invited to create their own content (generally for free), and the owners are largely rentiers or digital landlords that remain totally unaccountable for anything that happens on their preserve. Meanwhile, in the United States, the government and its attendant regulatory bodies are either in bed with big tech or can’t even remember their email passwords.

It is difficult to determine whether the Federal Communications Commission (FCC) is uninterested in or merely incapable of regulating the internet, and while advocates of free speech or even basic democracy should regard any attempt to do so with a healthy skepticism, it is significant that you can’t sue a tech company for abuse, harassment, stalking, libel, slander, or defamation that occurs on their platform. What little regulation is adopted is largely designed by the tech companies themselves, and it’s easily sidestepped when convenient. In the United States, the internet operates unlike any other form of media in that it is not subject to the rules that are, at least theoretically, imposed by representatives of the people. With all this in mind, it is difficult to imagine online activity as a revolutionary home base. The omnipotent rulers of these companies yield no transparency, accountability, or democratic control to users, the majority of whom do not display the dedicated platform loyalty of the activists in #HashtagActivism.

Twitter Is on Its Way Out

Even if the public gained some sort of democratic control over Twitter, we would be extremely late to the party. Internet users tend to cycle through social media platforms as they emerge, particularly as new platforms target youth markets with the promise of a parent-free online experience. At this point, Twitter is distinctly millennial, with younger users initially defecting to Instagram, then Snapchat, and now TikTok. Social media platforms also produce their own self-selecting demographics, which are never a particularly representative cross-section of anything. Since online activism is entirely voluntarist, and therefore siloed (“networks” work for the Right as well), mediums for communication will always be a moving target. Facebook, Twitter, Instagram, Snapchat, TikTok — as our options expand, the crowds disperse.

Twitter users are not only more insular and itinerant than the authors seem to imagine, there are actually very few of them, relatively speaking. A 2019 Pew Research Center study found that only about 22 percent of American adults use Twitter, and they tend to be younger and more progressive than the average American. Moreover, about 80 percent of tweets are produced by 20 percent of accounts, meaning the majority of activity on Twitter comes from a very small (and ever shrinking) number of highly active users. In February 2019, Twitter publicly announced their active user numbers for the first time; previously, the company only publicized their user “growth,” a percentage that was said to be padded with bots and dead accounts. After their grand reveal indicated a much smaller and still shrinking user base, they decided to no longer inform the public about their platform’s numbers.

Even without an accurate inventory of users, the material account of hashtag activism’s record to date exposes it as a midwife to impotent movements that grow and die far too quickly on undemocratic platforms that are corporate-controlled and fleetingly faddish. But what if we could fix all of that? What if we had a social media platform of our very own, one that corrected the aforementioned flaws? Could there be a platform of the people, a publicly controlled fixture that would attract a critical mass of users, with an architecture that patiently fosters the specialization of talents and skills that would herd all the cats of social justice, laying the groundwork for a deft, unified, and democratic organization? Assuming for a moment that such a thing is possible, would it even be desirable?

Can Social Media Be Social?

Once again in clear opposition to the conclusions of #HashtagActivism, Tufekci argues that the rapidity of the growth and spread of online-borne movements may be a potentially intractable obstacle, rather than an advantage, as the speed of horizontalism only seems to foster a specific kind of social formation: the undifferentiated mass. She observes the “tactical freeze” these sprawling movements are inevitably saddled with, as they expand into erratic, unwieldy, unstable blobs, incapable of specialization or coordination. Eventually, they become movements that are unable to move, so they stall out, then dissolve. Tufekci contrasts this life cycle with the slow, heavily coordinated, and decidedly very unspontaneous activism of the civil rights movement, concluding that the March on Washington succeeded as a result of these traditional organizing strategies, while Occupy Wall Street (along with so many other gods that failed) crumbled for lack of them.

Herein lies the fundamental misunderstanding of movement-building in #HashtagActivism. It’s true that political sentiments irradiated by the internet do experience remarkably rapid growth — but so does a tumor. The impressive speed and size of online movements are too often mistaken for viability and maturity, when, in fact, the accelerated development of online activism belies a deadly progeria: it burns hotly, brightly, and briefly, often with nothing to show in the end but a glut of forgettable, disposable content and the emotional exhaustion of participants (and perhaps a monograph or two).

by Amber A’Lee Frost, Jacobin | Read more:
Image: David McNew/Getty Images

Sunday, December 6, 2020

Student Loan Horror Stories

Whether it’s CNBC telling us what issues mattered to the young in the presidential election, or Yahoo! Finance telling us the big winners in the 2020 election were “young people and student voters,” or Forbes telling us “young people with student loan debt have a harder time reaching financial milestones,” the student loan controversy is almost universally presented as a “youth” issue.

This is the first of many deceptions baked into coverage of one of the more misunderstood and misreported issues of our time. Student loans matter to older people, too. In fact, that’s the problem. They matter far too much, to too many older people.

“People that are 45 years and older, that's where the student loan problem is a real issue,” says “Chris,” who took out his first loan in 1981. “Because those are the people that normally would have the highest balances.”

Now 59, Chris asks to tell his story under a pseudonym, to protect the service industry career he’s built in part with the hope of someday escaping his student debt.

“In the realm I'm in now, I don't really advertise the fact that I owe $236,000,” he sighs.

It’s often argued that forgiving student debt would unfairly punish other groups, particularly those who “did the right thing” and paid off their loans. In truth, political changes have already punished plenty of student loan holders. Chris is a prime example.

He grew up in the Midwest, and began studying philosophy and political science at Southwest Missouri State (now called Missouri State) in 1980. He began paying for his undergrad studies upfront, a decision that would have fateful consequences. He entered school just as Americans were electing Ronald Reagan, who wanted to dramatically re-order federal spending priorities. Among his first acts: raising the interest rates for some federally guaranteed student loans from 7% to 9%.

“What’s really ironic,” Chris says, “is that if I hadn't paid cash the first year and a half that I was in college, my loans would have gotten locked in at a much lower rate.”
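
For a rough sense of what that two-point jump means, here is a minimal sketch using the standard fixed-rate amortization formula — the $10,000 balance and ten-year term are hypothetical, not Chris’s actual loan terms:

```python
# Illustrative sketch only: a hypothetical $10,000 balance on a ten-year
# term, ignoring capitalization and the other quirks of federal loans.
def monthly_payment(principal: float, annual_rate: float, years: int) -> float:
    """Standard fixed-rate amortization formula."""
    r = annual_rate / 12              # monthly interest rate
    n = years * 12                    # total number of payments
    return principal * r / (1 - (1 + r) ** -n)

for rate in (0.07, 0.09):
    pay = monthly_payment(10_000, rate, 10)
    interest = pay * 120 - 10_000
    print(f"{rate:.0%}: ${pay:.2f}/month, ${interest:,.0f} total interest")

# 7%: $116.11/month, $3,933 total interest
# 9%: $126.68/month, $5,201 total interest
```

On these invented figures, the Reagan-era rate adds roughly a third more interest over the life of the loan — before any penalties or capitalization enter the picture.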

Paying the Reagan rate instead of the pre-Reagan rate was Chris’s first political misfortune. The second kicked in years later, in the mid-eighties, by which time he’d transferred to the University of Missouri-Columbia, graduated with a B.A., entered and completed a grad program there, and moved on to Joe Biden’s alma mater, Syracuse University’s law school. He left graduate school owing $14,000, and left law school with a total balance of $79,000.

He thought he’d be graduating with a law degree, and expected to be able to make his payments. Part of his calculation involved the fact that student loan interest was once tax-deductible, much like mortgage interest. But the Tax Reform Act of 1986 began a see-sawing journey for the student loan deduction, essentially eliminating it as a personal deduction for a time.

“I looked at education as a capital expenditure,” Chris says. “Part of my strategy was, is that the interest would always be tax-deductible. So that would at least give me a little bit of a [cushion] in making my payments, because, I would have that tax deduction.”

After they changed the law, “It was like, ‘Wow, this is going to be difficult, this is going to be interesting.’” (...)

In 2002, Chris got a middle-level job with one of the world’s larger service-industry companies. His first position paid him $28,000 a year, but he didn’t see much of that money. In 2004, his wages began to be garnished. A single federal lender can garnish up to 15% of “disposable” pay, i.e. what’s left over after mandatory withholdings. If there is more than one lender, they may garnish a maximum of 25% of wages.

Chris’s pay was garnished at 15% from 2004 to 2011, and at 25% from 2011 on. He paid, but didn’t gain ground, thanks to another painful quirk of the system, involving the order of obligation.

“They apply your penalties first, then your interest, then your principal,” he says. “So really they're guaranteeing that you're never going to pay down your loans.”

Into his second decade of garnishment, Chris was paying pure penalties, fees, and interest, not touching a dollar of principal. Although the government had since re-introduced some student loan interest deductions, these were capped at $2,500 per year. “At the height of my garnishment, I was paying $900 every two weeks,” he says.
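
Here is a minimal sketch of that penalties-first waterfall; the starting balances are hypothetical, and real servicer accounting (with interest accruing between payments) is messier:

```python
# Hypothetical balances, for illustration only; actual servicer accounting
# is more complicated, and interest keeps accruing between payments.
balance = {"penalties": 5_000.0, "interest": 40_000.0, "principal": 79_000.0}

def apply_payment(payment: float, balance: dict) -> None:
    """Apply a payment to penalties first, then interest, then principal."""
    for bucket in ("penalties", "interest", "principal"):
        paid = min(payment, balance[bucket])
        balance[bucket] -= paid
        payment -= paid
        if payment <= 0:
            return

apply_payment(900.0, balance)  # one biweekly $900 garnishment at Chris's peak
print(balance)
# {'penalties': 4100.0, 'interest': 40000.0, 'principal': 79000.0}
# The principal is untouched until penalties and interest are fully cleared.
```

Under this ordering, as long as new interest and fees accrue faster than payments can burn through them, the principal never shrinks — which is the trap Chris describes.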

by Matt Taibbi, TK News |  Read more:
Image: uncredited

Life and Death in a Covid-19 Epicenter

Life and Death in a Covid-19 Epicenter (NY Times)
Image: uncredited

Friday, December 4, 2020


Kenneth Josephson, Chicago (under the el), 1961
via:

Living in the Present is Overrated

I’d been looking forward to the meal for weeks. I already knew what I was going to eat: the rosemary crostini starter, then the lamb with courgette fries. Or maybe the cod. I planned to arrive early and sit in the window at the cool marble counter and watch London go by. In the warm bustle of the restaurant, the condensation would mist the pane. As a treat, I would order myself a glass of white wine while I waited for my friend.

It won’t surprise you to hear that the meal never happened. Coronavirus cases started rising exponentially and eating out felt less like indulgence and more like lunacy. Then it became illegal to eat together at all. Soon it became illegal even to eat at a restaurant by yourself. Then everything shut.

The cost of these lost lunches has been totted up many times: the trains not taken, the taxis not flagged down, the desserts not eaten, the waiters not tipped. Then there is the emotional toll, too. Spirits are flagging, the lonely are getting lonelier, the world is wilting. Covid has already disrupted so much of how we live. It has altered something else, as well – time itself.

Not so long ago, we had merely months and years. Things happened in November or in December, last year or this. Some events are so big that they divide the world into before and after, into the present and an increasingly alien past. Wars do this, and the pandemic has, too. Coronavirus has cut a trench through time.

The very recent past is suddenly another country. Now, amateur archaeologists of our own existence, we sort through our possessions and stumble on small relics from “then”, that strange place we used to live: a bus pass, a lipstick, a smart watch, a pair of shoes with the heels worn down, work clothes that, after just six months in stretchy active-wear, feel as stiff and preposterous as whalebone.

News of vaccines fills us with hope. But the timing, the take-up, the roll-out to ordinary souls remain unresolved. The actual future still lies drearily in front of us, with the prospect of further lockdowns, overcrowded hospitals and ever greater financial losses. Days stretch on, each much the same as the last. One week blends into the next.

Amid these cancellations something else has also been lost. It won’t appear on any spreadsheet because it is not quantifiable. But it matters. So much of life, big and small, is about fleeting moments filled with hope. The prospect of an exciting Friday evening or Saturday afternoon used to make a dismal Tuesday morning bearable. So, too, did browsing online for your future self: the top that you’d always feel good in, the bag that would take both your laptop and book.

Hope hung everywhere in the old world, hovering in our peripheral vision – on the billboard that made us ponder our next holiday or reminded us to dig out dark glasses and sun cream; among the spices in the supermarket that conjured a conversation over curry with friends, chatting about things that didn’t feel like life and death.

Many moments of happiness are about anticipation, the joy of the imagined future – and distracting ourselves from the tedious, exhausting or difficult present. Yet even our small consumer choices or our musings about what to do this weekend now bring us back to the big, overpowering reality of the pandemic. We cannot escape it. Our daydreams have come crashing back to earth: 2020 is the year that the future was cancelled.

In recent decades the present has become rather more fashionable than the future. Living in the moment, being present in our present, is the desired mind-state of our age. There’s nothing new about the idea, of course – it forms the basis of Buddhism and there are elements of it in many religions. Long ago Horace commanded us to “carpe diem” and Seneca exhorted that the present is all we have: “All the rest of existence is not living but merely time.”

Over the past ten years the once-niche idea of “mindfulness” has gone mainstream. It has become an aspiration, an advertising opportunity and an overused adjective. You can practise not only mindful meditation but mindful breathing, mindful eating, mindful drinking, mindful walking, mindful parenting, even mindful birth. (As if childbirth were something that you might miss if you weren’t paying close enough attention.)

It isn’t always clear quite what mindfulness is. Despite its promise of mental clarity, its own origins are decidedly foggy. It seems to be a translation of a Buddhist term, sati, which itself is tricky to define – its meaning lies somewhere between memory and consciousness. The English version is neither a very good translation nor a particularly helpful word. The longer you think about it, the stranger the word “mindful” seems: that puzzling “-ful” feels odd when talking about emptying your thoughts. (And is its opposite “mindlessness”?)

If the definition of mindfulness is elusive, the practice is even more so. Its aim is to empty your mind by using your mind; to liberate it by restraining it. It is a puzzling and paradoxical thing, the mental equivalent of climbing up a ladder and removing it at the same time.

by Catherine Nixey, The Economist |  Read more:
Image: Anna+Elena Balbusso
[ed. See also: anhedonia (inability to feel pleasure/anticipation); a classic feature of depression.]

Thursday, December 3, 2020

California Plans Sweeping Stay-at-Home Orders


California plans sweeping stay-at-home order as Covid cases surge (The Guardian)
Image: Frederic J Brown/AFP/Getty Images


Ivan Kenneth Eyre (Canadian, b. 1935), Mountain Lines, 1987
via:

Weekend Warriors: Using the Homeless to Guard Empty Houses

Wandering around Northwest Pasadena, I pressed my face against the window of a dingy pink stucco house at 265 Robinson Road. It was April, 2019, and in two blocks I had passed thirteen bungalows, duplexes, and multifamily homes that had gone through foreclosure in the past fifteen years. Twelve of them were still unoccupied. No. 265 had been in foreclosure for a year and a half, and the two small houses on the property had long sat empty. But now, inside the rear house, there was a gallon jug of water and a bag of peanuts on a Formica kitchen counter. The walls were a mangy taupe, but African-print sheets hung over the windows. As I walked away, I heard a genteel Southern accent from behind me: “Can I help you?” A Black man with perfect posture, wearing loafers and a black T-shirt tucked into belted trousers, introduced himself as Augustus Evans.

I wasn’t the first person to wonder what Evans was doing there. A few weeks earlier, two sheriff’s deputies had knocked on the door around 11 p.m. and handcuffed him. In his car’s glove compartment, they found a letter of employment and the cell-phone number of a woman named Diane Montano, who runs Weekend Warriors, a company that provides security for vacant houses. Like many of Montano’s employees, Evans was homeless when he was hired. Now he lives in properties that are being flipped, guarding them through the renovation, staging, open-house, and inspection periods. In the past seven years, he has protected more than twenty-two homes, in thirteen neighborhoods around Los Angeles, almost all historically Black and Latino communities. A McMansion in Fontana; a four-unit apartment complex in Compton; a “baby mansion on the peak of the mountain” in East L.A., which had been left to a son who, according to the neighbors, borrowed so much against the equity of the house that he lost it to foreclosure. Before leaving, he poured liquid cement down the drains. Evans guarded the property as the plumbing system was replaced.

Empty houses are a strange sight in an area that has one of the most severe housing shortages in the United States. L.A. has the highest median home prices, relative to income, and among the lowest homeownership rates of any major city, according to the U.C.L.A. Center for Neighborhood Knowledge. Renting isn’t any easier. The area has one of the lowest vacancy rates in the country, and the average rent is twenty-two hundred dollars a month. On any night, some sixty-six thousand people there sleep in cars, in shelters, or on the street, an increase of thirteen per cent since last year.

The housing shortage was caused, in part, by restrictive zoning, rampant nimbyism, and the use of California’s environmental laws to thwart urban development. In 1960, Los Angeles was zoned to house some ten million people. By 1990, decades of downzoning had reduced that number to 3.9 million, roughly the city’s current population. Then, in 2008, the subprime-mortgage crisis struck, and in the years that followed thousands of foreclosed homes were sold at auction. Because they had to be purchased in cash, many of them were bought by wealthy investors, private-equity-backed real-estate funds, and countless other real-estate companies, leaving less inventory for individual buyers. In the end, the 2008 crash made housing in California even more expensive.

No. 265, along with thousands of other homes in L.A., was acquired by Wedgewood, a real-estate company, founded in 1983, that specializes in flipping homes, managing everything from lockouts and financing to renovation and staging. In gentrifying neighborhoods, empty houses are sitting ducks, so companies like Wedgewood hire Weekend Warriors and other house-sitting services for cheap security. (...)

One morning, a customer told Evans that he supplemented his Social Security income by house-sitting for Weekend Warriors. There were two types of gigs, he explained: 7 p.m. to 7 a.m., which paid five hundred dollars a month, and 24/7, which paid eight hundred dollars. All you needed was an I.D. Evans called Diane Montano at around 10 a.m., and at 2 p.m. a van picked him up and took him to a house in Riverside.

The rules were simple: don’t leave, don’t host guests, and don’t talk to anyone—not contractors, property managers, real-estate agents, or prospective buyers. If you were working a 24/7, only short trips to the market or the laundromat were allowed. The premises had to be kept clean at all times, or pay would be docked. The driver supplied Evans with a mini-fridge, a small microwave, an inflatable mattress, and plastic floor coverings to protect the carpet.

The driver came by to check on Evans occasionally, always unannounced, photographing each room and sending the pictures to Montano, so that she could monitor Evans’s cleanliness and track the progress of the renovations. By the time Evans was living at No. 265, he had learned the rhythms of the gig. He knew that the driver wouldn’t come by at night or on Sundays. When he could, he’d steal out to Moreno Valley, an hour and twenty minutes away, to visit his sons. He kept loose change in a coffee cup in his car, and he’d give his youngest son all the coins he’d collected since his last visit. “They know Daddy has to work away from the house,” he told me. “They’re big boys now.”

by Francesca Mari, New Yorker | Read more:
Image: Ricardo Nagaoka for The New Yorker

K. Kusunose, Milestone
via:

Wednesday, December 2, 2020

New Effort to Pass Emergency Covid-19 Relief Bill

A bipartisan group of senators and members of the House unveiled a new $908 billion plan for emergency Covid-19 relief funding on Tuesday to extend unemployment benefits and small business loans.

The proposal comes after months of stalemate on stimulus talks, and during a critical time in the Covid-19 crisis. About 14 million Americans receiving unemployment benefits will see those programs expire at the end of the month unless Congress takes action, and cities and states around the country are also facing massive budget shortfalls.

The $908 billion package repurposes $560 billion in unused funds from the CARES Act, the $2.2 trillion stimulus package that passed in March, meaning it adds only $348 billion in new spending. It’s much smaller than the $2.2 trillion revised HEROES Act that House Democrats passed in October, but larger than the $500 billion package Senate Republicans were proposing that same month.

Two large sticking points in negotiations have been whether there should be another round of stimulus checks (a priority for House Speaker Nancy Pelosi and President Trump) and liability protections for businesses worried about being sued for exposing customers and workers to Covid-19 (a priority of Senate Majority Leader Mitch McConnell’s). Republicans came out on top on both of these issues (at least in this initial proposal) — stimulus checks are not included in this new bipartisan proposal, and the framework provided by Sen. Joe Manchin’s office notes that the proposal will “provide short term Federal protection from Coronavirus related lawsuits with the purpose of giving states time to develop their own response.”

Both Pelosi and Trump have signaled support for another round of stimulus payments going out to working Americans.

Here’s what actually made it into the proposal:
  • $160 billion for state, local, and tribal governments. For context, US cities alone are facing a $360 billion shortfall and are being forced to pursue austerity measures to balance their budgets. As Emily Stewart has reported for Vox, state budget shortfalls could exceed $500 billion. In other words, this money could be a drop in the bucket. State and local government woes have been lower on McConnell’s list of priorities — at one point he suggested that states declare bankruptcy.
  • $180 billion in unemployment insurance (UI). The CARES Act gave unemployed Americans a weekly $600 lifeline on top of state unemployment insurance, a move widely regarded as staving off catastrophe for the millions of Americans who lost their jobs this year. As Dylan Matthews has reported for Vox, research has shown that “the average UI recipient is getting 134 percent of their previous salary,” and it may have temporarily lowered the poverty rate. This program expired in August, so any relief will be welcome for those still unemployed. Congress originally estimated that the UI program would cost $260 billion, which the Tax Policy Center viewed as an underestimate, so it’s likely this extension wouldn’t cover the full cost of unemployed workers’ needs. The Washington Post reported that this amount would cover an additional $300 a week for four months.
  • $288 billion in support for small businesses. This support will partially come through the Paycheck Protection Program (PPP) and Economic Injury Disaster Loans. As Forbes has reported, an August 2020 survey from the US census showed that almost 79 percent of small businesses reported being negatively affected by Covid-19.
  • $25 billion in rental assistance. Notably, the new bipartisan proposal only provides for $25 billion in rental assistance even as economists are predicting that tenants could owe nearly $70 billion in back rent by year’s end. Vox has reported that policy experts and advocates have been pushing for $100 billion to be included in stimulus negotiations in order to prevent an eviction crisis that could impact as many as 40 million Americans.
The framework also includes $45 billion for transportation including airlines and Amtrak, $16 billion for vaccine development and more Covid-19 testing and tracing, $82 billion in federal education funding, $10 billion for the struggling US Postal Service, and $10 billion for child care, among other things. (...)

What's up next for this proposal (...)

No new coronavirus aid package is going to get through the Senate without bipartisan support, so the new plan is a signal that Republicans and Democrats are indeed talking. Lawmakers supporting the plan emphasized on Tuesday that while each party is not going to get exactly what they want, their framework contains key points of agreement.

McConnell is circulating his own proposal among Senate Republicans, after he and House Minority Leader Kevin McCarthy met with White House officials on Tuesday to suss out what President Donald Trump wants to come out of a coronavirus relief deal. McConnell’s version of an emergency package is more limited, providing just a one-month extension of unemployment benefits, rather than the three-month extension in the bipartisan proposal.

by Ella Nilsen and Jerusalem Demsas, Vox | Read more:
Image: Tasos Katopodis/Getty Images
[ed. See also: The government’s failure to provide economic relief is killing people (Vox).]

What Lies Beyond Boredom: Post-Boredom

I have already started practising my small talk for Christmas. “Good, thanks. You?” I keep saying into a mirror, fully aware that in the past eight months I have more or less completely lost the ability to make conversation with humans. “What did I do with the time? Wow, the year has gone so quickly, hasn’t it?”

At this point I pause meaningfully because I know I have about two minutes of material to stretch over a five-day festive period with fewer people than usual, so I really need to make it last. “Let’s see, umm … got really into jigsaws for a bit. Rearranged the spare room into an office. Learned to make this one really good curry recipe from the BBC website. Uh … got 11 solo wins and about 24 duo wins on Fortnite.” Is that good?, they’ll ask, and I’ll have to admit that no, not particularly. “It’s a game for 12-year-olds that I play compulsively,” I’ll explain. “Every day I log in and let adolescents embarrass me in an online world that allows them to dance joyously on the remains of my corpse.” Oh, they’ll say. I think there’s something – I think there’s something happening in the other room. I really ought to…

I think it’s important to address the fact that I am bored. I am, to my bones, bored of this. I know that in the current climate, being bored is a high luxury, but it doesn’t make it any more thrilling. In fact, I am so deep into boredom that I have burrowed beneath the previously accepted boundaries of the concept, and have now emerged, apathetically, into post-boredom.

I never thought this would happen: if you had offered me, at the start of the year, the chance to sit inside for eight months chain-watching Netflix and not really going out or doing anything, and told me that being glued to my sofa would be reframed from a “sign of a life falling apart” to something I was doing “for the moral good of the country and the world as a whole”, I would have bitten your hand off for it.

I excel in inactivity. A squalid little part of me always imagined that I’d thrive in the ambient boredom of prison – not the gangs part of prison, or the crapping in a room with someone watching you part, or the shanking someone for some cigarettes bit, or getting a pool cue cracked over me, but I really think I’d get some good letter-writing done. Lockdown has offered all the perks of prison (time) and none of the cons (prison), and yet what have I done with it? Watched part, but somehow still not all, of The Sopranos. That’s not really good enough.

This boredom is dangerous, because I’m not the only one experiencing it. Humans can only live in fear for so long, and I think, for a lot of us, being high-key scared of coronavirus wore off some time around June. Second lockdown has been a poor impersonation of the first one – no clapping, no supermarket queues, no Houseparty, The Undoing – but we wore through our boredom reserves and gnawed at the core of the human condition.

Though I think it’s psychologically ungreat for the biggest health threat of my lifetime to be reduced to a background hum of danger, an unseen force that just makes me swerve people in the corridors of my block of flats as I go downstairs for the post and not much else, it’s possibly even worse that we’ve worn boredom down to the bone. If we’ve worked through fear, and worked our way through boredom, what, really, is there left? Speaking only for myself – someone who mildly considered buying prescription orange-tinted glasses this week just to feel something – the answer can only be “chaos”.

by Joel Golby, The Guardian |  Read more:
Image: Matthew Horwood/Getty Images

Five Lessons From Dave Chappelle

... the first lesson from Dave Chappelle’s latest release on Instagram, Unforgiven, is that one best not compete with Chappelle when it comes to story-telling; the way in which the comedian weaves together multiple stories from his childhood on up to the present to make his argument about why he should be paid for the rights to stream Chappelle’s Show is truly extraordinary.


To that end, I thought a more prosaic approach might be in order: Chappelle’s 18-minute special, which I highly suggest you watch in full, is chock-full of insights about how the Internet has transformed the entertainment industry specifically, and business broadly; my goal is to, in my own clumsy way, highlight and expand on those insights. (...)

Lesson Two: Talent in an Analog World

Chappelle may have been preternaturally gifted, but that wasn’t enough to avoid being broke in the early 2000s when he signed that contract with Comedy Central. Granted, Chappelle was almost certainly scratching out a living doing standup, but to truly make it big meant signing up with a network (or, in the case of music, a label), because they controlled distribution at scale.

That’s the big difference between stand-up and something like Chappelle’s Show: when it comes to the former your income is directly tied to your output; if you do a live show, you get paid, and if you don’t, you don’t. A TV show or record, on the other hand, only needs to be made once, at which point it can not only be shown across the country or across the world, but can also be shown again and again.

It’s the latter that is the key to getting rich as a creator, but in the analog world there were two big obstacles facing creators: first, the cost of creating a show or record was very high, and second, it was impossible to get said show or record distributed even if you managed to get it made. The networks and labels were the ones that had actual access to customers, whether that be via theaters, cable TV, record stores, or whatever physical channel existed.

Over the last two decades, though, technology has demolished both obstacles: anyone with access to a computer has access to the tools necessary to create compelling content, and, more importantly, the Internet has made distribution free. Of course the Internet did exist when Chappelle signed that contract, but there are two further differences: first, the advent of broadband, which makes far richer content accessible, and second, social networks, which provide far more reach than traditional channels, for free. Today it is far more viable for talent to not only create content and distribute it, but also promote it in a way that has tangible economic benefits.

Lesson Three: The House Wins

What is noteworthy about Chappelle’s argument is that he is quite ready to admit that everyone involved is acting legally.

From the perspective of 2020, and Chappelle’s overall point about how he feels his content was taken from him, this seems blatantly unfair. At the same time, from a network’s perspective, Chappelle’s success pays for all of the other shows that failed. It’s the same idea as the music industry: yes, record companies claim rights to your recordings forever, but for the vast majority of artists those rights are worthless. In fact, for that vast majority of artists, they represent a loss, because the money the network or label spent on making the show or record, promoting it, and distributing it, is gone forever.

There is an analogy to venture capital here, which I made five years ago in the context of Tidal:
This is why, by the way, I’m generally quite unsympathetic to artists belly-aching about how unfair their labels are. Is it unfair that all of the artists who don’t break through are not compelled to repay the labels the money that was invested in them? No one begrudges venture capitalists for profiting when a startup IPOs, because that return pays for all the other startups in the portfolio that failed.
It’s not a perfect analogy, in part because the output is very different: a founder will typically only ever have one company, so of course they retain a much more meaningful ownership stake from the beginning; an artist, on the other hand, will hopefully produce new art, which they will be in a much stronger position to monetize if their initial efforts are successful. Chappelle, for example, earns around $20 million per stand-up special on Netflix; Taylor Swift, another artist embroiled in an ongoing controversy around rights to her original work, fully owns the rights for her two most recent records.

The lesson to be learned, though, is that for many years venture capitalists, networks, and record labels could ensure that the expected value of their bets was firmly in their favor. There were more entrepreneurs that wanted to start companies, more comedians that wanted to make TV shows, and more musicians that wanted to make records than there was money to fund them, which meant the house always came out ahead: sure, money was lost on companies, comedians, and musicians that failed, but the upside earned by those that succeeded more than made up for it.
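
A toy portfolio makes the arithmetic plain — the numbers are invented for illustration, not drawn from any actual fund or network:

```python
# Invented numbers: ten $1M bets, one of which returns 20x.
stakes  = [1.0] * 10                 # $1M staked on each of ten acts
payoffs = [20.0] + [0.0] * 9         # one breakout hit, nine write-offs
profit = sum(payoffs) - sum(stakes)
print(f"net profit: ${profit:.0f}M despite a 90% failure rate")  # $10M
```

As long as the occasional hit pays off at a high enough multiple, the house profits even when the overwhelming majority of its bets go to zero — which is exactly why networks and labels could claim rights in perpetuity.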

Over the last two decades venture has been flooded with new sources of capital, resulting in far more founder-friendly terms than before; comedy, meanwhile, has been a particularly notable beneficiary of the podcast boom, as more and more artists create shows that are inexpensive to produce yet extremely lucrative for the artist. Music has seen its own independent artists emerge, although the labels, thanks in part to the power of their back catalogs, have retained their power longer than many expected. Still, the inevitable outcome of Lesson Two is that Lesson Three is shakier than ever.

Lesson Four: Aggregators and the Individual

The one company that comes out looking great is Netflix.

Technically speaking, Netflix did exist when Chappelle negotiated that contract with Comedy Central, but the company was a DVD-by-mail service; the streaming iteration that Chappelle is referring to wasn’t viable back then. Indeed, the entire premise of the streaming company is that it takes advantage of the changes wrought by the Internet to achieve distribution that is not simply equivalent to a TV network, but actually superior, both in terms of reaching the entire world and also in digitizing time. On Netflix, everything is available at anytime anywhere, because of the Internet.

Netflix’s integration of distribution and production also means that they are incentivized to care more about the perspective of an individual artist than a network; that is the optimal point of modularity for the streaming company. At the same time, it is worth noting that Netflix is actually claiming even more rights for their original content than networks ever did, in exchange for larger up-front payments. This makes sense given Netflix’s model, which is even more deeply predicated on leveraging fixed cost investments in content than networks ever were, not simply to retain users but also to decrease the cost of acquiring new ones.

by Ben Thompson, Stratechery |  Read more:
Image: YouTube

Tuesday, December 1, 2020


Ernst Haas, A Cloudy Night Sky over the Western Skies Motor Motel in Colorado, 1978

Ghost Kitchens: How Taxpayers are Picking Up the Bill for the Destruction of Local Restaurants

This past summer, Kroger, one of the nation’s largest grocery store chains, received a 15-year, 75 percent sales tax exemption for setting up two new data centers in Ohio. This is the definition of unnecessary. Kroger is not exactly poverty struck – it accrued profits of more than $2 billion last year. Moreover, subsidizing data centers is for suckers. Companies need to build that infrastructure anyway, and data centers don’t create all that many positions. Municipalities and state governments that subsidize data centers sometimes literally pay upwards of seven figures per job.

Then it got worse: Kroger is using its data to move into what’s known as the “ghost kitchen” business, something that is a terrible development for local independent restaurants. So, Ohio taxpayers are helping a massive supermarket chain put other businesses out of business, including their favorite corner eatery. That Ohio is doing this in a year when small restaurant proprietors are under all but existential threat adds insult to injury.

Ghost kitchens are as spooky as they sound. Big corporations like gig companies, supermarkets, and fast food chains use the data they collect through their various lines of business to create delivery-only food operations. But here’s the catch: They often disguise the fact that they aren’t actual restaurants. They give them homey-sounding names, like Seaside or Lorenzo’s, and build out web pages that make them appear to be places you could drop in on. In fact, they operate out of warehouses and other industrial spaces, and are backed by big investors and corporations whose participation is often hidden by a web of shell companies.

The poster children for this issue are the big delivery app companies — UberEats, GrubHub, and DoorDash — which use the data they collect doing deliveries for restaurants (data they don’t subsequently share with those restaurants) to see what sort of items sell best and when. Then, much like Amazon weaponizes the data it collects from small businesses that sell on its platform to create its own products, the delivery apps use the data to create their own, delivery-only food outlets, with the aim of cutting real restaurants out of the business entirely. (Amazon, of course, won’t miss this opportunity either: It has invested in a delivery and ghost kitchen company called Deliveroo.)

This model of operating a platform and then also competing on it should just be illegal, even though it’s widespread. Whether it’s Amazon using info gained from its third-party sellers to steal products, Google using data gleaned from its advertising technology to outbid publishers, or delivery apps cutting real restaurants out of the restaurant business, the issue is the same: The corporation that runs the infrastructure has an anticompetitive advantage over all of the other participants. As Sen. Elizabeth Warren, D-MA, succinctly puts it, “You can be an umpire, or you can be a player—but you can’t be both.”

But even if authorities woke up and banned Uber from going into the ghost kitchen business, that wouldn’t stop Kroger. It’s got the data too, and it doesn’t need to trick rival businesses into turning it over. It’s the largest grocer in the U.S., and the second-largest in-person retailer after Walmart. It runs stores under its own corporate name, as well as under Harris Teeter and 14 other brands, and it is using info gained from its own shoppers. Kroger is partnering with an outfit called ClusterTruck that uses algorithms to remove the so-called “pain points” of ordering food, which I suppose means orders showing up cold, or something. (...)

Think of it this way: taxpayers — in this case in Ohio — are subsidizing the destruction of small, local, independent businesses in order to benefit the biggest corporations in the country. (What makes this even more offensive: Kroger is also headquartered in Ohio. It doesn’t need incentives to build new facilities in the state, since the cost of starting from scratch in some other locale would probably be higher, even in the absence of subsidies.)

by Pat Garofalo, Public Seminar | Read more:
Image: uncredited via

Travel Industry Is Up Against a Psychological Make-or-Break


The Travel Industry Is Up Against a Psychological Make-or-Break (Bloomberg)
Image: Daniel Slim/AFP

A Successful U.S. Missile Intercept Ends the Era of Nuclear Stability

This month, an intercontinental ballistic missile was fired in the general direction of the Hawaiian Islands. During its descent a few minutes later, while still outside the Earth’s atmosphere, it was struck by another missile that destroyed it.

With that detonation, the world’s tenuous nuclear balance suddenly threatened to be knocked out of kilter. The danger of atom bombs being used again was already increasing. Now it’s grown once more.

The ICBM flying over the Pacific was an American dummy designed to test a new kind of interceptor technology. As it flew, satellites spotted it and alerted an Air Force base in Colorado, which in turn communicated with a Navy destroyer positioned northeast of Hawaii. This ship, the USS John Finn, fired its own missile, which, in the jargon, “hit and killed” the incoming one.

At first glance, this sort of technological wizardry would seem to be a cause not only for awe but also for joy, for it promises to protect the U.S. from missile attacks by North Korea, for example. But in the weird logic of nuclear strategy, a breakthrough intended to make us safer could end up making us less safe.

That’s because the new interception technology cuts the link between offense and defense that underlies all calculations about nuclear scenarios. Since the Cold War, stability — and thus peace — has been preserved through the macabre reality of mutual assured destruction, or MAD. No nation will launch a first strike if it expects immediate retaliation in kind. A different way of describing MAD is mutual vulnerability.

If one player in this game-theory scenario suddenly gets a shield (these American systems are in fact called Aegis), this mutual vulnerability is gone. Adversaries, in this case mainly Russia but increasingly China too, must assume that their own deterrent is no longer effective because they may not be able to successfully strike back.
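
A toy payoff calculation, with invented numbers rather than anything from the article, makes the shift concrete: under mutual vulnerability a first strike is ruinous for the striker, while a shield that blunts retaliation can flip the sign of that calculation.

    # Toy model of first-strike incentives; all payoffs are illustrative only.
    def first_strike_payoff(attacker_has_shield: bool) -> int:
        gain = 20  # hypothetical strategic gain from striking first
        # -100: the victim's second strike gets through in full
        #  -10: a working shield intercepts most of the retaliation
        retaliation = -10 if attacker_has_shield else -100
        return gain + retaliation

    print(first_strike_payoff(False))  # -80: striking first is irrational; MAD holds
    print(first_strike_payoff(True))   #  10: with a shield, deterrence erodes

The numbers are arbitrary; the point is only the sign change, and that an adversary who merely suspects the change must plan as if its deterrent no longer works.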

For this reason, defensive escalation has become almost as controversial as the offensive kind. Russia has been railing against land-based American interceptor systems in places like eastern Europe and Alaska. But this month’s test was the first in which a ship did the intercepting. This twist means that before long the U.S. or another nation could protect itself from all sides.

This new uncertainty complicates a situation that was already becoming fiendishly intricate. The U.S. and Russia, which have about 90% of the world’s nukes, have ditched two arms-control treaties in as many decades. The only one remaining, called New START, is due to expire on Feb. 5, a mere 16 days after Joe Biden takes office as president. The Nuclear Non-Proliferation Treaty, which has for 50 years tried to keep nations without nukes from acquiring them, is also in deep trouble, and due to be renegotiated next year. Iran’s intentions remain unknown.

At the same time, both the U.S. and Russia are modernizing their arsenals, while China is adding to its own as fast as it can. Among the new weapons are nukes carried by hypersonic missiles, which are so fast that the leaders of the target nation only have minutes to decide what’s incoming and how to respond. They also include so-called tactical nukes, with “smaller” (in a very relative sense) payloads that make them more suitable for conventional wars, thus lowering the threshold for their use.

The risk thus keeps rising that a nuclear war starts by accident, miscalculation or false alarm, especially when factoring in scenarios that involve terrorism, rogue states or conflicts in outer space and cyberspace. In a sort of global protest against this insanity, 84 countries without nukes have signed the Treaty on the Prohibition of Nuclear Weapons, which will take effect next year. But neither the nine nuclear nations nor their closest allies will ever sign it.

by Andreas Kluth, Bloomberg | Read more:
Image: U.S. Navy/Getty Images