Monday, February 15, 2021
Move Fast and Break Things
Taking an expansive view of future prospects for the business, forward-thinking CEO Rick Benson was reportedly hoping Thursday that his company would be able to capture a new audience by making their signature product worse in every conceivable way. “Let’s face it, this industry moves fast and we have to be ready to cater to new users by altering every single thing that was popular in the first place,” said Benson, confirming his intention to court additional demographics by ensuring the next version was buggier, slower, and contained a more counterintuitive interface. “We can’t be afraid to innovate and push boundaries, which is why I’ve instructed our development team to get to work, taking a look at what’s been working for the last few years and then destroying from there.” Benson added that the company would be sure to retain their current audience by walking back some of the most horrific changes to their product in a couple years.
by The Onion | Read more:
Image: uncredited
Zero-Sum Thinking on Race and Wealth
By “we,” I mean America at large. As for “nice things,” I don’t picture self-driving cars, hovercraft backpacks or laundry that does itself. Instead, I mean the basic aspects of a high-functioning society: well-funded schools, reliable infrastructure, wages that keep workers out of poverty, or a comprehensive public health system equipped to handle pandemics — things that equally developed but less wealthy nations seem to have.
In 2010, eight years into my time as an economic policy wonk at Demos, a progressive policy research group, budget deficits were on the rise. The Great Recession had decimated tax revenue, requiring more public spending to restart the economy.
But both the Tea Party and many in President Barack Obama’s inner circle were calling for a “grand bargain” to shrink the size of government by capping future public outlays and slashing Social Security, Medicaid and Medicare. Despite the still-fragile recovery and evidence that corporations were already paring back retirement benefits and ratcheting down real wages, the idea gained steam.
On a call with a group of all-white economist colleagues, we discussed how to advise leaders in Washington against this disastrous retrenchment. I cleared my throat and asked: “So where should we make the point that all these programs were created without concern for their cost when the goal was to build a white middle class, and they paid for themselves in economic growth? Now these guys are trying to fundamentally renege on the deal for a future middle class that would be majority people of color?”
Nobody answered. I checked to see if I was muted.
Finally, one of the economists breached the awkward silence. “Well, sure, Heather. We know that — and you know that — but let’s not lead with our chin here,” he said. “We are trying to be persuasive.”
The sad truth is that he was probably right. Soon, the Tea Party movement, harnessing the language of fiscal responsibility and the subtext of white grievance, would shut down the federal government, win across-the-board cuts to public programs and essentially halt the legislative function of the federal government for the next six years. The result: A jobless recovery followed by a slow, unequal economic expansion that hurt Americans of all backgrounds.
The anti-government stinginess of traditional conservatism, along with the fear of losing social status held by many white people, now broadly associated with Trumpism, have long been connected. Both have sapped American society’s strength for generations, causing a majority of white Americans to rally behind the draining of public resources and investments. Those very investments would provide white Americans — the largest group of the impoverished and uninsured — greater security, too: A new Federal Reserve Bank of San Francisco study calculated that in 2019, the country’s output would have been $2.6 trillion greater if the gap between white men and everyone else were closed. And a 2020 report from analysts at Citigroup calculated that if America had adopted policies to close the Black-white economic gap 20 years ago, U.S. G.D.P. would be an estimated $16 trillion higher.
by Heather C. McGhee, NY Times | Read more:
Image: Margaret Bourke-White/The Life Picture Collection — Getty Images, via Art Resource, NY
[ed. It occurs to me that probably over half the country isn't old enough to remember a pro-active government, nimble enough to confront and solve the big problems of its day. Ever since Reagan in the 80s (government is not the solution to our problems; government is the problem), and hacks like Newt Gingrich and Grover Norquist (I'm not in favor of abolishing the government. I just want to shrink it down to the size where we can drown it in the bathtub) Republicans have been steadily strangling government's ability to function, increasing its inefficiencies, and thereby making their point retroactively. To see what good government can do, look no further than this example.]
Labels:
Economics,
Government,
History,
Politics,
Relationships
What Happens If China Makes First Contact?
Last January, the Chinese Academy of Sciences invited Liu Cixin, China’s preeminent science-fiction writer, to visit its new state-of-the-art radio dish in the country’s southwest. Almost twice as wide as the dish at America’s Arecibo Observatory, in the Puerto Rican jungle, the new Chinese dish is the largest in the world, if not the universe. Though it is sensitive enough to detect spy satellites even when they’re not broadcasting, its main uses will be scientific, including an unusual one: The dish is Earth’s first flagship observatory custom-built to listen for a message from an extraterrestrial intelligence. If such a sign comes down from the heavens during the next decade, China may well hear it first.
In some ways, it’s no surprise that Liu was invited to see the dish. He has an outsize voice on cosmic affairs in China, and the government’s aerospace agency sometimes asks him to consult on science missions. Liu is the patriarch of the country’s science-fiction scene. Other Chinese writers I met attached the honorific Da, meaning “Big,” to his surname. In years past, the academy’s engineers sent Liu illustrated updates on the dish’s construction, along with notes saying how he’d inspired their work.
But in other ways Liu is a strange choice to visit the dish. He has written a great deal about the risks of first contact. He has warned that the “appearance of this Other” might be imminent, and that it might result in our extinction. “Perhaps in ten thousand years, the starry sky that humankind gazes upon will remain empty and silent,” he writes in the postscript to one of his books. “But perhaps tomorrow we’ll wake up and find an alien spaceship the size of the Moon parked in orbit.” In recent years, Liu has joined the ranks of the global literati. In 2015, his novel The Three-Body Problem became the first work in translation to win the Hugo Award, science fiction’s most prestigious prize. Barack Obama told The New York Times that the book—the first in a trilogy—gave him cosmic perspective during the frenzy of his presidency. Liu told me that Obama’s staff asked him for an advance copy of the third volume.
At the end of the second volume, one of the main characters lays out the trilogy’s animating philosophy. No civilization should ever announce its presence to the cosmos, he says. Any other civilization that learns of its existence will perceive it as a threat to expand—as all civilizations do, eliminating their competitors until they encounter one with superior technology and are themselves eliminated. This grim cosmic outlook is called “dark-forest theory,” because it conceives of every civilization in the universe as a hunter hiding in a moonless woodland, listening for the first rustlings of a rival.
Liu’s trilogy begins in the late 1960s, during Mao’s Cultural Revolution, when a young Chinese woman sends a message to a nearby star system. The civilization that receives it embarks on a centuries-long mission to invade Earth, but she doesn’t care; the Red Guard’s grisly excesses have convinced her that humans no longer deserve to survive. En route to our planet, the extraterrestrial civilization disrupts our particle accelerators to prevent us from making advancements in the physics of warfare, such as the one that brought the atomic bomb into being less than a century after the invention of the repeating rifle.
Science fiction is sometimes described as a literature of the future, but historical allegory is one of its dominant modes. Isaac Asimov based his Foundation series on classical Rome, and Frank Herbert’s Dune borrows plot points from the past of the Bedouin Arabs. Liu is reluctant to make connections between his books and the real world, but he did tell me that his work is influenced by the history of Earth’s civilizations, “especially the encounters between more technologically advanced civilizations and the original settlers of a place.” One such encounter occurred during the 19th century, when the “Middle Kingdom” of China, around which all of Asia had once revolved, looked out to sea and saw the ships of Europe’s seafaring empires, whose ensuing invasion triggered a loss in status for China comparable to the fall of Rome.
This past summer, I traveled to China to visit its new observatory, but first I met up with Liu in Beijing. By way of small talk, I asked him about the film adaptation of The Three-Body Problem. “People here want it to be China’s Star Wars,” he said, looking pained. The pricey shoot ended in mid-2015, but the film is still in postproduction. At one point, the entire special-effects team was replaced. “When it comes to making science-fiction movies, our system is not mature,” Liu said.
I had come to interview Liu in his capacity as China’s foremost philosopher of first contact, but I also wanted to know what to expect when I visited the new dish. After a translator relayed my question, Liu stopped smoking and smiled.
“It looks like something out of science fiction,” he said. (...)
Seti researchers have looked for civilizations that shoot outward in all directions from a single origin point, becoming an ever-growing sphere of technology, until they colonize entire galaxies. If they were consuming lots of energy, as expected, these civilizations would give off a telltale infrared glow, and yet we don’t see any in our all-sky scans. Maybe the self-replicating machinery required to spread rapidly across 100 billion stars would be doomed by runaway coding errors. Or maybe civilizations spread unevenly throughout a galaxy, just as humans have spread unevenly across the Earth. But even a civilization that captured a tenth of a galaxy’s stars would be easy to find, and we haven’t found a single one, despite having searched the nearest 100,000 galaxies.
Some seti researchers have wondered about stealthier modes of expansion. They have looked into the feasibility of “Genesis probes,” spacecraft that can seed a planet with microbes, or accelerate evolution on its surface, by sparking a Cambrian explosion, like the one that juiced biological creativity on Earth. Some have even searched for evidence that such spacecraft might have visited this planet, by looking for encoded messages in our DNA—which is, after all, the most robust informational storage medium known to science. They too have come up empty. The idea that civilizations expand ever outward might be woefully anthropocentric.
Liu did not concede this point. To him, the absence of these signals is just further evidence that hunters are good at hiding. He told me that we are limited in how we think about other civilizations. “Especially those that may last millions or billions of years,” he said. “When we wonder why they don’t use certain technologies to spread across a galaxy, we might be like spiders wondering why humans don’t use webs to catch insects.” And anyway, an older civilization that has achieved internal peace may still behave like a hunter, Liu said, in part because it would grasp the difficulty of “understanding one another across cosmic distances.” And it would know that the stakes of a misunderstanding could be existential.
First contact would be trickier still if we encountered a postbiological artificial intelligence that had taken control of its planet. Its worldview might be doubly alien. It might not feel empathy, which is not an essential feature of intelligence but instead an emotion installed by a particular evolutionary history and culture. The logic behind its actions could be beyond the powers of the human imagination. It might have transformed its entire planet into a supercomputer, and, according to a trio of Oxford researchers, it might find the current cosmos too warm for truly long-term, energy-efficient computing. It might cloak itself from observation, and power down into a dreamless sleep lasting hundreds of millions of years, until such time as the universe has expanded and cooled to a temperature that allows for many more epochs of computing.
Labels:
Culture,
Fiction,
Literature,
Science,
Technology
Saturday, February 13, 2021
Clubhouse Is the Anti-Twitter
Clubhouse, the exclusive group-voice-chat app that launched last year to fanfare from the venture capital set, erupted into the headlines this week when Tesla CEO Elon Musk and Facebook CEO Mark Zuckerberg dropped in for conversations with other tech luminaries. “Elon Musk’s Clubhouse banter with Robinhood CEO triggers stampede for Clubhouse app,” Reuters reported. “Clubhouse’s Moment Arrives,” Platformer’s Casey Newton declared. Both cameos strained the app’s capacity; Zuckerberg’s apparently broke it, at least briefly.
There was also backlash: The Information editor-in-chief Jessica Lessin pointed out that these events’ organizers blocked many journalists from attending; the New York Times reporter Taylor Lorenz suggested they were excluding female journalists in particular.
Meanwhile, Twitter has been testing a rival feature called Spaces, in hopes that it will soon have a moment of its own. The stage is set for a showdown between two social media companies whose target audiences substantially overlap. But their founding ideas are fundamentally different, in ways that could shape how their respective products evolve.
The Pattern
Open vs. closed social networks
A decade ago, Twitter was hailed by some pundits as a democratizing force for its role in movements like the Arab Spring. That narrative has since been complicated, muddled, and contradicted many times over, and you’re more likely to hear today that Twitter, Facebook, and other social platforms are destroying democracy rather than fomenting it. But there’s another, broader sense in which Twitter has always been at least somewhat democratic. The structure of Twitter’s platform is essentially flat and open, in the sense that pretty much anyone can join, tweet, reply to anyone else, and have at least a remote chance to reach a massive audience.
Twitter is also loosely democratic in the sense that the platform runs in large part on the wisdom of the crowd — or mob rule, to take the darker view. Twitter amplifies the tweets that get the most engagement, regardless of who wrote them, and regardless of who’s doing the retweeting or favoriting. That means that a relative unknown with 42 followers can tweet a snarky reply to an account with 42 million followers and get more favorites than the original, at least in theory. It means a grassroots movement like #MeToo or Black Lives Matter can break through to mainstream audiences without the approval of official gatekeepers — and, on the flip side, that bots and trolls with frog avatars can run rampant with messages of racism and misogyny.
Having power and high social status in real life — commanding respect, deference, special privilege wherever you go — does not necessarily earn one the same treatment on Twitter. Yes, you’ll probably have more followers than ordinary folk, and you’ll have sycophants who fave your bad jokes or shower you with flattery in pursuit of your good graces. But you’ll also have a target on your back. Any misstatement on your part is likely to be ruthlessly dissected and mocked by people you’ve never met. You can get ratioed or even become the butt of a trending topic on the basis of a bad tweet, and there’s very little you can do to stop it.
None of this is to say that Twitter is truly democratic or egalitarian — nor that it would be entirely a good thing even if it were. Blue checkmarks, follower counts, and various forms of platform manipulation and bias all reinforce power dynamics and inequalities. And some of the same dynamics that make it conducive to activists speaking truth to power, or comedians dunking on a blowhard’s hypocrisy, are also part of what make it a breeding ground for targeted harassment, misinformation, and state-backed influence campaigns, among other ills. On a quotidian level, they simply make Twitter a stressful and divisive place, with lots of rude assholes and posturing and infighting.
I frame Twitter this way as a lens through which to view its contrasts with Clubhouse. Along many of the same axes on which Twitter can be characterized as flat and open, Clubhouse is hierarchical and closed — more oligarchic than democratic. That is almost surely intentional, and indeed a big part of its appeal to some.
Exclusivity has been a theme of Clubhouse from the outset. The app launched in April 2020 in a private beta-test mode, courting tech investors and celebrities as early adopters partly on the promise that they’d be able to talk to each other without the chaos and din of Twitter and other platforms. It’s based around user-generated groups and discussion panels, which happen live and exclusively via voice chat, sometimes in front of an audience. Nearly a year after its launch, Clubhouse is big, fast-growing, and making continual headlines — and yet it’s still private: You have to be invited by an existing member to get in, so just being on it remains something of a status symbol in some circles. It’s also still only available on iOS. (...)
But the exclusivity in this case is by no means an accident; it’s central to the platform’s dynamics. The app is built around “rooms,” which are group chats convened by specific users around a specific topic at specific times. There are also “clubs,” or private groups, whose founders are empowered to set and enforce membership terms. The rooms, in both concept and design, bear a striking resemblance to expert panels at an industry conference. (Too often, they’re all-male panels.) Clubhouse, at this juncture in its development, feels like the answer to the question, “What if SXSW, but an app?”
As with Discord, another fast-growing voice-based platform, this structure is conducive to conversation in a way that the leading social platforms — Facebook, Instagram, TikTok, Twitter — aren’t. It guards against what danah boyd calls “context collapse,” in which you think you’re talking to a certain group of people with a shared set of assumptions, but you’re also reaching different people who might interpret your words in a very different way. The medium of live voice interaction also lends itself to the rote social courtesies of normal human interaction, unlike Twitter.
Within each room, there is a hierarchical division of roles. It is run by one or more moderators, who own the “stage” and get to control who can speak, and when. If you’re in the audience, you have to raise your hand and hope they call on you if you want to say anything. They don’t have to call on anyone they don’t want to hear from; if they hear from you and decide they don’t like you, they can mute your microphone or even boot you from the room. There is a hierarchy even within the audience: Those who are followed by one or more moderators appear at the top, the equivalent of a front-row seat, and tend to be more likely to get called on. (It’s also possible to have rooms that function more like a group chat, like Houseparty without the video, but those haven’t been Clubhouse’s main draw so far.)
by Will Oremus, One Zero | Read more:
Image: Jakub Porzycki/NurPhoto via Getty Images
[ed. See also: Clubhouse Is Suggesting Users Invite Their Drug Dealers and Therapists; and The Case for Twitter Spaces (One Zero); and, Clubhouse, a Tiny Audio Chat App, Breaks Through (NYT).]
Image: Jakub Porzycki/NurPhoto via Getty Images
[ed. See also: Clubhouse Is Suggesting Users Invite Their Drug Dealers and Therapists; and The Case for Twitter Spaces (One Zero); and, Clubhouse, a Tiny Audio Chat App, Breaks Through (NYT).]
How Vulnerable is the World?
One way of looking at human creativity is as a process of pulling balls out of a giant urn. The balls represent ideas, discoveries and inventions. Over the course of history, we have extracted many balls. Most have been beneficial to humanity. The rest have been various shades of grey: a mix of good and bad, whose net effect is difficult to estimate.
What we haven’t pulled out yet is a black ball: a technology that invariably destroys the civilisation that invents it. That’s not because we’ve been particularly careful or wise when it comes to innovation. We’ve just been lucky. But what if there’s a black ball somewhere in the urn? If scientific and technological research continues, we’ll eventually pull it out, and we won’t be able to put it back in. We can invent but we can’t un-invent. Our strategy seems to be to hope that there is no black ball.
Thankfully for us, humans’ most destructive technology to date – nuclear weapons – is exceedingly difficult to master. But one way to think about the possible effects of a black ball is to consider what would happen if nuclear reactions were easier. In 1933, the physicist Leo Szilard got the idea of a nuclear chain reaction. Later investigations showed that making an atomic weapon would require several kilos of plutonium or highly enriched uranium, both of which are very difficult and expensive to produce. However, imagine a counterfactual history in which Szilard realised that a nuclear bomb could be made in some easy way – over the kitchen sink, say, using a piece of glass, a metal object and a battery.
Szilard would have faced a dilemma. If he didn’t tell anyone about his discovery, he would be unable to stop other scientists from stumbling upon it. But if he did reveal his discovery, he would guarantee the further spread of dangerous knowledge. Imagine that Szilard confided in his friend Albert Einstein, and they decided to write a letter to the president of the United States, Franklin D Roosevelt, whose administration then banned all research into nuclear physics outside of high-security government facilities. Speculation would swirl around the reason for the heavy-handed measures. Groups of scientists would wonder about the secret danger; some of them would figure it out. Careless or disgruntled employees at government labs would let slip information, and spies would carry the secret to foreign capitals. Even if by some miracle the secret never leaked, scientists in other countries would discover it on their own.
Or perhaps the US government would move to eliminate all glass, metal and sources of electrical current outside of a few highly guarded military depots? Such extreme measures would meet with stiff opposition. However, after mushroom clouds had risen over a few cities, public opinion would shift. Glass, batteries and magnets could be seized, and their production banned; yet pieces would remain scattered across the landscape, and eventually they would find their way into the hands of nihilists, extortionists or people who just want ‘to see what would happen’ if they set off a nuclear device. In the end, many places would be destroyed or abandoned. Possession of the proscribed materials would have to be harshly punished. Communities would be subject to strict surveillance: informant networks, security raids, indefinite detentions. We would be left to try to somehow reconstitute civilisation without electricity and other essentials that are deemed too risky.
That’s the optimistic scenario. In a more pessimistic scenario, law and order would break down entirely, and societies would split into factions waging nuclear wars. The disintegration would end only when the world had been ruined to the point where it was impossible to make any more bombs. Even then, the dangerous insight would be remembered and passed down. If civilisation arose from the ashes, the knowledge would lie in wait, ready to pounce once people started again to produce glass, electrical currents and metal. And, even if the knowledge were forgotten, it would be rediscovered when nuclear physics research resumed.
In short: we’re lucky that making nuclear weapons turned out to be hard. We pulled out a grey ball that time. Yet with each act of invention, humanity reaches anew into the urn.
Suppose that the urn of creativity contains at least one black ball. We call this ‘the vulnerable world hypothesis’. The intuitive idea is that there’s some level of technology at which civilisation almost certainly gets destroyed, unless quite extraordinary and historically unprecedented degrees of preventive policing and/or global governance are implemented. Our primary purpose isn’t to argue that the hypothesis is true – we regard that as an open question, though it would seem unreasonable, given the available evidence, to be confident that it’s false. Instead, the point is that the hypothesis is useful in helping us to bring to the surface important considerations about humanity’s macrostrategic situation.
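The claim that continued research "eventually" pulls out any black ball in the urn is just geometric decay, which a toy calculation can make concrete. The per-invention probability below is a purely illustrative assumption, not a figure from the essay:

```python
# Toy model of the 'urn of creativity': each invention ('draw') is assumed,
# purely for illustration, to have a small independent probability p of
# being a black ball. Survival probability then decays geometrically.
def survival_probability(p: float, draws: int) -> float:
    """Probability that no black ball has appeared after `draws` inventions."""
    return (1 - p) ** draws

p = 0.001  # hypothetical per-invention chance of a civilisation-ending technology
for n in (100, 1_000, 10_000):
    print(f"after {n:>6} draws: P(no black ball) = {survival_probability(p, n):.4f}")
```

However small p is, the survival probability heads to zero as draws accumulate, which is the essay's point that luck is not a strategy.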
The above scenario – call it ‘easy nukes’ – represents one kind of potential black ball, where it becomes easy for individuals or small groups to cause mass destruction. Given the diversity of human character and circumstance, for any imprudent, immoral or self-defeating action, there will always be some fraction of humans (‘the apocalyptic residual’) who would choose to take that action – whether motivated by ideological hatred, nihilistic destructiveness or revenge for perceived injustices, as part of some extortion plot, or because of delusions. The existence of this apocalyptic residual means that any sufficiently easy tool of mass destruction is virtually certain to lead to the devastation of civilisation. (...)
It would be bad news if the vulnerable world hypothesis were correct. In principle, however, there are several responses that could save civilisation from a technological black ball. One would be to stop pulling balls from the urn altogether, ceasing all technological development. That’s hardly realistic though; and, even if it could be done, it would be extremely costly, to the point of constituting a catastrophe in its own right.
Another theoretically possible response would be to fundamentally reengineer human nature to eliminate the apocalyptic residual; we might also do away with any tendency among powerful actors to risk civilisational devastation even when vital national security interests are served by doing so, as well as any tendency among the masses to prioritise personal convenience when this contributes an imperceptible amount of harm to some important global good. Such global preference reengineering seems very difficult to pull off, and it would come with risks of its own. It’s also worth noting that partial success in such preference reengineering wouldn’t necessarily bring a proportional reduction in civilisational vulnerability. For example, reducing the apocalyptic residual by 50 per cent wouldn’t cut the risks from the ‘easy nukes’ scenarios in half, since in many cases any lone individual could single-handedly devastate civilisation. We could only significantly reduce the risk, then, if the apocalyptic residual were virtually entirely eliminated worldwide.
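The point that halving the apocalyptic residual does not halve the risk can be checked with a sketch: if each of k motivated individuals independently succeeds with probability q, total risk is 1 − (1 − q)^k, which saturates near 1 for large k. Both numbers below are hypothetical, chosen only to show the shape of the curve:

```python
# Risk that at least one of k independent actors devastates civilisation,
# each succeeding with an assumed probability q. Figures are illustrative.
def devastation_risk(k: int, q: float) -> float:
    return 1 - (1 - q) ** k

k, q = 1_000, 0.01  # assumed: 1,000 motivated individuals, 1% success chance each
full = devastation_risk(k, q)
halved = devastation_risk(k // 2, q)
print(f"full residual ({k} actors):    risk = {full:.4f}")
print(f"halved residual ({k // 2} actors): risk = {halved:.4f}")
```

With these assumptions both risks are near certainty; only driving k close to zero moves the total risk appreciably, matching the essay's conclusion.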
That leaves two options for making the world safe against the possibility that the urn contains a black ball: one, extremely reliable policing that could prevent any individual or small group from carrying out highly dangerous illegal actions; and two, strong global governance that could solve the most serious collective action problems, and ensure robust cooperation between states – even when they have strong incentives to defect from agreements, or refuse to sign on in the first place. The governance gaps addressed by these measures are the two Achilles’ heels of the contemporary world order. So long as they remain unprotected, civilisation remains vulnerable to a technological black ball. Unless and until such a discovery emerges from the urn, however, it’s easy to overlook how exposed we are.
Let’s consider what would be required to protect against these vulnerabilities.
by Nick Bostrom and Matthew van der Merwe, Aeon | Read more:
Image: Jonas Bendiksen/Magnum
Friday, February 12, 2021
Fauci Says All Americans Could Start to Get Vaccinated in April
On Thursday, Dr. Anthony Fauci, the nation’s top infectious disease expert, made a prediction that was like music to the ears of millions of Americans who aren’t eligible for COVID-19 vaccination yet.
“If you look at the projection, I would imagine by the time we get to April, that will be what I would call, for [lack] of better wording, ‘open season,’” Fauci told NBC’s “Today” show. “Namely, virtually anybody and everybody in any category could start to get vaccinated.”
April? That’s less than 50 days away. The U.S. vaccination campaign started 60 days ago, on Dec. 14. Since then, just 11.3 million Americans — mostly health workers, with a few seniors sprinkled in — have received both doses of either the Pfizer or Moderna vaccine. Another 24 million Americans have gotten their first shot and are awaiting their second.
The news has been filled with headlines about crashing appointment websites, struggling seniors and governors complaining about supply shortages. Meanwhile, we’ve only just started vaccinating Americans 65 or older; most essential workers aren’t even eligible yet.
So is Fauci offering false hope when he says that “anybody and everybody in any category” will be able to sign up for vaccination starting in April? Or is his projection realistic?
The answer, if you actually examine the numbers, is surprising — and encouraging. It turns out April isn’t out of the question at all.
The first thing to consider is the current pace of vaccination, which is faster than you might think. “If you compare now to what we were doing just literally a month ago,” Fauci said Thursday, “the escalation has really been considerable.”
He’s right. On Jan. 11, the U.S. was administering an average of 632,000 doses per day. Now we’re averaging 1.6 million. That’s not just a two-and-a-half-fold increase. It’s also more, already, than the revised goal of 1.5 million doses per day President Biden set two short weeks ago after critics said his previous target of 1 million doses per day was too low.
The next thing to consider is where supply is heading next. (Hint: it’s heading upward.) “As we get into March and April, the number of available doses will allow for much more of a mass vaccination approach, which is really much more accelerated than what you’re seeing now,” Fauci said Thursday.
Initially, logistical bottlenecks were slowing vaccination; many states were administering less than half the doses they’d received. But now that some of those knots have been untangled, the national share of available doses administered has climbed to 68 percent, with several states clearing 80 or even 90 percent.
Supply, in contrast, is what’s holding us back today; at the moment, doses administered are consistently outpacing doses distributed for the first time since the rollout began. But as Fauci said, this should change soon. Since Biden took office, the number of doses being sent to states has increased by 28 percent to 11 million doses a week, according to White House COVID-19 coordinator Jeffrey Zients. Starting Thursday, the administration will boost that number by another 5 percent, with 1 million doses going directly to 6,500 retail pharmacies and another 1 million going directly to 250 community health centers serving hard-to-reach groups such as homeless people, migrant workers and public housing residents.
Production is picking up too. At first, Pfizer and Moderna promised to deliver 100 million doses each by the end of March. But Pfizer recently added 20 million doses to that pledge — then announced it could ship all 200 million doses purchased by the U.S. before the end of May, or two months earlier than expected, because vaccinators can squeeze six or even seven doses out of vials that were supposed to contain just five.
At the same time, Moderna is “asking U.S. regulators to approve what it says could be a remarkably simple proposal to speed up the immunization of Americans against the coronavirus: Fill empty space in its vials with as many as 50 percent more doses,” according to the New York Times. If the change is approved, which could happen this month, it would theoretically allow Moderna to ship tens of millions of additional doses by the end of March and another 150 million by June.
To put that in perspective, about 68 million doses have been distributed over the last 60 days. Over the next 50 days — that is, by April — the U.S. could be getting 175 million more.
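The comparison above is easy to sanity-check with back-of-the-envelope arithmetic, using the dose totals and time spans as reported in the article:

```python
# Back-of-the-envelope check of the supply figures cited above.
doses_so_far = 68_000_000     # distributed over the first 60 days of the rollout
days_so_far = 60
projected_doses = 175_000_000  # projected over the following 50 days
days_projected = 50

rate_so_far = doses_so_far / days_so_far
rate_next = projected_doses / days_projected
print(f"pace to date:   {rate_so_far:,.0f} doses/day")
print(f"projected pace: {rate_next:,.0f} doses/day")
print(f"speed-up:       {rate_next / rate_so_far:.1f}x")
```

In other words, the projection implies roughly a tripling of the daily distribution rate, which is why Fauci's April timeline is not as far-fetched as it first sounds.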
And that’s not even counting the single-dose Johnson & Johnson vaccine, which the U.S. Food and Drug Administration is expected to approve this month, with 100 million doses to follow before July. Or the Novavax and AstraZeneca vaccines, which could also be available by April. Or the fact that on Thursday, the Biden administration announced that it had secured another 200 million doses from Pfizer and Moderna to be delivered in “regular increments” by the end of July — bringing the grand total from just those two manufacturers to 600 million doses, or enough to fully vaccinate every adult in America (and then some).
Administering so many additional millions of doses will be a challenge, but Fauci sounded confident Thursday. “I would imagine, and in fact, I’m fairly certain, that as we get into and toward the end of April, you’ll see … pharmacies, community vaccine centers, mobile units really stepping up the pace of vaccination,” he said. “Hopefully as we get into the early spring we’ll have a much greater acceleration of dosage.”
[ed. Competence matters. Vote wisely.]
by Andrew Romano, Yahoo News | Read more:
Image: Alex Wong/Getty Images
Chick Corea (1941-2021)
[ed. See also: Chick Corea, Jazz Pianist Who Expanded the Possibilities of the Genre, Dead at 79 (Rolling Stone); Chick Corea, Jazz Fusion Pioneer, Has Died Of Cancer At 79 (NPR); and Chick Corea: Hear 12 Essential Performances (NYT).]
[ed. Return to Forever was never far from my turntable. Some additional great videos in the articles.]
Thursday, February 11, 2021
Negative Interest Rates Are Coming, but There Is No Chance That They Will Work
We’ve sometimes addressed why negative interest rates are a bad idea. A corroborating data point: Sweden ditched its negative interest rate experiment in 2019, and said in mumbled economese that it didn’t do what it was supposed to do.
The one wee bit of good news is the Fed doesn’t think negative interest rates are a sound idea. In fact, since 2014 (the year of the taper tantrum), and probably a bit earlier, the central bank realized that it had driven interest rates too low and it was keen to get out of the corner it had painted itself in. But it has yet to work out how to do that without breaking a lot of china.
One of the many times we debunked the official rationale for negative interest rates was in a 2016 post, Economists Mystified that Negative Interest Rates Aren’t Leading Consumers to Run Out and Spend. We’ll hoist at length:
It has been remarkable to witness the casual way in which central banks have plunged into negative interest rate terrain, based on questionable models. Now that this experiment isn’t working out so well, the response comes troublingly close to, “Well, they work in theory, so we just need to do more or wait longer to see them succeed.”
The particularly distressing part, as a new Wall Street Journal article makes clear, is that the purveyors of this snake oil talked themselves into the insane belief that negative interest rates would induce consumers to run out and spend. From the story:
Two years ago, the European Central Bank cut interest rates below zero to encourage people such as Heike Hofmann, who sells fruits and vegetables in this small city, to spend more.
Policy makers in Europe and Japan have turned to negative rates for the same reason—to stimulate their lackluster economies. Yet the results have left some economists scratching their heads. Instead of opening their wallets, many consumers and businesses are squirreling away more money.
When Ms. Hofmann heard the ECB was knocking rates below zero in June 2014, she considered it “madness” and promptly cut her spending, set aside more money and bought gold. “I now need to save more than before to have enough to retire,” says Ms. Hofmann, 54 years old.
Recent economic data show consumers are saving more in Germany and Japan, and in Denmark, Switzerland and Sweden, three non-eurozone countries with negative rates, savings are at their highest since 1995, the year the Organization for Economic Cooperation and Development started collecting data on those countries. Companies in Europe, the Middle East, Africa and Japan also are holding on to more cash.
The article then discusses that these consumers all went on a saving binge... because demographics! Because central banks did a bad job of PR! Only then does it turn to the idea that the higher savings rates were caused by negative interest rates.
How could they have believed otherwise? Do these economists all have such fat pensions that they have no idea what savings are for, or alternatively, they have their wives handle money?
People save for emergencies and retirement. Economists, who are great proponents of using central bank interest rate manipulation to create a wealth effect, fail to understand that super low rates diminish the wealth of ordinary savers. Few will react the way speculators do and go into risky assets to chase yield. They will stay put and lower their spending to try to compensate for their reduced interest income. Those who are still working will also try to increase their savings balances, since they know their assets will generate very little in the way of income in a zero/negative interest rate environment.
It is apparently difficult for most economists to grasp that negative interest rates reduce the value of those savings to savers by lowering the income on them. Savers are loss averse and thus are very reluctant to spend principal to compensate for reduced income. Given that central banks have driven policy interest rates into negative real interest rate terrain, this isn’t an illogical reading of their situation. Ed Kane has estimated that low interest rates were a $300 billion per year subsidy taken from consumers and given to financial firms in the form of reduced interest income. Since interest rates on the long end of the yield curve have fallen even further, Kane’s estimate is now probably too low.
Back to the current post. Aside from the effect on savings (economists expected negative interest rates to induce savers to dip into their capital to preserve their lifestyles and make up for lost interest income), a second reason negative interest rates hurt, or at least don’t help, spending is by sending a deflationary signal. If things might be cheaper in a year, why buy now?
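The saver arithmetic described here is easy to make concrete: the principal needed to produce a given interest income scales inversely with the rate, so rate cuts push required savings balances up, not spending. A minimal sketch (the target income and rates are hypothetical, chosen only for illustration):

```python
# Principal a saver needs to generate a target annual income from
# interest alone, at different policy-driven rates.
# All figures are hypothetical and for illustration only.

def principal_needed(target_income: float, rate: float) -> float:
    """Principal required so that principal * rate == target_income."""
    if rate <= 0:
        # At zero or negative rates, no amount of principal yields the income.
        return float("inf")
    return target_income / rate

target = 20_000  # desired annual interest income

for rate in (0.04, 0.01, 0.005):
    print(f"{rate:.1%}: {principal_needed(target, rate):,.0f}")
# 4.0%: 500,000
# 1.0%: 2,000,000
# 0.5%: 4,000,000
```

At a 4% rate, $500,000 of savings yields the $20,000 target; at 0.5% the same income requires $4 million, which is why rate cuts can drive loss-averse savers to save more rather than spend.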
The one wee bit of good news is the Fed doesn’t think negative interest rates are a sound idea. In fact, since 2014 (the year of the taper tantrum), and probably a bit earlier, the central bank realized that it had driven interest rates too low and it was keen to get out of the corner it had painted itself in. But it has yet to work out how to do that without breaking a lot of china.
by Yves Smith, Naked Capitalism | Read more:
[ed. Let's hope this gets nipped in the bud. See also: All you need to know about negative interest rates (The Guardian).]
Covid Brought Booze to Your Door—and Made Delivery Worth Billions
The lockdowns early last year were like a cruel reversal of “a guy walks into a bar” jokes for the alcohol industry. Instead of fun scenarios where anything could happen, people were stuck at home, bars were closed, and in the U.S. most consumers had no idea how to buy booze online. Financial results for alcohol companies were constrained, and their supply chains had to be redirected away from bars, sporting events, and concerts to whatever homebound consumers they could reach. There was even a shortage of the aluminum cans needed for some beers as producers scrambled to adjust.
Then, something funny did happen: Alcohol producers, held back from the e-commerce revolution in the U.S. by laws that date to the 1930s, suddenly saw online sales skyrocket. Beverage makers started to open up to technology platforms rolled out by startups such as Thirstie Inc. and Speakeasy Co., and consumers began to catch on that they could get alcohol without venturing out of lockdown. On Feb. 2 one of those upstarts, Drizly Inc., agreed to sell itself to Uber Technologies Inc.—the ride-hailing company that’s ventured into food delivery—for $1.1 billion. (...)
Alcohol may be a more natural fit for online sales than many consumer products. The weight of bottles, cans, and cartons means that some people would rather not carry them home themselves. And the large variety of products and packaging types—more than 1 million—means an online store can offer the kind of inventory a typical retailer can’t. “It’s like books with Amazon; you can build a store that can only exist on the internet,” Drizly CEO Rellas says.
Yet for years, the ability to buy alcohol online in the U.S. has been constrained. In 1933, when Prohibition was repealed, the 21st Amendment broke up the power of bootleggers by dictating that no one party would control producers, distributors, and retailers of alcohol. To complicate matters further, each state adopted its own version of the so-called three-tier system. To this day—with minor exceptions such as small breweries, distillers, and vineyards—alcohol producers can’t connect directly with U.S. consumers but must send their products through separately owned distributors and retailers. That’s caused the industry to miss out on the entire direct-to-consumer revolution that’s changed the way many brands connect to their customers.
“For the most part there’s no such thing as DTC alcohol,” says Franklin Isacson, founding partner of Coefficient Capital, a consumer-goods-focused venture capital firm. “Warby Parker can cut out the middleman and the distributor and give you a better price. But if you go to smirnoff.com, the sale still goes to a distributor, who takes it to a liquor store, who delivers it to you.” (...)
Still, brands such as Haus that can get around the three-tier system are the exception. So digital middlemen have cropped up to try to deliver those kinds of advantages without destroying the three-tier status quo. Aside from Drizly, Speakeasy, and Thirstie, there’s also Bevshop, Cask & Barrel Club, and Passion Spirits.
These companies have to jump through intricate legal hoops to comply with the three-tier regulations. With Drizly (referred to as “Uber for booze,” even before Uber bought it), consumers download its app and add products to a shopping cart, and Drizly appears to deliver them. But Drizly never touches the alcohol, because the drivers are mostly employed by the liquor stores. About 20% of Drizly’s deliveries are by third-party couriers, and Uber will now be one of them, Rellas says. Drizly operates in 32 states with legalized alcohol e-commerce and doesn’t deliver across state lines.
Thirstie, which works with major brands including Bacardi, Jagermeister, Macallan, and those owned by drinks giant Constellation Brands Inc., is known as the “Shopify for booze,” as it uses a model that allows a consumer to look up his favorite brands and creates the illusion that he’s buying from their websites. Instead, the customer is moved to a technology platform powered by Thirstie that orders the bottle from the supplier, which sends it to a distributor, which then sends it to a licensed retailer who makes the delivery to the consumer.
by Tiffany Kary, Bloomberg | Read more:
Image: Brad Trone/Bloomberg Businessweek
Wednesday, February 10, 2021
Elton John, Michael Caine Audition For Covid Vaccine Ad
[ed. "... let the little fellow know he didn't get the job" : )]
Inside the Worst-Hit County in the Worst-Hit State in the Worst-Hit Country
In medicine, when patients face a difficult decision whether to seek aggressive treatment, they are often asked what they are and are not willing to sacrifice. When patients cannot speak for themselves, someone else has to answer for them. This task can tear families apart; there is, for instance, the well-recognized seagull syndrome—in which the family member who lives farthest away from the patient flies into town and craps all over the plan. Designating a decision-maker helps insure that choices will be guided by the patient’s priorities, not anyone else’s.
When an entire community must decide how to tackle a serious problem—must choose what it is and is not willing to sacrifice—matters get more complicated. In business, the decision-maker is generally clear, and, if you don’t like the decision, too bad. The boss can insist on obedience. But that’s not how democracy works. We designate decision-makers, but the community has to live with dissent. This is why businesspeople so often make terrible government leaders. They’ve never had to manage civic conflict and endure unending battles over priorities and limits.
Conflict is also why so many people say they hate politics. We want consensus—badly enough that we convince ourselves that it can be created if we only try hard enough. “Peace is not the absence of conflict, but the ability to cope with it,” Mahatma Gandhi said, getting closer to the truth. (Even Ronald Reagan repeated the sentiment.) Among the questions we now face is that of how our frayed democracy can cope with the conflict required to navigate the global pandemic.
As a country, we still face a long, potholed road. We will soon exceed half a million deaths from covid-19. It’s not inconceivable that we will reach three-quarters of a million or even a million deaths this year; the magnitude of certain dangers is difficult to predict. The world’s uncontrolled circulation of the virus has already bred mutant strains that are markedly more infectious than existing ones. Some have developed the ability to at least partially evade current vaccines, and further mutations may develop that more fully evade the vaccines, requiring updated formulations. Or—as has been our repeated pattern when public-health measures have succeeded in slowing the spread of the virus—we could simply take our foot off the brakes too soon.
These Precious Days
I can tell you where it all started because I remember the moment exactly. It was late and I’d just finished the novel I’d been reading. A few more pages would send me off to sleep, so I went in search of a short story. They aren’t hard to come by around here; my office is made up of piles of books, mostly advance-reader copies that have been sent to me in hopes I’ll write a quote for the jacket. They arrive daily in padded mailers—novels, memoirs, essays, histories—things I never requested and in most cases will never get to. On this summer night in 2017, I picked up a collection called Uncommon Type, by Tom Hanks. It had been languishing in a pile by the dresser for a while, and I’d left it there because of an unarticulated belief that actors should stick to acting. Now for no particular reason I changed my mind. Why shouldn’t Tom Hanks write short stories? Why shouldn’t I read one? Off we went to bed, the book and I, and in doing so put the chain of events into motion. The story had started without my realizing it. The first door opened and I walked through.
But any story that starts will also end. This is the way novelists think: beginning, middle, and end.
In case you haven’t read it, Uncommon Type is a very good book. It would have to be for this story to continue. Had it been a bad book or just a good-enough book, I would have put it down, but page after page it surprised me. Two days later, I sent an endorsement to the editor. I’ve written plenty of jacket quotes in my day, mostly for first-time writers of fiction whom I believed could benefit from the assistance. The thought of Tom Hanks benefiting from my assistance struck me as funny, and then I forgot about it.
Or I would have forgotten about it, except that I got a call from Tom Hanks’s publicist a few weeks later, asking whether I would fly to Washington in October to interview the actor onstage as part of his book tour. As the co-owner of a bookstore, I do this sort of thing, and while I mostly do it in Nashville, where I live, there have certainly been requests interesting enough to get me on a plane. I could have said I was busy writing a novel, and that would have been both ridiculous and true. Tom Hanks needs a favor? Happy to help.
“Do you even realize your life isn’t normal?” Niki said when I announced my trip. Niki works at the bookstore. She has opinions about my life. “You understand that other people don’t live this way?”
How other people live is pretty much all I think about. Curiosity is the rock upon which fiction is built. But for all the times people have wanted to tell me their story because they think it would make a wonderful novel, it pretty much never works out. People are not characters, no matter how often we tell them they are; conversations are not dialogue; and the actions of our days don’t add up to a plot. In life, time runs together in its sameness, but in fiction time is condensed—one action springboards into another, greater action. Cause and effect are so much clearer in novels than they are in life. You might not see how everything threads together as you read along, but when you look back from the end of the story, the map becomes clear. Maybe Niki was right about my life being different, but maybe that’s because I tend to think of things in terms of story: I pick up a book and read it late into the night, and because I like the book, I wind up on a flight to D.C.
I went by myself. I was going only for the night. I walked from my hotel to the theater and showed my ID to a guard who then led me to the crowded greenroom. I met the hosts of the event and a few people who worked for them. I was introduced to Tom Hanks’s editor, Tom Hanks’s agent, his publicist, his assistant, Tom Hanks himself. He was tall and slim, happily at ease, answering questions, signing books. Everyone was laughing at his jokes because his jokes were funny. The people around him arranged themselves into different configurations so that the assistant could take their pictures, each one handing over his or her cell phone. Audience questions arrived on index cards, were read aloud and sorted through. The ones Tom Hanks approved of were handed to me. I would ask them at the end of the event, depending on how much time we had. The greenroom crowd was then escorted to their seats, and we were ushered to the dark place behind the curtain—Tom Hanks, his assistant, and I. The assistant was a tiny woman wearing a fitted black-velvet evening coat embroidered with saucer-size peonies. “Such a beautiful coat,” I said to her. We’d been introduced when I arrived but I didn’t remember her name.
The experience of waiting backstage before an event is always the same. I can never quite hear what the person making the introduction is saying, and for a moment I wouldn’t be able to tell you the name of the theater or even the city I was in. There’s usually a guy working the light board and the mics who talks to me for a minute, though tonight the guy talking was Tom Hanks. He wanted to know whether I liked owning a bookstore. He was thinking about opening one himself. Could we talk about it sometime? Of course we could. We were about to go on.
“I don’t have any questions,” I whispered in the darkness. “I find these things go better if you just wing it.” Then the two of us stepped out into the blinding light.
As soon as the roaring thunder of approval eased, he pointed at me and said, “She doesn’t have any questions.”
***
When the event was over and more pictures had been taken and everyone had said how much they’d enjoyed absolutely everything, Tom Hanks and his assistant and I found ourselves alone again, standing at the end of a long cement hallway by a stage door, saying good night and goodbye. A car was coming to pick them up.
“Come on, Sooki,” he said, his voice gone grand. “Let’s go back to the hotel. I need to find a Belvedere martini.”
I hoped he would ask me to join them. I’d spent two hours on a stage talking to Tom Hanks, and now I wanted to talk to Sooki. Sooki of the magnificent coat. She had said almost nothing and yet my eye kept going to her, the way one’s eye goes to the flash of iridescence on a hummingbird’s throat. I thought about how extraordinarily famous you would have to be to have someone like that working as your assistant.
Neither of them asked me out for drinks.
by Ann Patchett, Harper's | Read more:
Image: Sooki Raphael. Sparky Walks the Neighborhood with Ann, Nashville 2020
The Next Act For Messenger RNA Could Be Bigger Than Covid Vaccines
On December 23, as part of a publicity push to encourage people to get vaccinated against covid-19, the University of Pennsylvania released footage of two researchers who developed the science behind the shots, Katalin Karikó and Drew Weissman, getting their inoculations. The vaccines, icy concoctions of fatty spheres and genetic instructions, used a previously unproven technology based on messenger RNA and had been built and tested in under a year, thanks to discoveries the pair made starting 20 years earlier.
In the silent promotional clip, neither one speaks or smiles as a nurse inserts the hypodermic into their arms. I later asked Weissman, who has been a physician and working scientist since 1987, what he was thinking in that moment. “I always wanted to develop something that helps people,” he told me. “When they stuck that needle in my arm, I said, ‘I think I’ve finally done it.’”
The infection has killed more than 2 million people globally, including some of Weissman’s childhood friends. So far, the US vaccine campaign has relied entirely on shots developed by Moderna Therapeutics of Cambridge, Massachusetts, and BioNTech in Mainz, Germany, in partnership with Pfizer. Both employ Weissman’s discoveries. (Weissman’s lab gets funding from BioNTech, and Karikó now works at the company.)
Unlike traditional vaccines, which use live viruses, dead ones, or bits of the shells that viruses come cloaked in to train the body’s immune system, the new shots use messenger RNA—the short-lived middleman molecule that, in our cells, conveys copies of genes to where they can guide the making of proteins.
The message the mRNA vaccine adds to people’s cells is borrowed from the coronavirus itself—the instructions for the crown-like protein, called spike, that it uses to enter cells. This protein alone can’t make a person sick; instead, it prompts a strong immune response that, in large studies concluded in December, prevented about 95% of covid-19 cases.
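That 95% figure is a standard efficacy calculation: one minus the ratio of attack rates in the vaccinated and placebo arms. A quick sketch using the widely reported case split from the Pfizer/BioNTech phase 3 readout (8 cases among the vaccinated versus 162 among placebo recipients; the per-arm enrollment numbers below are rounded and should be treated as illustrative):

```python
# Vaccine efficacy = 1 - (attack rate, vaccinated) / (attack rate, placebo).
# Case counts follow the widely reported phase 3 readout (8 vs 162);
# the per-arm enrollment numbers are rounded and illustrative.

def efficacy(cases_vax: int, n_vax: int,
             cases_placebo: int, n_placebo: int) -> float:
    attack_vax = cases_vax / n_vax
    attack_placebo = cases_placebo / n_placebo
    return 1 - attack_vax / attack_placebo

ve = efficacy(cases_vax=8, n_vax=21_720, cases_placebo=162, n_placebo=21_728)
print(f"{ve:.1%}")  # 95.1%
```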
Beyond potentially ending the pandemic, the vaccine breakthrough is showing how messenger RNA may offer a new approach to building drugs.
In the near future, researchers believe, shots that deliver temporary instructions into cells could lead to vaccines against herpes and malaria, better flu vaccines, and, if the covid-19 germ keeps mutating, updated coronavirus vaccinations, too.
But researchers also see a future well beyond vaccines. They think the technology will permit cheap gene fixes for cancer, sickle-cell disease, and maybe even HIV.
For Weissman, the success of covid vaccines isn’t a surprise but a welcome validation of his life’s work. “We have been working on this for over 20 years,” he says. “We always knew RNA would be a significant therapeutic tool.”
Perfect timing
Despite those two decades of research, though, messenger RNA had never been used in any marketed drug before last year. (...)
Unlike most biotech drugs, RNA is not made in fermenters or living cells—it’s produced inside plastic bags of chemicals and enzymes. Because there’s never been a messenger RNA drug on the market before, there was no factory to commandeer and no supply chain to call on.
When I spoke to Moderna CEO Stéphane Bancel in December, just before the US Food and Drug Administration authorized his company’s vaccine, he was feeling confident about the shot but worried about making enough of it. Moderna had promised to make up to a billion doses during 2021. Imagine, he said, that Henry Ford was rolling the first Model T off the production line, only to be told the world needed a billion of them.
Bancel calls the way covid-19 arrived just as messenger RNA technology was ready an “aberration of history.”
In other words, we got lucky.
Human bioreactors
The first attempt to use synthetic messenger RNA to make an animal produce a protein was in 1990. It worked, but a big problem soon arose. The injections made mice sick. “Their fur gets ruffled. They lose weight, stop running around,” says Weissman. Give them a large dose, and they’d die within hours. “We quickly realized that messenger RNA was not usable,” he says.
The culprit was inflammation. Over a few billion years, bacteria, plants, and mammals have all evolved to spot the genetic material from viruses and react to it. Weissman and Karikó’s next step, which “took years,” he says, was to identify how cells were recognizing the foreign RNA.
As they found, cells are packed with sensing molecules that distinguish your RNA from that of a virus. If these molecules see viral genes, they launch a storm of immune molecules called cytokines that hold the virus at bay while your body learns to cope with it. “It takes a week to make an antibody response; what keeps you alive for those seven days is these sensors,” Weissman says. But too strong a flood of cytokines can kill you.
by Antonio Regalado, MIT Technology Review | Read more:
In the silent promotional clip, neither one speaks or smiles as a nurse inserts the hypodermic into their arms. I later asked Weissman, who has been a physician and working scientist since 1987, what he was thinking in that moment. “I always wanted to develop something that helps people,” he told me. “When they stuck that needle in my arm, I said, ‘I think I’ve finally done it.’”
The infection has killed more than 2 million people globally, including some of Weissman’s childhood friends. So far, the US vaccine campaign has relied entirely on shots developed by Moderna Therapeutics of Cambridge, Massachusetts, and BioNTech in Mainz, Germany, in partnership with Pfizer. Both employ Weissman’s discoveries. (Weissman’s lab gets funding from BioNTech, and Karikó now works at the company.)
Unlike traditional vaccines, which use live viruses, dead ones, or bits of the shells that viruses come cloaked in to train the body’s immune system, the new shots use messenger RNA—the short-lived middleman molecule that, in our cells, conveys copies of genes to where they can guide the making of proteins.
The message the mRNA vaccine adds to people’s cells is borrowed from the coronavirus itself—the instructions for the crown-like protein, called spike, that it uses to enter cells. This protein alone can’t make a person sick; instead, it prompts a strong immune response that, in large studies concluded in December, prevented about 95% of covid-19 cases.
Beyond potentially ending the pandemic, the vaccine breakthrough is showing how messenger RNA may offer a new approach to building drugs.
In the near future, researchers believe, shots that deliver temporary instructions into cells could lead to vaccines against herpes and malaria, better flu vaccines, and, if the covid-19 germ keeps mutating, updated coronavirus vaccinations, too.
But researchers also see a future well beyond vaccines. They think the technology will permit cheap gene fixes for cancer, sickle-cell disease, and maybe even HIV.
For Weissman, the success of covid vaccines isn’t a surprise but a welcome validation of his life’s work. “We have been working on this for over 20 years,” he says. “We always knew RNA would be a significant therapeutic tool.”
Perfect timing
Despite those two decades of research, though, messenger RNA had never been used in any marketed drug before last year. (...)
Unlike most biotech drugs, RNA is not made in fermenters or living cells—it’s produced inside plastic bags of chemicals and enzymes. Because there’s never been a messenger RNA drug on the market before, there was no factory to commandeer and no supply chain to call on.
When I spoke to Moderna CEO Stéphane Bancel in December, just before the US Food and Drug Administration authorized his company’s vaccine, he was feeling confident about the shot but worried about making enough of it. Moderna had promised to make up to a billion doses during 2021. Imagine, he said, that Henry Ford was rolling the first Model T off the production line, only to be told the world needed a billion of them.
Bancel calls the way covid-19 arrived just as messenger RNA technology was ready an “aberration of history.”
In other words, we got lucky.
Human bioreactors
The first attempt to use synthetic messenger RNA to make an animal produce a protein was in 1990. It worked, but a big problem soon arose. The injections made mice sick. “Their fur gets ruffled. They lose weight, stop running around,” says Weissman. Give them a large dose, and they’d die within hours. “We quickly realized that messenger RNA was not usable,” he says.
The culprit was inflammation. Over a few billion years, bacteria, plants, and mammals have all evolved to spot the genetic material from viruses and react to it. Weissman and Karikó’s next step, which “took years,” he says, was to identify how cells were recognizing the foreign RNA.
As they found, cells are packed with sensing molecules that distinguish your RNA from that of a virus. If these molecules see viral genes, they launch a storm of immune molecules called cytokines that hold the virus at bay while your body learns to cope with it. “It takes a week to make an antibody response; what keeps you alive for those seven days is these sensors,” Weissman says. But too strong a flood of cytokines can kill you.
The eureka moment was when the two scientists determined they could avoid the immune reaction by using chemically modified building blocks to make the RNA. It worked. Soon after, in Cambridge, a group of entrepreneurs began setting up Moderna Therapeutics to build on Weissman’s insight.
Vaccines were not their focus. At the company’s founding in 2010, its leaders imagined they might be able to use RNA to replace the injected proteins that make up most of the biotech pharmacopoeia, essentially producing drugs inside the patient’s own cells from an RNA blueprint. “We were asking, could we turn a human into a bioreactor?” says Noubar Afeyan, the company’s cofounder and chairman and the head of Flagship Pioneering, a firm that starts biotech companies.
If so, the company could easily name 20, 30, or even 40 drugs that would be worth replacing. But Moderna was struggling with how to get the messenger RNA to the right cells in the body, and without too many side effects. Its scientists were also learning that administering repeat doses, which would be necessary to replace biotech blockbusters like a clotting factor that’s given monthly, was going to be a problem. “We would find it worked once, then the second time less, and then the third time even lower,” says Afeyan. “That was a problem and still is.”
Moderna pivoted. What kind of drug could you give once and still have a big impact? The answer eventually became obvious: a vaccine. With a vaccine, the initial supply of protein would be enough to train the immune system in ways that could last years, or a lifetime. (...)
Pivoting to vaccines did have a drawback for Moderna. Andrew Lo, a professor at MIT’s Laboratory for Financial Engineering, says that most vaccines lose money. The reason is that many shots sell for a “fraction of their economic value.” Governments will pay $100,000 for a cancer drug that adds a month to a person’s life but only want to pay $5 for a vaccine that can protect against an infectious disease for good. Lo calculated that vaccine programs for emerging threats like Zika or Ebola, where outbreaks come and go, would deliver a -66% return on average. “The economic model for vaccines is broken,” he says.
On the other hand, vaccines are more predictable. When Lo’s team analyzed thousands of clinical trials, they found that vaccine programs frequently succeed. Around 40% of vaccine candidates in efficacy tests, called phase 2 clinical trials, proved successful, a rate 10 times that of cancer drugs.
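Lo’s reasoning boils down to a simple expected-value calculation: a program’s return is the success probability times the payoff if the shot reaches market, less the development cost. A minimal sketch, in which every figure is a hypothetical placeholder (chosen only to land near the -66% average the article cites, not taken from Lo’s study):

```python
# Toy expected-return model for a vaccine program. All inputs below are
# hypothetical placeholders, not data from Lo's analysis; the point is
# only that low prices plus sporadic outbreaks can push returns negative.
def expected_return(p_success: float, revenue_if_success: float, cost: float) -> float:
    """Expected net return as a fraction of the cost invested."""
    expected_payoff = p_success * revenue_if_success
    return (expected_payoff - cost) / cost

# Hypothetical emerging-threat program: reasonable odds of technical
# success, but the outbreak may fade before the vaccine earns much.
r = expected_return(p_success=0.4, revenue_if_success=85.0, cost=100.0)
print(f"{r:.0%}")  # -> -66%
```

Even a 40% success rate cannot rescue the economics when the payoff conditional on success is below the cost of development.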
Adding to mRNA vaccines’ chance of success was a lucky break. Injected into the arm, the nanoparticles holding the critical instructions seemed to home in on dendritic cells, the exact cell type whose job is to train the immune system to recognize a virus. What’s more, something about the particles put the immune system on alert. It wasn’t planned, but they were working as what’s called a vaccine adjuvant. “We couldn’t believe the effect,” says Weissman.
Vaccines offered Moderna’s CEO, Bancel, a chance to advance a phalanx of new products. Since every vaccine would use the same nanoparticle carrier, they could be rapidly reprogrammed, as if they were software. (Moderna had even trademarked the name “mRNA OS,” for operating system.) “The way we make mRNA for one vaccine is exactly the same as for another,” he says. “Because mRNA is an information molecule, the difference between our covid vaccine, Zika vaccine, and flu vaccine is only the order of the nucleotides.”
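Bancel’s claim that two vaccines differ “only in the order of the nucleotides” reflects how mRNA carries information: the sequence is read three bases at a time, and each codon specifies one amino acid. A toy sketch, using a made-up sequence and a small subset of the standard genetic code (not any real vaccine sequence):

```python
# Toy illustration: mRNA as an "information molecule". The table is a
# small subset of the standard genetic code; the sequence is invented.
CODON_TABLE = {
    "AUG": "Met",  # start codon
    "UUU": "Phe", "GGC": "Gly", "UCU": "Ser",
    "GCA": "Ala", "UAA": "STOP",
}

def translate(mrna: str) -> list[str]:
    """Read codons left to right until a stop codon (or the sequence ends)."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino = CODON_TABLE.get(mrna[i:i + 3], "???")
        if amino == "STOP":
            break
        protein.append(amino)
    return protein

print(translate("AUGUUUGGCUCUGCAUAA"))  # -> ['Met', 'Phe', 'Gly', 'Ser', 'Ala']
```

Swap the order of the letters and the same machinery yields a different protein, which is why a single manufacturing process can, in principle, be “reprogrammed” for covid, Zika, or flu.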
Image: Selman Design
Labels:
Biology,
Drugs,
Medicine,
Science,
Technology
Tuesday, February 9, 2021
The Real Origins of the Religious Right
One of the most durable myths in recent history is that the religious right, the coalition of conservative evangelicals and fundamentalists, emerged as a political movement in response to the U.S. Supreme Court’s 1973 Roe v. Wade ruling legalizing abortion. The tale goes something like this: Evangelicals, who had been politically quiescent for decades, were so morally outraged by Roe that they resolved to organize in order to overturn it.
This myth of origins is oft repeated by the movement’s leaders. In his 2005 book, Jerry Falwell, the firebrand fundamentalist preacher, recounts his distress upon reading about the ruling in the Jan. 23, 1973, edition of the Lynchburg News: “I sat there staring at the Roe v. Wade story,” Falwell writes, “growing more and more fearful of the consequences of the Supreme Court’s act and wondering why so few voices had been raised against it.” Evangelicals, he decided, needed to organize.
Some of these anti-Roe crusaders even went so far as to call themselves “new abolitionists,” invoking their antebellum predecessors who had fought to eradicate slavery.
But the abortion myth quickly collapses under historical scrutiny. In fact, it wasn’t until 1979—a full six years after Roe—that evangelical leaders, at the behest of conservative activist Paul Weyrich, seized on abortion not for moral reasons, but as a rallying cry to deny President Jimmy Carter a second term. Why? Because the anti-abortion crusade was more palatable than the religious right’s real motive: protecting segregated schools. So much for the new abolitionism.
***
Today, evangelicals make up the backbone of the pro-life movement, but it hasn’t always been so. Both before and for several years after Roe, evangelicals were overwhelmingly indifferent to the subject, which they considered a “Catholic issue.” In 1968, for instance, a symposium sponsored by the Christian Medical Society and Christianity Today, the flagship magazine of evangelicalism, refused to characterize abortion as sinful, citing “individual health, family welfare, and social responsibility” as justifications for ending a pregnancy. In 1971, delegates to the Southern Baptist Convention in St. Louis, Missouri, passed a resolution encouraging “Southern Baptists to work for legislation that will allow the possibility of abortion under such conditions as rape, incest, clear evidence of severe fetal deformity, and carefully ascertained evidence of the likelihood of damage to the emotional, mental, and physical health of the mother.” The convention, hardly a redoubt of liberal values, reaffirmed that position in 1974, one year after Roe, and again in 1976.

When the Roe decision was handed down, W. A. Criswell, the Southern Baptist Convention’s former president and pastor of First Baptist Church in Dallas, Texas—also one of the most famous fundamentalists of the 20th century—was pleased: “I have always felt that it was only after a child was born and had a life separate from its mother that it became an individual person,” he said, “and it has always, therefore, seemed to me that what is best for the mother and for the future should be allowed.”
Although a few evangelical voices, including Christianity Today magazine, mildly criticized the ruling, the overwhelming response was silence, even approval. Baptists, in particular, applauded the decision as an appropriate articulation of the division between church and state, between personal morality and state regulation of individual behavior. “Religious liberty, human equality and justice are advanced by the Supreme Court abortion decision,” wrote W. Barry Garrett of Baptist Press.
***
So what then were the real origins of the religious right? It turns out that the movement can trace its political roots back to a court ruling, but not Roe v. Wade.
by Randall Balmer, Politico | Read more:
Image: uncredited