Sunday, April 25, 2021

Recommended Readings

Prologue

When I finished writing “Some things I’ve learned in college,” I thought it was one of my least interesting posts to date. Surprisingly, it was one of my most viewed and has generated the most discussion of all (as measured by Reddit comments, here).

As has been noted by many a wise sage, page views and comments are not a perfect measure of a piece of writing’s quality, or overall value. However, the value of writing to someone who never sees it is zero, so the two do have something to do with each other.

In hindsight, this shouldn’t have been so surprising. Of course I already know what I’ve learned in college, so I’m not going to find it particularly interesting to write down. On the other hand, I often learn quite a bit by writing blog posts, both through object-level research and simply by spending time thinking about a topic.

But, of course, no one else knows what I learned in college. So, I am currently trying to consider which aspects of my own life, despite being “obvious” to me, others might find interesting. The lowest hanging fruit is media recommendations. Tons of blogs have lists of favorite books, articles, or other blogs, and, as I’ve noted before, I spend a little too much time listening to audiobooks and podcasts.

So, here are some things I recommend you read or listen to. But first, how to listen to them:

Programs

Pay especially close attention if, like me, you prefer listening to reading.

Libby
  • A gem of the internet: tens of thousands of free books and audiobooks, and not just boring old ones in the public domain.
  • You need a library card, but getting one took only about five minutes when I was helping someone else set up the app.
  • Digital copies are limited for most books, so the popular ones can take a while. But there are always quite a few good nonfiction audiobooks available, and some have unlimited copies so there is never a wait.
  • Also, it lets you adjust the reading speed in 5% increments (1x, 1.05x, 1.1x, …), which is surprisingly useful. Any app that still limits you to 1.25x or 1.5x speed needs to learn this lesson.
  • Most of the books below I listened to on Libby, so I won’t bother finding the link to them. Search for “Libby” in the App Store, and then search for the book there.
  • I have no idea why more people don’t know about this. Spread the word!
  • Super clunky 90s-looking website that makes up for aesthetics with utility.
  • Input a list of books you like, and get an instant list of recommendations.
  • There are a million programs for saving links that you want to (i.e. will probably never) read later.
  • But its secondary function is awesome: the iOS app automatically generates an audio recording of any article you save. Not ideal for pages with lots of graphics or important formatting, but super convenient for walls of text.
  • For some reason, the Mac version of the app doesn’t have this feature. Damn.
  • This is my go-to text-to-voice program for miscellaneous articles I don’t need to focus on super well.
Speechify
  • Nice, free text-to-voice software.
  • The good: super fine-grained speed adjustment up to incomprehensibly fast, a pretty good automated reading voice, and it highlights words as it reads them.
  • The bad: kinda glitchy, for me anyway. It just stops reading once in a while, and copy-and-pasted text is sometimes hard to edit.
  • I use this one for longer texts (long articles, entire books) that don’t have important graphics. I’ll often read and listen at the same time if it’s important.
  • My go-to text-to-speech for reading shorter webpages or those with important graphics.
  • The good: reads directly from a webpage so you don’t have to switch into another program; much better for pages with important graphics. Highlights sentences as they are read.
  • The bad: the fastest reading speed isn’t that fast. It’s OK, but I could see someone with a better-oiled brain than my own being left unsatisfied. Also kinda glitchy: it sometimes takes me back to the top of the article after I pause for more than a few seconds.
Books

Cream of the crop: my strongest recommendations

The Master and His Emissary: The Divided Brain and the Making of the Western World by Iain McGilchrist.
  • Perhaps my strongest recommendation on the list. Completely worldview-shifting, with implications for every facet of human life, psychology, culture, and society, not to mention philosophy and artificial intelligence. Deserves a careful read by virtually everyone.
  • Central claim: our two brain hemispheres process information in fundamentally different ways, which correspond to competing worldviews, conscious experiences, ontologies and epistemologies.
  • Fair warning: a very dense book. I did not listen to this one, and doubt that I could have. Listening < reading << careful reading with notes.
The Precipice by Toby Ord
  • A last-minute addition to the list: I started reading it after beginning to write this post and am now nearly finished.
  • Basic claim: humanity could have an awesome future, but there’s a substantial chance (around one in six, according to Ord) we’ll suffer an “existential catastrophe” — basically extinction or something similar — within the next century.
  • Ord is meticulous and rigorous, considering natural risks like supervolcano explosions, existing anthropogenic risks like nuclear war, and risks from future technologies like artificial intelligence. He considers weird anthropic principle observational biases, what conclusions we can draw from our past survival, the implications of risks being correlated or anti-correlated, and more.
  • It’s worth a read if only as an exemplar of what a really earnest (and IMO successful) effort to answer one particular question (namely, determining the probability of humanity losing its potential in the next century) looks like.
Amusing Ourselves to Death by Neil Postman
  • Written pre-internet and around the peak of television’s cultural dominance, the book is an unapologetic diatribe against TV as a source of information.
  • Main claim: whereas the written word is a medium optimized for transmission of a particular set of ideas, or “rational argumentation,” TV encourages information—including news and ‘educational’ programming—to be packaged as entertainment. The result is a culture with lower quality discourse and shorter attention spans.
  • Although TV has ceded its hegemony to the internet, the larger point remains as valid as ever: a medium of communication (writing, speaking, TV, Twitter) has not only first-order consequences on the information being transmitted, but far-reaching second-order consequences on society at large.
  • The most memorable part of the book is Postman’s description of the Lincoln-Douglas debates. The striking thing is not that the politicians debated for three hours using complex sentence structures and forms of argumentation, but that completely normal people voluntarily and enthusiastically watched the whole thing! Not scholars or elites - just regular old farmers and blacksmiths or whatever. Not going to lie, this made me jealous. Like many of us in the internet age, I wish my attention were stronger and more robust. Despite my being more educated than most of that audience, random 1860s farmers apparently had much stronger attention spans than I do.
The Uninhabitable Earth by David Wallace-Wells
  • I don’t usually read books that are preaching to the choir (i.e. just convincing me of what I already believe), but even we follow-the-science liberals and progressives probably tend to underrate the importance of climate change. However bad you think it is going to be, it will probably be worse.
  • Also, Wallace-Wells’ writing itself is off-the-charts eloquent and poetic. Even if you think climate change is a Chinese hoax, this book is valuable if only as an exemplar of poetic prose.
by Aaron Bergman, Aaron's Blog | Read more:

College Math

via:
[ed. Good point.]

Saturday, April 24, 2021

Why Are Glasses So Expensive?

It’s a question I get asked frequently, most recently by a colleague who was shocked to find that his new pair of prescription eyeglasses cost about $800.

Why are these things so damn expensive?

The answer: Because no one is doing anything to prevent a near-monopolistic, $100-billion industry from shamelessly abusing its market power.

Prescription eyewear represents perhaps the single biggest mass-market consumer ripoff to be found.

The stats tell the whole story.
  • The Vision Council, an optical industry trade group, estimates that about three-quarters of U.S. adults use some sort of vision correction. About two-thirds of that number wear eyeglasses.
  • That’s roughly 126 million people, which represents some pretty significant economies of scale.
  • The average cost of a pair of frames is $231, according to VSP, the leading provider of employer eye care benefits.
  • The average cost of a pair of single-vision lenses is $112. Progressive, no-line lenses can run twice that amount.
  • The true cost of a pair of acetate frames — three pieces of plastic and some bits of metal — is as low as $10, according to some estimates. Check out the prices of Chinese designer knockoffs available online.
  • Lenses require precision work, but they are almost entirely made of plastic and almost all production is automated.
The bottom line: You’re paying a markup on glasses that would make a luxury car dealer blush, with retail costs from start to finish bearing no relation to reality. (...)
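[ed. For a rough sense of scale, here’s a quick back-of-the-envelope calculation using the article’s figures. It’s only a sketch: the $10 production cost is the article’s low-end estimate, and the adult-population number below is my own round assumption.]

```python
# Back-of-the-envelope math using the article's figures; the adult-population
# number is an added assumption, not from the article.
us_adults = 252_000_000          # approximate U.S. adult population (assumption)
share_with_correction = 0.75     # ~three-quarters use some form of vision correction
share_wearing_glasses = 2 / 3    # ~two-thirds of those wear eyeglasses

wearers = us_adults * share_with_correction * share_wearing_glasses
print(f"Estimated eyeglass wearers: {wearers / 1e6:.0f} million")   # ~126 million

avg_frame_price = 231            # average retail frame price (VSP)
avg_lens_price = 112             # average single-vision lens price
est_frame_cost = 10              # low-end estimated production cost of acetate frames

print(f"Average retail total, frames plus lenses: ${avg_frame_price + avg_lens_price}")
markup = (avg_frame_price - est_frame_cost) / est_frame_cost
print(f"Implied markup on the frames alone: {markup:.0%}")          # ~2,200%
```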

I reached out to the Vision Council for an industry perspective on pricing. The group describes itself as “a nonprofit organization serving as a global voice for eyewear and eyecare.”

But after receiving my email asking why glasses cost so much, Kelly Barry, a spokeswoman for the Vision Council, said the group “is unable to participate in this story at this time.”

I asked why. She said the Vision Council, a global voice for eyewear and eyecare, prefers to focus on “health and fashion trend messaging.”

And because it represents so many different manufacturers and brands, she said, it’s difficult for the association “to make any comments on pricing.”

Which is to say, don’t worry your pretty head.

What the Vision Council probably didn’t want to get into is the fact that for years a single company, Luxottica, has controlled much of the eyewear market. If you wear designer glasses, there’s a very good chance you’re wearing Luxottica frames.

Its owned and licensed brands include Armani, Brooks Brothers, Burberry, Chanel, Coach, DKNY, Dolce & Gabbana, Michael Kors, Oakley, Oliver Peoples, Persol, Polo Ralph Lauren, Ray-Ban, Tiffany, Valentino, Vogue and Versace.

Italy’s Luxottica also runs EyeMed Vision Care, LensCrafters, Pearle Vision, Sears Optical, Sunglass Hut and Target Optical.

Just pause to appreciate the lengthy shadow this one company casts over the vision care market. You go into a LensCrafters retail outlet, where the salesperson shows you Luxottica frames under various names, and then the company pays itself when you use your EyeMed insurance.

A very sweet deal.

by David Lazarus, LA Times |  Read more:
Image: Getty
[ed. Repost. I bought some sunglasses last week and wondered why nothing ever changed after this exposé. See also: this 60 Minutes segment on Luxottica.]

Nancy Wilson Tribute to Eddie Van Halen "4 Edward"

"Sleepminting" Exposes Vulnerability of the NFT Market (and Other Insights)

In the opening days of April, an artist operating under the pseudonym Monsieur Personne (“Mr. Nobody”) tried to short-circuit the NFT hype machine by unleashing “sleepminting,” a process that complicates, if not corrodes, one of the value propositions underlying non-fungible tokens. His actions raise thorny questions about everything from coding, to copyright law, to consumer harm. Most importantly, though, they indicate that the market for crypto-collectibles may be scaling up faster than the technological foundation can support.

Debuted as part of an ongoing project titled NFTheft, sleepminting serves as a benevolent but alarming crypto-counterfeiting exercise. It aims to show that an artist can be made to unconsciously assert authorship on the Ethereum blockchain just as surely as a sleepwalking disorder can compel someone to waltz out of their bedroom while in a deep doze.

Remember, to “mint” an NFT means to register a particular user as its creator and initial owner. Theoretically, this becomes the first link in a verified, unbreakable chain of custody tethered to an NFT for the life of the underlying blockchain network. Thanks to this perfectly complete, perfectly secure, and eternally checkable data record, the argument goes, potential buyers can trust non-fungible tokens without necessarily having to trust their owners or sellers. These traits add a valuable layer of security that traditional artworks could never rival with their eternally dubious off-chain certificates of authenticity and provenance documents.

Personne may have found a way to dynamite this argument for much of the art NFT market. Sleepminting enables him to mint NFTs for, and to, the crypto wallets of other artists, then transfer ownership back to himself without their consent or knowing participation. Nevertheless, each of these transactions appears as legitimate on the blockchain record as if the unwitting artist had initiated them on their own, opening up the prospect of sophisticated fraud on a mass scale.

To prove his point, on April 4, Personne sleepminted a supposed “second edition” of Beeple’s record-smashing Everydays: The First 5,000 Days, the digital work and accompanying token that sold for a vertigo-inducing $69.3 million via Christie’s less than a month earlier. (My emails to Beeple and his publicist about the situation went unanswered.)

In our ensuing email exchange, Personne claimed he then gifted the sleepminted Beeple (Token ID 40914, for the real crypto-heads) to a user with the suspiciously appropriate handle Arsène Lupin, an homage to the famous “gentleman thief” created by Maurice Leblanc and recently reincarnated in a hit Netflix show. (Personne denied he was Lupin to the blog Nifty News.) Lupin then turned around and offered the sleepminted Beeple for sale on Rarible and OpenSea, two of the largest NFT marketplaces—both of which eventually deactivated the listings. (Neither Rarible nor OpenSea replied to my emails seeking comment.)

Why publicize any of this, you ask? Personne essentially sees himself as a so-called white hat hacker, meaning an ethics-driven coder who exploits technological flaws strictly to demonstrate how they can be fixed. He is a staunch believer in the potential of NFTs and crypto. However, he believes major “security issues and vulnerabilities” in smart contracts have been glossed over to make way for the gold rush. He also claimed to have launched the NFTheft project only after the crypto-community largely ignored or derided his attempts to spark earnest conversation.

“The goal I want to achieve with this is to take the most expensive and historic NFT, and show that if it is not protected, how can we guarantee that any NFT is safe from intentional malice, fraud, forgeries, theft, etc.?” he wrote.

Although the sleepminting saga is hairier than a Haight-Ashbury commune, I think we can chop through the overgrowth using two questions with serious stakes for different participants in the NFT market.

1. What does sleepminting tell us about the technological vulnerabilities of art-related NFTs?

Short Answer

The main smart contract driving the market might not be smart enough to secure the frenzied level of buying and selling we’ve seen in 2021.

Longer Answer

What’s clear is that Personne is exploiting a flaw in the standard ERC721 smart contract, which is used by the overwhelming majority of art-related NFTs transacting on the Ethereum blockchain. But it is not an easy-to-see flaw, and the effect is not being faked by Photoshop wizardry or some other non-crypto chicanery; the sleepminted Beeple really is minted in Beeple’s wallet, it really is transferred elsewhere afterward—and both of those transactions are memorialized forever on the blockchain.
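[ed. To make the general idea concrete, here is a minimal toy model in Python. It is not the actual ERC721 contract, nor Personne’s exploit; the class, function names, and wallet addresses are invented. It only illustrates the category of flaw being described: if a ledger’s mint and transfer routines trust caller-supplied data instead of verifying who actually initiated the transaction, the resulting provenance record can show an artist minting and relinquishing a token they never touched.]

```python
# Toy, in-memory stand-in for an NFT ledger -- purely illustrative, not ERC721.
from dataclasses import dataclass, field


@dataclass
class ToyNFTLedger:
    owners: dict = field(default_factory=dict)   # token_id -> current owner
    events: list = field(default_factory=list)   # append-only log, akin to Transfer events

    def flawed_mint(self, caller: str, token_id: int, apparent_creator: str) -> None:
        # The flaw: the record trusts the caller-supplied `apparent_creator`
        # and never checks that it matches `caller`.
        self.owners[token_id] = apparent_creator
        self.events.append(("MINT", apparent_creator, token_id))

    def flawed_transfer(self, operator: str, token_id: int, to: str) -> None:
        # A correct contract would require `operator` to be the owner or an
        # approved address; this one performs no such check.
        self.owners[token_id] = to
        self.events.append(("TRANSFER", operator, to, token_id))


ledger = ToyNFTLedger()
# The attacker "mints" token 40914 so the record names the artist as creator...
ledger.flawed_mint(caller="attacker.eth", token_id=40914,
                   apparent_creator="famous-artist.eth")
# ...then moves it into a wallet the attacker controls.
ledger.flawed_transfer(operator="attacker.eth", token_id=40914, to="attacker.eth")

for event in ledger.events:
    print(event)   # the log reads as though the artist minted and gave up the token
```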

by Tim Schneider, ArtNet | Read more:
Image: Beeple, Everydays – The First 5000 Days NFT

Maggie Cheung with Tony Leung - Wong Kar Wai
via:

The Journalist Who Fell Asleep on Prince

I remember when Neal Karlen was invited to interview Prince, the most famous rock star on the planet, for a Rolling Stone cover story in 1985. Neal had been invited by Prince himself, who at that point had maintained a three-year media silence, a veritable epoch in rock ’n’ roll time. The era began with the release of Prince’s first top 10 hit, “Little Red Corvette,” followed by “When Doves Cry,” the first No. 1 single from the movie Purple Rain, then the release of the movie itself, whose soundtrack took only eight days to reach No. 1, where it stayed for an astonishing 24 weeks. It continued with another No. 1 hit from the movie, “Let’s Go Crazy,” by Prince and The Revolution, and if that weren’t enough, shortly after Prince picked up his Oscar for best original soundtrack for Purple Rain, his seventh album, Around the World in a Day, featuring “Raspberry Beret” and “Pop Life,” also went to No. 1 in the United States.

By then, press was possibly beside the point. After all, in 1980, Robert Christgau, in his review of Prince’s Dirty Mind for the Village Voice, declared: “Mick Jagger should fold up his penis and go home.”

In any event, the reason I knew about Neal’s summoning by “the dynamic funk enigma that is Prince” is that we were both then toiling at Newsweek, cub reporters who had tumbled out of our respective college dorm rooms and into the skyscraper offices at 444 Madison Ave., where we were variously biding our time over savagely tedious tasks and battling the fires of Friday-night closings. Moreover, we had formed an exclusive club of two as the only Newsweekers who had just written cover stories for the far groovier Rolling Stone, eight blocks north on Fifth Avenue but 100 million cosmic miles away. Mine had been a story on the genre-busting TV show Miami Vice. And Neal had written about Jamie Lee Curtis, then starring alongside John Travolta in Perfect, the ‘80s self-referential paean to Rolling Stone, love, and gym life.

But, this. This was big, even for the glitzoid, over-the-top ’80s. The story that would emerge, “Prince Talks: The Silence is Broken,” would be Neal’s first of three cover stories for Rolling Stone about Prince. And it would mark the beginning of a friendship that would last 31 years until Prince’s death alone in an elevator in Paisley Park, his compound in Chanhassen, Minnesota, in a manner that Prince had predicted in that very first interview, while speaking about the tunes and tones in his head that caused him to make music all night, sometimes for 20 hours straight, with the help of studio engineers who worked in shifts. “‘One of my friends worries that I’ll short-circuit. We always say I’ll make the final fade on a song one time and ...’” stretching out on a recording console, as Karlen wrote: “limbs awkwardly splayed like a body ready to be chalked by the coroner, he crossed his eyes and stuck his arm out.”

Which is pretty much what happened, give or take a few details.

And now, Neal has published a memoir, This Thing Called Life, a compulsively readable reconstruction of the relationship between the two Minneapolis natives, alternately middle-of-the-night confessional, hysterical, and athletic. “You want to play ping pong?” Prince would ask during lulls in the action. On one side of the ring was “the universally acclaimed genius” and “personification of hip” to whom other “intergalactic hipsters offered unashamed gush.” Prince was the “cultural icon who defied and cross-bred genres from fashion to funk … and whose death made the Eiffel Tower, the cover of the New Yorker, the front-page above-the-fold headline picture of the New York Times, and all of downtown Minneapolis glow purple.”

On the other, pencil ever behind his ear, was the “flamboyantly engaging” motor mouth, Brown graduate, and former bar mitzvah tutor (Hebrew name Natan Shmuel), whose Yiddish-speaking grandfather made Shabbos wine under special governmental dispensation during Prohibition for his local Minneapolis synagogue. As a special Christmas gesture, he carried Mason jars of the runoff—“let’s be honest, several illegal batches”—to neighbors in his mostly Black neighborhood of North Minneapolis. This earned him the moniker “the Wine Jew,” a designation that “guaranteed my grandmother and him, both octogenarian foreigners, safety and popularity in the neighborhood.” Years later, a check of his grandfather’s logs revealed that one regular recipient of the Yuletide wine was Prince’s father, John Nelson.

The magna carta of the tale is the initial RS interview, birthed out of the same sorts of betrayals, double-crosses, and acts of true faith that characterized post-WWII international agreements—and most of Prince’s life, which Karlen followed and participated in, to a degree, until the musician’s death from a fentanyl overdose five years ago today.

In the summer of 1985, Neal was already working on a story about Prince’s women—Wendy Melvoin and Lisa Coleman—his musical collaborators and bandmates in The Revolution. Eager for a vehicle to promote his new album, Around the World in a Day, since he would not tour it, Prince had agreed to speak to Rolling Stone through Wendy and Lisa, and to appear on the cover with them. But the two apparently liked Neal and mentioned this to Susannah Melvoin, Wendy’s identical twin sister and Prince’s then fiancée, who apparently recommended that Prince come out of his cone of silence and speak as well.

And suddenly, the cover interview would feature the man himself.

“By recommending to Susannah Melvoin (and perhaps positing the idea to Prince personally) that the Little Big Man talk to me,” Neal writes, “Wendy and Lisa did the unthinkably generous: They gave up their own cover story. True, they would make it the following year [a story Neal also wrote], but they didn’t know that, and once you give up the cover of any major magazine, it’s always doubtful that the wheel will come around again. People in rock and roll just don’t give up being on the cover of Rolling Stone.”

Shockingly, not more than a year later, after the galaxy of hits that had put Prince and The Revolution on a par with the Beatles, Prince would disband the group, forever affecting his relationship with the two women. Yet Prince could also express unshakable loyalty. When Neal arrived in Minneapolis to meet Prince for the interview, he was shocked to learn that Rolling Stone planned to yank the rug out from under him and send the music editor to poach the blockbuster interview with Prince for himself. But Prince whispered nyet and shut that idea right down.

Neal wrote: “Prince lived a life where nobody—nobody!—knew more than 15 percent of what was going on in his life and brain. Still, that 15 percent could encompass a lot of surprises.”

For two days after Neal arrived for the interview (“Just get enough,” Jann Wenner had told him, “that we can put ‘Prince Talks’ on the cover”), Prince observed him but wouldn’t speak to him. Neal gamely sweated out this further indignity until, finally, Prince greeted the reporter and invited him to go for a spin in his car. Before turning on the ignition, Prince stared straight ahead through the windshield of his 1965 bone-white Thunderbird, murmuring, “I’m not used to this, I really thought I’d never do interviews again.”

by Emily Benedek, Tablet |  Read more:
Image: Neal Karlen
[ed. See also: Karlen's 1985 Rolling Stone interview - Prince Talks: The Silence Is Broken]


via:

PGA's Player Impact Program Rewards Most Popular Players

There are 244 players listed in the current FedEx Cup points standings. The PGA Tour’s new $40 million bonus pool for high-profile players, dubbed the Player Impact Program, will impact only 10 of them. It is, in theory, a way to reward the biggest movers of the proverbial needle without taking anything away from the “other” 234 players in the tour ecosystem.

“It doesn’t really matter to me,” said one top-50 player of the program, details of which were first reported on Tuesday by Golfweek. “Good for the big guys, doesn’t matter to the little guys. Maybe if I win a major, I’ll have a chance.”

This, essentially, is the best reaction the tour could hope for. But the highly unusual formula to determine the players who will receive the money, and the unprecedented nature of the tour paying members for what is only tangentially related to their on-course results, drew mixed reactions from players across the Q-score and Meltwater Mention spectrum.

Those terms, by the way? Get used to them. They’re part of an algorithm the tour will use to rank players on their “Impact Score.” The goal, a PGA Tour spokesman told Golf Digest, is to “recognize and reward players who positively move the needle.”

The five criteria to identify these players are as follows:
  • Popularity on Google search
  • Nielsen Brand Exposure rating, which measures the value a player delivers to sponsors via his total time featured on broadcasts
  • Q-rating, a metric of the familiarity and appeal of a player’s brand
  • MVP rating, a measure of how much engagement a player’s social media and digital channels drive
  • Meltwater mentions, or the frequency with which a player is mentioned across a range of media channels.
Noticeably absent from the criteria is any direct measure of on-course success. The initial report mentioned FedEx Cup standing would be incorporated into the calculations, but the PGA Tour confirmed it was not actually part of the formula.
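[ed. The tour hasn’t published its formula, but a composite ranking built from those five criteria would look roughly like the sketch below. The weights and player numbers are entirely invented for illustration.]

```python
# Hypothetical "Impact Score": a weighted sum over the five published criteria.
# The PGA Tour has not released its weights; the values here are made up.
CRITERIA = ["google_search", "nielsen_exposure", "q_rating",
            "mvp_rating", "meltwater_mentions"]

def impact_score(metrics: dict, weights: dict) -> float:
    """Weighted sum of normalized (0-1) metric values."""
    return sum(weights[c] * metrics[c] for c in CRITERIA)

weights = {c: 0.2 for c in CRITERIA}   # equal weighting, purely an assumption
players = {
    "Player A": {"google_search": 0.95, "nielsen_exposure": 0.90, "q_rating": 0.85,
                 "mvp_rating": 0.80, "meltwater_mentions": 0.92},
    "Player B": {"google_search": 0.40, "nielsen_exposure": 0.55, "q_rating": 0.50,
                 "mvp_rating": 0.35, "meltwater_mentions": 0.45},
}

# Rank players by score; the top 10 would share the $40 million bonus pool.
for name in sorted(players, key=lambda p: impact_score(players[p], weights), reverse=True):
    print(name, round(impact_score(players[name], weights), 3))
```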

A program of this kind had been in discussion for multiple years, and the Player Advisory Council always understood the value in rewarding the tour’s highest-profile players.

“I had no issue with it,” Billy Horschel, a PAC member, told Golf Digest. “When you look at it, there’s maybe 10 to 30 guys that really push the tour and bring in the money, have a transcendent personality, get a lot of attention. They’re the reason we play for as much as we do. We don’t reward mediocrity.”

The actual implementation of the program is widely seen as a response to the Premier Golf League, a potential rival to the PGA Tour that garnered significant attention in early 2020 with the promise of offering a guaranteed-money structure to entice away top players. But the upstart venture, which was backed by Saudi Arabian financiers, lost steam when several stars—led by current PAC chairman Rory McIlroy, the first marquee player to publicly denounce the PGL—pledged loyalty to the PGA Tour.

“There's a little bit of envy [among the rank and file membership],” said one multiple-time PGA Tour winner about the program, which has been in place since January. “That it's not fair, that it's using $40 million not to better our game or our sport or the tour, that they're just giving $40 million to the top 10 players to prevent them from playing in another league, which is the absolute worst reason to do it. If you want to give it to them because they deserve it that’s one thing. To do it to prevent them from making an irrational decision, I feel like is the wrong reason to do it."

by Daniel Rapaport, Golf Digest | Read more:
Image: WL
[ed. Bad idea. Everything's becoming transactional these days.]

Friday, April 23, 2021

Lesson From the Old New Deal: What Economic Recovery Might Look Like in the 21st Century

When the Green New Deal reemerged into headlines in November 2018, unemployment in the US sat at 3.7 percent. Even supporters of the program voiced warranted skepticism about its viability. Sure, the climate crisis is important, but the government hardly ever spends huge sums on big social programs anymore—least of all when the economy appears to be doing relatively well, by conventional accounting. The window for a massive stimulus opens when there’s a recession, and we weren’t in one. Times have obviously changed since then, although the path to an ambitious climate response remains far from certain.

Joe Biden will be the president by the time this book is released, having won decisively in an election that should have been by all accounts—given the blood on Trump’s hands—a blowout. Instead, Trump collected ten million more votes than in 2016, and Democrats lost seats where they were expected to gain them. After run-off elections in Georgia, the party managed to win back control of the Senate, held by the narrowest of margins. Biden was pushed by movements during his campaign to adopt a climate platform more ambitious than the one he ran on in the primary. But his administration will be hard-pressed to get any of that through Congress, left mainly to find creative uses for the executive branch—that is, if he decides to treat his $2 trillion commitment to a green-tinted stimulus as anything more than lip service to progressives and isn’t completely shut down by the 6-3 right-wing majority on the Supreme Court. Democrats’ underwhelming performance in 2020, moreover, doesn’t bode well for winning back more power in upcoming elections. If anything, there’s much more to be lost.

Understanding what the road toward anything like a Green New Deal looks like now, when all manner of crises are boiling over, means taking its namesake seriously. The New Deal—in all its deep flaws and contradictions—was more than just a big spending package that helped to drag the US out of the Great Depression. It reimagined what the US government could do, what it was for, and who it served. To effect such a drastic sea change in this country’s politics, it did something climate policy in the US has historically struggled with: it made millions of people’s lives demonstrably better than they would have been otherwise. That, in turn, helped solve the other big dilemma facing a Green New Deal and just about any major progressive legislative priority: the tangible mark New Deal programs left in nearly every county in the US helped to build a sturdy Democratic electoral coalition that could bat off challenges from the right and endure for decades. Even as many of its gains have been clawed back by a revanchist right, hallmarks like Social Security remain so broadly popular that even the GOP has stopped trying to go after them. A Green New Deal should aim even higher.

Like today, the bar for successful leadership some 90 years ago was pretty low. A very rich man with even richer friends, Herbert Hoover was mostly blind to the effects of the Great Depression on working people and for a while denied there was any unemployment problem at all. Before becoming president, Hoover had made his fortune in mining, transforming himself from poor Quaker boy to lowly engineer to magnate. He gave away large chunks of his fortune to charity and fancied himself both a man of the people and a magnanimous captain of industry. Hoover assumed his fellow businessmen were philanthropic types, too. As he would find out in the waning days of his administration, America’s businessmen might fund libraries and museums, but they had neither the will nor the ability to fix the problem they had helped create. The Depression defined and destroyed his administration and nearly took down the whole concept of liberal democracy with it.

In May 1930—with an unemployment rate screeching past 20 percent—Hoover assured the US Chamber of Commerce that “I am convinced we have now passed the worst. The depression is over.” That December, his State of the Union address promised that “the fundamental strength of the Nation’s economic life is unimpaired,” blaming the Depression on “outside forces” and urging against government action.

“Economic depression,” he said then, “can not be cured by legislative action or executive pronouncement. Economic wounds must be healed by the action of the cells of the economic body—the producers and consumers themselves.” Ideologically opposed to the idea of state intervention in business, Hoover that year had convened a compromise: the Emergency Committee for Employment, to gently nudge the private sector into putting 2.5 million people back to work through local citizens’ relief committees, comprised mostly of local officials and business executives. After several months it hadn’t worked; members of the committee could point to no evidence that it had created any jobs at all. Committee head Arthur Woods petitioned the White House to create a public works program with federal funding instead. Hoover refused, and the committee withered away shortly afterward as unemployment continued to skyrocket. Its replacement was an advertising campaign coaxing individuals to give to charity. Announcing the plan via radio address, Hoover bellowed that “no governmental action, no economic doctrine, no economic plan or project can replace that God-imposed responsibility of the individual man and woman to their neighbors.” Just before the 1932 election, Hoover warned that a New Deal—what Franklin Roosevelt was campaigning on—would “destroy the very foundations of our American system” through the “tyranny of government expanded into business activities.”

Hoover had a relatively successful career up until the crash, with a well-regarded run as secretary of commerce that included his successful management of the Great Mississippi Flood of 1927 by marshaling public and private resources toward recovery. That Hoover is widely remembered as a loser is thanks mostly to who and what he lost to. Roosevelt’s blowout victory in the 1932 election—where he won 42 of 48 states—ushered in a profound change in American life. With it came 14 years of uninterrupted, one-party control over the White House and both chambers of Congress, secured not by the kinds of authoritarianism that were common through that era, and which well-heeled American elites mused might be needed, but by democratically elected Democratic majorities. Accounting for two brief interruptions just after the end of World War II, Democratic control would extend on for a total of 44 years in the Senate and 58 years in the House.

Until he left office, Hoover refused to budge on his overall approach, as he would through the rest of his life. He pleaded with Roosevelt to denounce the agenda he had just run on, which included such things as widespread unemployment insurance, a job guarantee for the unemployed, tackling soil erosion, and putting private electric utilities into public hands. As the financial system collapsed, the unemployment rate floated around 25 percent, and fascism was on the march in Europe, Hoover did nothing. Federal Reserve chairman Eugene Meyer begged him to reconsider and declare the bank holiday he knew that Roosevelt was already planning as president-elect. “You are the only one with the power to act. We are fiddling while Rome burns,” he told Hoover. The president was unmoved: “I have been fiddled at enough and I can do some fiddling myself.”

Hours after taking the oath of office, Roosevelt and his top advisers embarked on a marathon session to save and restore faith in a banking system on the verge of collapse. Within 36 hours, the administration declared a nationwide bank holiday. Before it ended, on the afternoon of March 9, Roosevelt spent two hours presenting one of the earliest New Deal programs to his closest advisers. It would be a jobs program, he explained, that would “take a vast army of these unemployed out to healthful surroundings,” doing the “simple work” of forestry, soil conservation, and food control. By that evening, the program’s final report explains, the proposal was drafted “into legal form” and placed on the president’s desk. At ten, he convened with congressional leaders who brought it to Congress on March 21. It was signed into law on March 31, and the first recruits of the Civilian Conservation Corps (CCC) were taking physicals by April 7 before being bused from their homes in New York City to Westchester County, freshly issued clothes in hand.

By July, the program had established 1,300 camps for its 275,000 enrollees. Between 1933 and its end in 1942, the CCC’s workers—average age 18.5, serving between 6 months and 2 years—built 125,000 miles of road, 46,854 bridges, and more than 300,000 dams; they strung 89,000 miles of telephone wire and planted 3 billion trees. Among the most expansive and maligned of New Deal programs, the Works Progress Administration—derided as full of boondoggles and government waste—built 650,000 miles of roads, 78,000 bridges, and 125,000 civilian and military buildings; WPA workers served 900 million hot lunches to schoolchildren, ran 1,500 nursery schools, and put on 225,000 concerts. They produced 475,000 works of art and wrote at least 276 full-length books. From 1932 to 1939, the size of the federal civil service grew from 572,000 to 920,000.

The WPA’s predecessor, the Civil Works Administration, created 4.2 million federal jobs over the course of a single winter. Much of that work was in construction, but the program also employed 50,000 teachers so that rural schools could remain open, rewilded the Kodiak Islands with snowshoe rabbits, and excavated prehistoric mounds, the results of which ended up in the Smithsonian. In the first year of its operation, 1939, the Civil Aeronautics Board built 300 airports. They did it all without so much as a cell phone or computer.

Like the original, a Green New Deal won’t—if it’s successful—be a discrete set of policies so much as an era and style of governance. It will be the basis of a new social contract that sets novel terms for the relationship between the public and private sector and what it is that a government owes its people. Likewise, the New Deal was designed—learning as it went—to solve a problem the United States had no blueprint for: creating a welfare state capable of supporting millions of people essentially from scratch and with a wary eye toward those countries abroad that were handling a catastrophic economic meltdown in very different, far crueler ways. The New Deal might be best described by a spirit of what Roosevelt referred to as “bold, persistent experimentation”: flawed, contradictory, ever-evolving, and very, almost impossibly big. “It is common sense,” he said in the same speech, “to take a method and try it: if it fails, admit it frankly and try another. But above all, try something.” More than giving bureaucrats carte blanche to move fast and break things, the New Deal crafted a container in which innovation and experimentation could take place, providing a combination of ample public funds and rigorous standards, all to be overseen by a set of dogged administrators. As Paul Krugman would write some 75 years later, the “New Deal made almost a fetish out of policing its own programs against potential corruption,” well aware of the hostility its new order would face from those invested in continuing on with business as usual.

by Kate Aronoff, LitHub |  Read more:
Image: uncredited


Pascal Verzijl, ‘Little Things’ Analogue collage 2021
via:

Therapy Without Therapists

Americans have been getting sadder and more anxious for decades, and the economic recession and social isolation from COVID-19 have accelerated these trends. Despite increased demand for mental health services, those who seek treatment can’t get it. Most people seeking care overwhelmingly prefer psychotherapy over medication, yet they are more likely to be prescribed an antidepressant, often from their primary care provider.

The reasons are fairly obvious. Therapy is expensive. Private insurance companies don’t want to pay for unprofitable, long-term services provided by highly skilled (i.e., high-priced) professionals. When insurance companies do reimburse therapists for their services, they do not pay a living wage. Nor can therapists afford the prohibitive barriers to managing insurance claims—therapists report that most of their patients pay out-of-pocket for therapy or receive minimal insurance coverage for mental health services. When healthcare is privatized, socially useful services are scarce or nonexistent. The solution is equally obvious. Healthcare should be a universally-available public good.

Unsurprisingly, the healthcare industry has reframed this straightforward problem and its straightforward solution to turn a profit. According to industry leaders, the problem is not that a market-driven healthcare system unequally distributes much-needed care. Rather, the problem for them is that the provision of mental health services is not entirely subsumed by capital’s law of motion. Mental healthcare, by their logic, ought to be further scientifically managed to cut costs and increase efficiency.

Due to the economic imperatives of the system, clinical scientists and health service researchers have done their part to rationalize this logic. Designing brilliant studies, these scholars tell industry leaders what they want to hear—that the future of mental healthcare means fewer clinicians, less care, and more automation. At the National Institute of Mental Health Services Research Conference in 2018, Gregory Simon, a psychiatrist and public health scholar for Kaiser Permanente, warned of the coming transformations in the delivery of mental healthcare:
While the fourth industrial revolution has been transforming commerce and industry, and most of science, mental health services remain confidently ensconced in 19th century Vienna [displays an image of Sigmund Freud]. But not for long. The revolution is coming to us.
According to Simon, the fourth industrial revolution will involve the intensification of the division of labor through methods such as task-shifting and the widespread use of digital technologies. Dr. Simon prophesied that mental health “consumers” will soon ask their voice-activated devices: “Alexa, should I increase my dose of Celexa?” Dr. Simon needn’t have looked too far into the future. The transformations he anticipated have already radically reshaped the provision of mental healthcare—a revolution that has transpired behind the backs of both therapist and patient alike.

The Division of Labor in Mental Healthcare

In the past several decades, healthcare in the US has increasingly resembled an assembly line, with the labor process atomized into its component parts and assigned to different workers. Task-shifting is the preferred term by health service researchers for this increasing division of labor. It refers specifically to the process by which tasks from professionals with higher qualifications are delegated to those with fewer qualifications or to a new cadre of employees trained for the specific healthcare service. Recently clinical tasks have not just been passed on to lesser-skilled workers, but also to lay people and even to patients themselves. (...)

Task-shifting is already the norm in medicine and is only increasing as the US faces a shortage of physicians. It is common for patients to visit their doctors and have their body weight and blood pressure measured by medical assistants, to have their blood drawn by phlebotomists or nurses, and to have their responses to physicians’ questions be recorded by medical scribes. This increased division of labor means that physicians only work at the top of their degree qualifications and lesser-skilled workers perform simple clinical tasks at a lower cost. For fairly routine visits, like yearly check-ups, physicians are increasingly being replaced by physician assistants. According to the US Bureau of Labor Statistics, the median annual salary of a physician assistant is $112,260 whereas the median salary of a physician is $208,000. It is no wonder that as health systems Taylorize medicine, physician assistants are one of the fastest growing professions in the country.

To further deskill laborers and make them appendages to machines, biotechnology firms have developed products that automate these routine clinical tasks (e.g., blood pressure monitors, automatic brain scan image processors, etc.). Under a scientifically managed healthcare system, healthcare services are spread across many hands, reducing continuity of care. The proliferation of non-physician medical roles decreases total compensation for healthcare workers, but most importantly this increased fragmentation often reduces the quality of care, putting patients at risk. (...)

Due to the financial incentives introduced by the managed care system, psychiatrists—who earn an average of $220,430 per year after eight years of medical training—rarely conduct psychotherapy and devote most of their time to disseminating psychopharmacological treatments. They have been replaced by a cheaper labor force of lesser-educated clinicians. The majority of psychotherapy is now provided by clinical social workers, who receive two years of graduate training and earn an average annual salary of $50,470, followed by a long distance by clinical psychologists, who attend five to seven years of school and earn an average annual salary of $87,450. (...)

The Rise of Community Health Workers

The latest “innovation” to deskill mental healthcare workers has been to displace professionals entirely. Researchers have increasingly propagated the effectiveness of training lay people to provide brief therapy in lieu of licensed mental health providers. Though the stated rationale for training non-professionals, termed community health workers, is to integrate knowledge from traditional healers and communities to provide culturally competent care, their real function is to cut labor costs and put money back in the hands of corporate hospital chains. (...)

Community health worker models often draw inspiration from volunteer programs formed in resource scarce, low-and-middle-income countries in response to the lack of public or private infrastructure for mental healthcare. For example, one of the most revered volunteer community health worker models, Nepal’s Female Community Health Volunteer (FCHV) program, has been widely lauded for its expansive base of over 50,000 volunteers who offer counseling and necessary health services to women and families across the country. The FCHV is partly responsible for Nepal’s sharp declines in child and maternal mortality rates, and the public hospital system has integrated these exemplary volunteers into their service model.

However impressive the work of these women, it should go without saying that they should be adequately remunerated. Further, if they are providing essential health services, the care they provide should be incorporated into the public health system, not contingent upon a reserve army of volunteers. As several social scientists have noted, attempting to import public health models from resource-scarce contexts to high-income countries is ethically dubious, particularly if the model hinges on the exploitation of an unpaid workforce. The US has the necessary infrastructure and resources to adequately hire and compensate professionals. The imposition of scarcity and cheap labor in the US is a policy decision, not a rational response to real material constraints.

by Briana Last, Damage |  Read more:
Image: Getty via

Toward a Better Understanding of Systemic Racism

As an academic librarian in the United States, I have watched with dismay as Critical Race Theory (CRT) has become the dominant framing of the continuing impact of America’s terrible racial history on group well-being metrics. CRT has not only spawned jargon-filled institutional diversity, equity and inclusion policies, but affects individual academic departments and libraries. The way in which it constrains inquiry and pre-biases our research is not only evident in the classroom, but is beginning to influence how we academic librarians provide resources and teach research skills. CRT framing has even found its way into our job descriptions and library policies, and has taken on the character of a political or religious litmus test. Its slippery discourse carelessly uses loaded terms such as white supremacy and racism to describe downstream outcomes, rather than intentions and attitudes. It is increasingly hostile to the fundamentals of effective research.

Perhaps even worse, it risks obscuring the actual ways in which the shameful racial history of the US set in motion the present-day observed racial disparities, and it prevents us from formulating the policies that might best address such disparities in the present. Both free inquiry and unbiased research, and the ability to help groups disproportionately impacted by our history, will become increasingly difficult if CRT continues to be the only way of thinking about systemic racism.

CRT makes two central claims. The first contains a crucial insight from the civil rights movement, without which we could make little sense of our cultural and social reality. The second, however, asserts that disparities themselves constitute racism and are evidence of and perpetuate white supremacy and must therefore be targeted by policies. This logical sleight-of-hand threatens both the cohesion of any pluralistic society and prevents us from addressing the actual problems that lead to racial disparities.

CRT approaches, then, rest on two claims, the second of which is believed to flow from the first.

Claim One: Systemic Racism

The first claim is that blacks suffered not only two hundred and fifty years of slavery, resulting in a direct and massive group-level difference in wealth, but another subsequent one hundred years of official subjugation and segregation and denial of the public goods that underwrite flourishing. This has led to group-level disparities in human capital development, resulting in, among other things, disparate outcomes. This is an inescapable fact. The modern racial landscape is not caused by something fundamentally wrong with black people—as true a white supremacist or racist would claim.

For example, the higher crime and victimisation rates among black communities could, as James Forman Jr. has argued, be the product of an honour culture put in motion by Jim Crow-era underpolicing of any crime that did not disrupt the then racial and economic hierarchy. Higher poverty rates can be traced in large part to the economic legacy of slavery, as well as to various racist policies that prevented the acquisition of wealth.

Rerun the same multifaceted group immiseration experiment with any group, and you will get largely the same results. If blacks had immigrated to the US and been treated like, say, Norwegian immigrants, these massive developmental disparities would probably be largely absent. Although immigrants can certainly arrive with different cultural and economic averages that can manifest in some group-level differences, given the particular traits needed to succeed under different cultural circumstances, the massive differences in flourishing between black and white Americans are certainly impacted by our history around race.

In the US, discrimination against blacks has historically been orders of magnitude more profound than discrimination against other ethnic groups. Even without the racist post-hoc justifications of the practice, slavery would have had group-level ramifications on its own, given the near total lack of wealth held by blacks in 1865. Add a century of segregation and racism and you have a situation unmatched in its capacity to reproduce group-level generational misery.

This empirical claim about upstream group-level causation does not necessarily imply specific downstream personal or policy solutions. In fact, we need to consider a wider range of possibilities for reducing group-level suffering.

Where CRT runs into serious conceptual trouble, though, is in its second central claim.

Claim Two: All Disparities Are the Result of Continuing Racism

The second claim is that, because these disparities were set in motion by America’s reprehensible racial history, each of them is literally caused by this history in both the group and individual instance. Every disparity observed today stems from racism and white supremacy. Those who fail to seek a forced repair of the disparity are guilty of racism and perpetuating white supremacy. Any judgement, system or policy that perpetuates a disparity that can be traced to a racist past is itself white supremacist and racist. Since racism is the underlying cause of all disparities, large and small, insufficient alarm and concern at these disparities is also racist.

This second claim allows anti-racist ideology to be weaponised by both moralists and authoritarians.

This presents a dilemma: if racist policies have resulted in disparate flourishing metrics, why not address these disparities in every arena in which they exist?

The error here is imagining that group disparities continue to be neatly tied to the racism that set them in motion. This leads to a strange obsession with the disparities themselves and not their upstream, proximate causes, which at the individual level are not racially unique.

Conservative economist Glenn Loury has convincingly argued that present disparities are the result of developmental challenges that may have arisen as a consequence of racism, but no longer depend on it. Leftist political scientist Adolph Reed Jr. has reached a similar conclusion, from a Marxist perspective: the developmental problems of the black community are simply the result of greater exposure to a destructive political economy that can handicap anyone’s flourishing. While this greater exposure owes its origins to racism, Reed argues that the political economy itself, not black identity, should be the focus of policy efforts, since that same political economy can be the source of misery for anyone.

Despite their ideological differences, Loury and Reed have hit on an important point: disparities, rather than being independent variables that prove racism, are the result of experiences that can cause anyone suffering. The fact that blacks suffer more from them originated in racism but is no longer tied to it.

Imagine a university that sincerely wants to reflect American demographics by having 14% of its faculty and students be the descendants of slaves. What do we do with the fact that being a successful student or faculty member requires human capital that our racial history has distributed unequally? How do you address a disparity in flourishing when there is a disparity in the human capital required for flourishing? Do we simply nullify those requirements and denounce them as racist, as CRT advocates do? Or do we give up entirely and say it’s all in the past and there’s nothing we can do, and focus solely on individual merit, as staunchly colour-blind meritocrats and opportunistic racists do?

A Better Definition of Systemic Racism

The unique history of blacks in the United States has left them more exposed to political, economic and developmental problems that can immiserate anyone. The best way to address this is to concentrate on the economic and developmental problems more broadly, and in so doing address the racial disparity without overtly racializing either problems or solutions.

by Brian Erb, Areo |  Read more:
Image: uncredited
[ed. See also: Creating an Inhabitable World for Humans Means Dismantling Rigid Forms of Individuality (Time).]

Saturday, April 17, 2021


via:

Manoucher Yektai, Untitled (Still Life), 1969 

The Blood-Clot Problem Is Multiplying

For weeks, Americans looked on as other countries grappled with case reports of rare, sometimes fatal blood abnormalities among those who had received the AstraZeneca vaccine against COVID-19. That vaccine has not yet been authorized by the FDA, so restrictions on its use throughout Europe did not get that much attention in the United States. But Americans experienced a rude awakening this week when public-health officials called for a pause on the use of the Johnson & Johnson vaccine, after a few cases of the same, unusual blood-clotting syndrome turned up among the millions of people in the country who have received it.

The world is now engaged in a vaccination program unlike anything we have seen in our lifetimes, and with it, unprecedented scrutiny of ultra-rare but dangerous side effects. An estimated 852 million COVID-19 vaccine doses have been administered across 154 countries, according to data collected by Bloomberg. Last week, the European Medicines Agency, which regulates medicines in the European Union, concluded that the unusual clotting events were indeed a side effect of the AstraZeneca vaccine; by that point, more than 220 cases of dangerous blood abnormalities had been identified. Only half a dozen cases have been documented so far among Americans vaccinated with the Johnson & Johnson vaccine, and a causal link has not yet been established. But the latest news suggests that the scope of this problem might be changing.

Whether the blood issues are ultimately linked to only one vaccine, or two vaccines, or more, it’s absolutely crucial to remember the unrelenting death toll from the coronavirus itself—and the fact that COVID-19 can set off its own chaos in the circulatory system, with blood clots showing up in “almost every organ.” That effect of the disease is just one of many reasons the European Medicines Agency has emphasized that the “overall benefits of the [AstraZeneca] vaccine in preventing COVID-19 outweigh the risks of side effects.” The same is true of Johnson & Johnson’s. These vaccines are saving countless lives across multiple continents.

But it’s also crucial to determine the biological cause of any vaccine-related blood conditions. This global immunization project presents a lot of firsts: the first authorized use of mRNA vaccines like the ones from Pfizer and Moderna; the first worldwide use of adenovirus vectors for vaccines like AstraZeneca’s, Johnson & Johnson’s, and Sputnik V; and the first attempt to immunize against a coronavirus. Which, if any, of these new frontiers might be linked to serious side effects? Which, if any, of the other vaccines could be drawn into this story, too? How can a tiny but disturbing risk be mitigated as we fight our way out of this pandemic? And what might be the implications for vaccine design in the years to come?

To answer these questions, scientists will have to figure out the biology behind this rare blood condition: what exactly causes it; when and why it happens. This is not an easy task. While the evidence available so far is fairly limited, some useful theories have emerged. The notions listed below are not all in competition with one another: Some are overlapping—or even mutually reinforcing—in important ways. And their details matter quite a bit. A better understanding of the cause of this condition may allow us to predict its reach.

by Roxanne Khamsi, The Atlantic |  Read more:
Image: DeAgostini/Getty/Katie Martin/The Atlantic

Whose Feelings Count Most in a Pandemic?

If an alien or visitor happened to take a gander at lifestyle journalism over the past six months, they might assume that even though a lot of people are losing their jobs, waiting endlessly for unemployment, or even being evicted, the majority of the country has passed the pandemic baking bread, moving out of cities, and gazing out the window wondering if every day is Wednesday. For every story about the truly devastating impact the pandemic has had on normal life, it seems that there have been countless others that do little more than document every single possible concern of the upper middle class.

Lifestyle journalism catered specifically to the needs, wants, and desires of the beans and sourdough crowd: the same affluent workers whose jobs afforded them the flexibility to work from home. During the long, dark months of the spring, while many Americans were contending with lives lived mostly indoors, countless other people were doing the work that afforded the WFH class the freedom to worry only about how to occupy their time now that they were trapped inside.

The New York Times quickly gathered their resources to create At Home, a section of gentle lifestyle content meant to quell the anxieties of their core audience, many of whom might have already escaped New York City during the worst spring months. The landing page for the section collects the various articles written for the express purpose of soothing the frazzled nerves of its readers and states its intended purpose: “We may be venturing outside, tentatively or with purpose, but with the virus still raging, we’re the safest inside,” the copy at the top of the page reads. Of course, inside was the “safest” place to be for a good long time, but even acknowledging that is a privilege. For all the Times readers who spent the spring worriedly disinfecting the groceries delivered to them by DoorDash or FreshDirect employees, there were countless other people working to make sure that the people locked in their homes, fearful of the out of doors, had food to eat. This divide was rarely noted in the lifestyle content that proliferated, most likely because it is not soothing to readers to think about the minimum wage employee riding a bicycle through rain and sleet to deliver them a pizza.

As the pandemic unfolded, I turned to the Times for recipes like many of my peers did, but quickly developed a one-sided adversarial relationship with the What to Cook This Week email newsletter, written mostly by Food section editor Sam Sifton. Cataloging the innermost anxieties of the upper class has always been the hidden directive of the paper’s Style section, but witnessing that bleed over into the Cooking newsletter became tiresome after a while.

Consider this dispatch from the July 24 newsletter, some six months into the pandemic:
Good morning. I caught a fat porgy on a home-tied fly the other day, a blind cast into clear ocean water, streaming past boulders on an outgoing tide. It wasn’t the striped bass I was looking for, but I thought it might be good for a few tacos for dinner and that hauled me out of the rut I’ve found myself in these last few weeks. It’s been freestyle mapo tofus with ground beef and chile crisp; skillet pastas with Italian sausages and plenty of kale; crema-marinated chicken grilled and doused in lime; repeat. It gets boring, frankly.
For thousands of people who have yet to leave their neighborhoods or who have been working and running the household in a capacity that does not allow for leisurely casting a line into a clear blue ocean, Sifton’s missives are comically out of touch with other, more pressing realities like juggling childcare and a full-time job. What he and so many other writers have been working against since the pandemic started is nothing more than an exploration of what it means to be bored. Sourdough, an affectation that has largely been abandoned, was an effective way to channel anxieties about an airborne virus; but baking bread is also nothing more than a hobby that adequately fills empty stretches of time while making people feel productive. Baking bread for leisure is an activity that I imagine those who do it for a living, in industrial kitchens and the like, would rather not undertake. The gap between leisure and labor here is wide.

Other, more esoteric “hobbies,” like growing scallions in jam jars, were rebranded as “novel frugality” in a piece that now feels typical of the sort written during the spring and early summer. Habits like saving Ziploc bags, regrowing the aforementioned scallions, and eating the heel from a loaf of bread were the sort of penny-pinching habits reserved for the generation that survived the Great Depression, not the rest of us who have long luxuriated in the great American pastimes of consumerism and consumption, the April story at Vox implied. These habits, which are fairly normal and do not really deserve any special mention, were documented on social media and in pieces like the one that ran in Vox. Framed as an upper-class panic about safety and minimizing trips out of the house, these behaviors are unusual only because the people in question never really had to think about frugality in a concrete way. (...)

Paying close attention to lifestyle journalism over the past six months revealed that the anxieties, concerns, and fears that are being documented are purely those of Richard Florida’s “creative class”—upwardly mobile individuals working in vaguely creative sectors who mostly congregate in cities like New York and San Francisco. These individuals value the sorts of amenities that make a city feel superior to a suburb: museums, bars, restaurants, and the ability to find a decent heirloom tomato at the height of summer. It’s worth noting that these concerns are, in the grand scheme of things, first-world problems. The trouble is that when these issues are given top billing, they appear to be the only issues that really matter. Carefully documenting the vagaries of the upper class and expecting their anxieties, hobbies, and worries to be representative of the entirety of society is a tale as old as time.

Giving space to the weird quarantine quirk that you and maybe three other people you’re friends with share isn’t self-aware—it’s simply elevating an inside joke or observation made between friends by using the platform afforded to you and presenting it as a matter of course rather than an anomaly. Much like the case of the Amazon coat, which appeared in the Times Style section in November 2019, the small observations in and around the writer’s friend groups are not representative of the experiences of others, and it is presumptuous to assume that just because something is happening to you, the experience is universal.

by Megan Reynolds, Jezebel |  Read more:
Image: Chelsea Beck

Friday, April 16, 2021

Making Sense of the ‘Border Crisis’

You may have heard in the news recently that there is a Crisis At The Border. Huge numbers of people are now clamoring at the southern border, many of them unaccompanied children. As described by people on the right, this is a crisis caused by lax enforcement. Republican politicians like Tom Cotton and “centrist” commentators like Fareed Zakaria have argued that these increased migration numbers are due to the Biden administration’s softening of (as Zakaria puts it) Trump’s “practical policies” at the border. The examples they cite include:

  • The Migrant Protection Protocols (MPP)/Remain in Mexico program—required tens of thousands of asylum-seekers to wait in dangerous Mexican border towns, without housing, healthcare, or legal help, constantly vulnerable to a booming kidnapping-for-ransom industry, while their cases proceeded before U.S. border judges
  • The Safe Third Country Transit Ban—blocked virtually all migrants at the southern border from obtaining asylum if they had passed through any third country on their way to the U.S.
  • Various short-lived agreements with countries like Guatemala and Honduras—incentivized places designated by our government as “safe third countries” for asylum-seekers to accept planeloads of migrants apprehended at our southern border, despite the large numbers of asylum-seekers fleeing those same countries.
This narrative portrays a Biden administration that has invited an uncontrollable tsunami of immigration by breaking radically with the enforcement policies of its predecessor.

Meanwhile, many people on the left have agreed that there is currently a “crisis,” not because of the increased border numbers in and of themselves, but because of the cruel and unsafe conditions under which the arriving migrants are being detained. New images have emerged of children huddled inside foil wrappings at the Donna tent facility in Texas, packed into cages made of chain-link fencing, with little apparent regard for social distancing. These photos of “kids in cages” under Biden are visually identical to the photos of “kids in cages” that once whipped up Democrats into a righteous fury against Trump: some people have denounced the Biden administration as no better than Trump, while others have tried to distinguish Biden’s policies from Trump’s. Alexandria Ocasio-Cortez, for example, has been taking heat from the left for putting out a video message warning against drawing “false equivalencies” between the Trump administration’s systematic separation of children from their parents at the border from April-June 2018, and the Biden administration’s detention of children under deplorable conditions at the border now. Among non-Republicans, we thus have competing narratives that Biden is managing the crisis as well as he can under difficult circumstances, and that Biden is in fact cynically employing the exact same enforcement tactics as Trump, knowing that partisan hypocrisy will cause his supporters to make excuses for him.

Let’s first ask ourselves: is there a Crisis At The Border? On the one hand—yes. There is always a crisis at the border, in the sense that there are always people trying to migrate across the border, and we always have huge amounts of state firepower directed at making that process as miserable and unsafe for migrants as possible. But “crisis” isn’t really the most accurate word to describe the situation, because it implies that we’re talking about a sudden, alarming deviation from a status quo. In fact, these conditions are the status quo, and have been for several decades. When the border is suddenly in the news, there is usually some weird manufacturing of consent going on, and I don’t think it’s always easy for even well-intentioned people to understand the trajectory of the opinions that these crisis narratives drive them to reflexively adopt.

To illustrate what I mean, let’s take a couple examples of Border Crises in relatively recent memory. People may remember the media frenzy about a migration “surge” at the border in 2014, during Obama’s second term. In fact, numbers-wise, 2014 wasn’t really a remarkable year. There were 486,651 apprehensions at the border, which was somewhat higher than the previous year’s total of 414,397, but considerably below the annual averages for 2000-2009, when border apprehensions of 1 million a year or more were typical. What was different was that of those 2014 apprehensions, an atypically high percentage were children and families, mostly from Central America. Not wanting to deal with the logistical, legal, and political hassle of increased numbers of children at the border, the Obama administration began capturing and interning migrant families en masse, for the express purpose of deporting them as rapidly as possible, in what President Obama called “an aggressive deterrence strategy.” Characterizing a demographic shift within otherwise typical border numbers as a “crisis” or a “surge” was a conscious political choice by the Obama administration, allowing them to justify draconian enforcement against asylum-seeking families as a necessary evil, even as the administration continued to claim that its overall enforcement strategy was aimed at “felons, not families.” Even though the Obama administration’s intended policy of indefinite detention of families at the border was ultimately blocked, detaining families who presented at the border to seek asylum nevertheless became normalized. This has resulted in a family internment system at the border that’s lasted up to the present day.

A more recent “border crisis” took place under the Trump administration in late 2018 into the spring of 2019, when the Department of Homeland Security (DHS) repeatedly claimed that the numbers of people at the border were so huge and unmanageable that they had no place to safely house people while they were processed. DHS forced suffering migrants to wait in highly visible public locations, like beneath the port of entry bridge in El Paso, while loudly proclaiming that they lacked the resources to humanely deal with the problem. These repeated claims that DHS facilities lacked bedspace were actually lies. As advocates at the border pointed out, the Trump administration temporarily emptied out numerous detention centers during this exact period, and CBP officials have since admitted that they were instructed to falsely tell people approaching the border that they had no space to process them for asylum. At the time, however, mainstream media outlets were entirely credulous toward DHS’s self-serving statements about a “crisis” throughout the fall and spring, and ran stories uncritically regurgitating this narrative. In fact, the Trump administration was deliberately inflating this “crisis” in order to set the stage for the rollout of some of its most ambitiously cruel policies in the name of “border control”—like the Remain in Mexico program, the asylum ban, and the safe third country agreements. (The systematic family separations that people associate most strongly with Trump were an experiment that lasted a few months in 2018 and then ceased; these other policies, although they made less of a splash in the news, had much longer lifespans and affected tens of thousands more migrants).

This is all to say that Crisis At The Border narratives are often pure media creations for specific political purposes, and we should always be wary of unconsciously accepting that framing when it’s presented to us. For a good illustration of why the language of border crisis can be unhelpful even when used by well-intentioned people, we have only to look to the summer of 2019, when—hard on the heels of about eight months of crisis messaging by the Trump administration—the public became extremely angry about the horrific conditions under which migrants, including children, were being detained after apprehension at the border. This, they proclaimed, was the real border crisis! But because a crisis is imagined to be an atypical, short-term phenomenon, requiring quick and decisive action in order to return to a “normal” state of affairs, political energy quickly coalesced around just throwing a bunch of “emergency” money at DHS to improve detention conditions at the border. This having been accomplished, the moment of rage quickly faded from public consciousness; DHS got a nice fat payout, which it used to buy Border Patrol agents some sick new dirtbikes and ATVs; and nothing else changed.

So what should we make of the current Border Crisis? First, the right-wing narrative that there’s currently a “surge” caused by the Biden administration’s rollback of Trump’s asylum-restricting policies doesn’t seem to add up. It’s true that Biden has taken a couple of initial steps to roll back some of the worst parts of the Trump administration’s pre-pandemic border agenda, but the numbers of people approaching the border appear to have started rising back in April 2020, well before the election. DHS currently anticipates it will apprehend 2 million immigrants at the border in 2021, which would be the highest total since 2006; but this is a speculative number based on current apprehension rates (March was an extremely high month) during a time when summary expulsions from the border have been going on for months and have stranded lots of migrants in border areas. The pandemic, together with a devastating sequence of droughts and hurricanes in Central America, has also exacerbated difficult conditions in sending countries. It’s hard to imagine a universe in which this wouldn’t affect the numbers of people seeking to migrate, regardless of who is president.

I do, however, think that the recently increased numbers of unaccompanied kids can be more directly tied to Biden’s enforcement choices. Currently, the Biden administration is continuing to deploy the Centers for Disease Control and Prevention (CDC) “public health” order wherever it sees fit, in order to bounce people back summarily from the border with zero due process. But unlike the Trump administration, the Biden administration has publicly stated that they won’t use the CDC order to block unaccompanied children. This is the most plausible explanation for why unaccompanied kids are now coming in higher numbers. Because single adults and even family units run the risk of being expelled directly from the border, it makes sense that kids would come to the border alone if they and their families want to ensure that they’re actually allowed in. If the Biden administration announced that it wouldn’t be applying the CDC order to anyone, I imagine we would see fewer “unaccompanied” kids. It’s true that kids who come to the border alone pose some unique challenges—the law requires the government to place unaccompanied kids in the custody of the Office of Refugee Resettlement until they can be connected to their family members in the U.S., and it does stand to reason that you can’t just release a child onto the street without identifying a caregiver—but the Biden administration’s choice to continue applying the CDC order to adults has likely played a role in increasing the numbers of kids in this situation. Changes in migration numbers and demographic composition are influenced by a whole host of push and pull factors, one component of which is the government’s own enforcement policies (as publicly stated) and practices (as actually observed by prospective border-crossers).

by Brianna Rennix, Current Affairs | Read more:
Image: David Peinado Romero (Shutterstock)