Thursday, April 8, 2021

Bruce Wrighton, Basement, Holy Trinity Greek Orthodox Church, 1986
via:


Isaiah Gulino, Rose Gold, 2020
via:

Mass Incarceration Was Always Designed to Work This Way

As of 2019, the United States has less than 5 percent of the world’s population but 25 percent of its prison population.

In 2019, 2.2 million people were locked in the country’s (adult) jails and prisons. If you add in people locked in juvenile detention, immigrant detention, and military prisons, that number rises to approximately 2.3 million people locked behind bars.

Then there are the people under correctional supervision, which means they’re under some form of surveillance and restriction either instead of or in addition to a jail or prison sentence. These forms of supervision include house arrest, electronic monitoring, parole, and probation. Individuals are not locked behind bars, but their movements are narrowly circumscribed and any violation of the myriad rules can result in jail or prison. If you count them, the total number of people under some form of correctional control rises to 6.7 million.

At least 4.9 million people cycle through the nation’s 3,163 jails each year. The majority of people in jail have not been convicted.

Some will spend a day or two in jail before being released either on bail, meaning that someone paid money for their release pending trial, or on their own recognizance, meaning that a judge allowed them to go home so long as they promise to return to court. Others remain in jail because they cannot afford to post bail.

How did we get to this point? Some might assume it’s because our criminal legal system is broken and in need of repair. But if we look at the history of prisons in the United States, we can see that the system of mass incarceration isn’t merely flawed or broken but is operating as it was designed: to sweep society’s problems (and people seen as problematic) behind gates and walls where few have to see them.

The modern-day prison is a relatively new phenomenon. Before 1773, people were typically jailed while awaiting judgment; their punishments were generally physical and vicious—floggings, time in the stocks, and executions. In the United States, imprisonment as punishment began with the opening of Philadelphia’s Walnut Street Jail in 1773.

by Victoria Law, LitHub |  Read more:
Image: uncredited
[ed. We should also discuss widespread privatization and the economic incentives (to corporations, communities, etc.) that perpetuate the problem.]

Alone, Together

In a nondescript building on West 26th Street in New York City, you’ll find Paddles, “the friendly S&M club.” It’s an after-hours space that bills itself as “a playground for sane people who are into: whipping, spanking, bondage, domination, submission, foot fetishes, cross dressing, and all other fetishes,” where once a month, closer to happy hour than last call, the fetish du jour is mutual masturbation.

The event is organized by New York Jacks, a group that hosts regular meetings for men to gather in relative public to do something nearly every man does in relative private. They take over Paddles on Tuesdays, and on Sundays host a meeting on the third floor of a building on West 38th Street.

I first attended a New York Jacks meeting with a friend on a Tuesday a few weeks ago. After fortifying ourselves with a beer around the corner and waiting until what felt like a New York–appropriate hour after the event’s official start time, we walked through the open door and down a twisting concrete staircase, listening for any indication of a meeting in progress. We passed a few men coming the other way, looking flushed and conspiratorial, and opened a door into a short hallway with a ticket window, where a young, fully clothed man asked languorously, “Here for Jacks?”

Before arriving, we’d discussed our apprehensions. What if it’s empty? What if we see someone we know? Based on stories from friends, this is a not-uncommon occurrence at some other gay sex events in the city, and the fear amounts to one of being exposed somehow — not just physically, but, in the case of New York Jacks, as a “bator”: someone who seeks out mutual masturbation as a discrete sexual experience. But as with being seen on Grindr, or in the backyard of a gay bar in Brooklyn, the “exposure” would be mutual.

As in every fetish community, the fear of being outed as a participant in something deemed weird or pervy keeps many people, including bators, in the closet about their interests. All things considered, mutual masturbation is pretty tame — masturbation is something most people already do, albeit alone — but the popular notion of masturbation as somehow being failed sex, the purview of lonely internet trolls, leads many would-be avid mutual masturbators to keep their sexual cards close to their chests.

Among men who have sex with men, mutual masturbation is often seen as sex-adjacent, rather than a sex act in its own right. In some cases, this is a heterosexual fig leaf (“It’s not sex, we’re just being dudes”); in others, it’s treated as an hors d’oeuvre. More than one friend has told me about engaging in mutual masturbation as a sort of compromise in a disappointing hookup situation, as in, “I went home with him but I was tired, so we just jerked off.” But for many men, myself included, mutual masturbation is not merely the “I’m not really hungry; I’ll just have a salad” of sex; rather, it’s an experience to be sought for its own sake. (...)

In an effort to better understand the appeal of mutual masturbation and the community that seeks it, I went looking for other men for whom jerking off together is not merely sex-adjacent, but an important part of a balanced sex life. I found these men among my friends, on social media, as well as in pockets of the internet I hadn’t yet reached into: notably Kik, a text-messaging app, and BateWorld, a global social network for masturbators that functions as a kind of Grindr–Facebook hybrid, with all the HTML sparkle of classic MySpace. I talked to men who associated their attraction to mutual masturbation with some of their earliest memories of queer desire; men who started seeking j/o buds in an effort to hook up with “straight” guys; men who simply consider it a safer, simpler alternative to other kinds of sex. Entering these spaces opened them up for me personally in a way I hadn’t thought I needed.

by John Sherman, BuzzFeed News |  Read more:
Image: John Taggart for BuzzFeed News
[ed. Seems like a good safe sex alternative. See also: Why Straight Men Are Joining Masturbation Clubs (GQ).]

Paul Simon

Wednesday, April 7, 2021

Golf's Surge in Popularity in 2020: Even Better Than Predicted

The National Golf Foundation hinted throughout the summer and fall that 2020 was showing a remarkable surge in both participation and rounds played despite the shutdowns and uncertainties of the Covid-19 pandemic. Its final report for the year might actually be more positive than even it predicted, in what the golf industry group called “a year of resurgence.”

How good was 2020 historically? Tiger Woods good.

Specifically, the growth numbers set all-time records in many categories, and the number of golfers coming to the game in 2020 was benchmarked only against some of the greatest moments in Woods’ career, including his debut major title in 1997 and his epic U.S. Open win on a broken leg in 2008.

“There hasn’t been this much optimism and new activity in the golf business since the turn of the century,” said Joe Beditz, NGF president and CEO, in a recent email to the golf industry group, noting “spring shutdowns gave way to an unprecedented summer and fall in terms of play, golfer introductions and reintroductions, and robust, late-season spending.”

The NGF count showed 24.8 million golfers in the U.S. in 2020, an increase of 500,000, or 2 percent, over 2019. It is the largest net increase in 17 years. New players (both beginners playing their first round and lapsed golfers coming back to the game for the first time in years) numbered 6.2 million, the highest that number has ever been. Last year also saw the largest percentage increase in beginning golfers and the biggest gain in youth golfers coming to the game since Tiger’s 1997 Masters win.

Women golfers were also part of the 2020 surge, jumping by 450,000, or 8 percent, year over year and making up nearly a quarter of all golfers, with a count of some six million. That is the highest number in the last five years.

The NGF also counts total golf participants by factoring in off-course experiences as well, and that number swelled the overall count to 36.9 million, up 8 percent year-over-year and a near 20 percent gain in the last five years.

Of course, the pandemic’s challenges still took a toll with a larger-than-usual volume of players who opted out of the game for concerns over the pandemic or economic challenges. Still, 2020 marked the third straight year more golfers came to the game than left it, and the NGF’s study of those who opted out of golf in 2020 suggests they’re more eager than ever to opt back in. The number of what the NGF survey calls “very interested non-golfers” reached 17 million, a 1.5 million increase compared to 2019 and 4.2 million more than in 2016.

The net gain in golfers also contributed to a healthy boost in rounds played, despite many states restricting or even banning play for weeks or months. The NGF estimated a loss of 20 million rounds in the spring with course closures and restrictions, but by year’s end, 502 million rounds were recorded. That was 61 million more than in 2019, nearly a 14 percent increase and the largest one-year gain other than in 1997 when Tiger’s booming popularity saw a 63-million-round increase.

The NGF research indicates that the biggest driver of the rounds-played surge wasn’t exclusively new golfers. Rather, it was the “core golfers” (more than eight rounds a year) who really upped their games. The report cites “a passionate cohort of existing players (roughly 20 percent of the core-golfer population)” who fueled the boost in the average rounds played per golfer to 20.2. That is the highest average since the statistic started being tracked in 1998. Despite being in the age groups most at risk during the pandemic, older players still played the most golf. Those aged 60-69 logged an average of 29 rounds in 2020, while those golfers 70 and over played an average of 40 times last year. However, millennials (those aged 18-34) increased their rounds played by 13 percent compared to 2019, and 44 percent of all those who played golf at least once on a golf course in 2020 were under the age of 40—with as many under the age of 30 as over the age of 60. (...)

There was undeniable but measured enthusiasm across the golf equipment industry, too. Equipment sales mirrored the energy in rounds played, recovering from a negative trendline in March and April that saw dollars dip by 31 percent. By year’s end, fueled by the biggest July in history and the second highest quarter ever (behind only the quarter after Tiger’s riveting 2008 U.S. Open win), total sales of clubs and balls were at $2.9 billion in 2020, matching 2019’s numbers. David Maher, president and CEO of Acushnet, the parent company of the Titleist and FootJoy brands, noted in his recent summary of the company’s full-year earnings that the extraordinary gains might be an unrealistic standard for 2021, but the general direction is telling.

“We are still in a massive transition,” he said. “2020 was a massive transition year, 2021 will be a massive transition year. When the dust settles, hopefully sooner versus later, [the way] I tend to look at it is, ‘Okay, what's the world going to look like, 2022 versus 2019?’ And I think the golf landscape is going to have more energy, more momentum, more golfers.”

by Mike Stachura, Golf Digest |  Read more:
Image: Chris Sattlberger
[ed. As a regular golfer (just trying to get a tee time, and get around in a reasonable time) this is not good news.]

Lexington Lab Band

[ed. See also: here for a bunch of other great LLB covers. Plus, forgot this one.]

How mRNA Technology Could Change the World

Synthetic mRNA, the ingenious technology behind the Pfizer-BioNTech and Moderna vaccines, might seem like a sudden breakthrough, or a new discovery. One year ago, almost nobody in the world knew what an mRNA vaccine was, for the good reason that no country in the world had ever approved one. Months later, the same technology powered the two fastest vaccine trials in the history of science.

Like so many breakthroughs, this apparent overnight success was many decades in the making. More than 40 years had passed between the 1970s, when a Hungarian scientist pioneered early mRNA research, and the day the first authorized mRNA vaccine was administered in the United States, on December 14, 2020. In the interim, the idea’s long road to viability nearly destroyed several careers and almost bankrupted several companies.

The dream of mRNA persevered in part because its core principle was tantalizingly simple, even beautiful: The world’s most powerful drug factory might be inside all of us.

People rely on proteins for just about every bodily function; mRNA—which stands for messenger ribonucleic acid—tells our cells which proteins to make. With human-edited mRNA, we could theoretically commandeer our cellular machinery to make just about any protein under the sun. You could mass-produce molecules that occur naturally in the body to repair organs or improve blood flow. Or you could request our cells to cook up an off-menu protein, which our immune system would learn to identify as an invader and destroy.

In the case of the coronavirus that causes COVID-19, mRNA vaccines send detailed instructions to our cells to make its distinctive “spike protein.” Our immune system, seeing the foreign intruder, targets these proteins for destruction without disabling the mRNA. Later, if we confront the full virus, our bodies recognize the spike protein again and attack it with the precision of a well-trained military, reducing the risk of infection and blocking severe illness.

But mRNA’s story likely will not end with COVID-19: Its potential stretches far beyond this pandemic. This year, a team at Yale patented a similar RNA-based technology to vaccinate against malaria, perhaps the world’s most devastating disease. Because mRNA is so easy to edit, Pfizer says that it is planning to use it against seasonal flu, which mutates constantly and kills hundreds of thousands of people around the world every year. The company that partnered with Pfizer last year, BioNTech, is developing individualized therapies that would create on-demand proteins associated with specific tumors to teach the body to fight off advanced cancer. In mouse trials, synthetic-mRNA therapies have been shown to slow and reverse the effects of multiple sclerosis. “I’m fully convinced now even more than before that mRNA can be broadly transformational,” Özlem Türeci, BioNTech’s chief medical officer, told me. “In principle, everything you can do with protein can be substituted by mRNA.”

In principle is the billion-dollar asterisk. mRNA’s promise ranges from the expensive-yet-experimental to the glorious-yet-speculative. But the past year was a reminder that scientific progress may happen suddenly, after long periods of gestation. “This has been a coming-out party for mRNA, for sure,” says John Mascola, the director of the Vaccine Research Center at the National Institute of Allergy and Infectious Diseases. “In the world of science, RNA technology could be the biggest story of the year. We didn’t know if it worked. And now we do.”

by Derek Thompson, The Atlantic |  Read more:
Image: Adam Maida/The Atlantic


Joaquim Sunyer - Paisaje de Ceret, 1911
via:


Chris Austin
via:

Socialism Is as American as Apple Pie

One of the strengths of the Republican Party is its message discipline. When it finds an issue that works, it beats that issue to death, flogging it long after it stops working. Thus after the Civil War, the party waved the “bloody shirt” by attacking Democrats for opposing the war, which created a continuous run of Republican presidents between 1868 and 1912, punctuated only by a single Democrat, Grover Cleveland.

Another bloody shirt that Republicans have waved forever and plan to wave again this election cycle is “socialism.” I put the term in quotation marks because to hear Republicans tell it, virtually everything government does is socialism; it is utterly foreign to the United States, and it cannot be implemented without imposing tyranny on the American people, along with poverty and deprivation such as we see today in Venezuela, where socialism allegedly destroyed the country.

On July 17, Vice President Mike Pence gave a preview of the coming Republican strategy, one aimed at a mythical socialism rather than the actual policies of Joe Biden. Said Pence (emphasis added):

Before us are two paths: one based on the dignity of every individual, and the other on the growing control of the state. Our road leads to greater freedom and opportunity. Their road leads to socialism and decline. (...)

The plan to run against some mythical threat of socialism has been underway for some time. As early as October 2018, the White House Council of Economic Advisers issued a report attacking it, with a follow-up chapter in the 2019 Economic Report of the President. More recently, well-known right-wing crackpot Dinesh D’Souza published a screed on the subject, the gist of which is that all liberals, progressives, and Democrats are socialists, as were the Nazis. Senator Rand Paul, Republican of Kentucky, has also published The Case Against Socialism, which one reviewer said “does not make a case against socialism, but it does make a convincing case against nepotism.” (Senator Paul is the son of former Congressman Ron Paul of Texas, for whom I worked in the 1970s.)

The essence of the Republican attack is to lie about the nature of socialism, grossly exaggerating its negative excesses while completely ignoring its positive effects. When they are forced to concede that some socialistic government programs–such as disease prevention or temporarily higher unemployment benefits–may be valuable, they will nevertheless insist that it must be resisted because it’s the first step on the slippery slope to totalitarianism. As Senator Tom Cotton, a Republican who represents the Confederate state of Arkansas, put it in a tweet: “Socialism may begin with the best of intentions, but it always ends with the Gestapo.”

Republicans assert, endlessly, that the Austrian economist F.A. Hayek proved that the welfare state leads inevitably to socialism and tyranny in his 1944 book, The Road to Serfdom. While Hayek’s theory may have been plausible in the midst of World War II, all the evidence since then thoroughly contradicts it. There is no evidence whatsoever that welfare states morph into total state control of the economy and produce a concomitant loss of freedom and prosperity. There is not a single case of this happening anywhere. Nor is there anything in Hayek’s theory to explain why socialism collapsed in the Soviet Union or why privatization rolled it back in places like Britain. (Ironically, Hayek’s relatively expansive view of government’s legitimate functions makes him a virtual socialist to some of today’s right-wingers.) (...)

Like Smith, the Founding Fathers understood that government has functions that go far beyond the night watchman state favored by those on the right today. Thomas Paine, whose pamphlet Common Sense underpinned the ideology of the American Revolution, was a virtual socialist. His most radical work, Agrarian Justice, proposed the revolutionary idea of a wealth tax to fund payments to citizens reaching maturity, a precursor to today’s idea of a basic income.

James Madison, principal author of the Constitution, agreed that providing income to the indigent was a core government function. He wrote in an 1820 letter:

To provide employment for the poor and support for the indigent is among the primary, & at the same time not least difficult cares of the public authority. In very populous Countries the task is particularly arduous. In our favored Country where employment & food are much less subject to failures or deficiencies the interposition of the public guardianship is required in a far more limited degree. Some degree of interposition nevertheless, is at all times and everywhere called for. (...)

Since they’re unable to run against the actual expansion of the American welfare state, GOP propagandists retreat into fantasy. Always missing from the Republican critique is any clear definition of socialism. This is intentional. Republicans know that the term “socialism” is unpopular with many Americans—although a growing percentage embrace it. Republicans also know that numerous programs they view as socialistic are nevertheless very popular with voters. President Harry Truman often made this point in his speeches. He said in 1952:

Socialism is a scare word [Republicans] have hurled at every advance the people have made in the last 20 years. Socialism is what they called public power. Socialism is what they called social security. Socialism is what they called farm price supports. Socialism is what they called bank deposit insurance. Socialism is what they called the growth of free and independent labor organizations. Socialism is their name for almost anything that helps all the people.

Conversely, Republicans never call their many tax giveaways to favored industrialists like Elon Musk “socialism.” In her brilliant book, The Entrepreneurial State, the economist Mariana Mazzucato demonstrated that the entire tech sector rests on a foundation of government-funded research and development that is almost never acknowledged.

In truth, Republicans aren’t opposed to socialism per se but only socialism that benefits poor people and minorities. Socialism for farmers and industrialists is just fine as far as they are concerned.

by Bruce Bartlett, The Big Picture |  Read more:
Image: uncredited
[ed. Mr. Bartlett is a former Republican who served as a domestic policy adviser to Ronald Reagan and as a Treasury official under George H. W. Bush.]

Did the Boomers Ruin America? A Debate

EZRA KLEIN: I’m Ezra Klein, and this is “The Ezra Klein Show.” (...)

I’ve been fascinated by the fight over the baby boomers. You maybe remember OK, boomer, this dismissal of boomer politics that got popular on the internet for a minute and drove boomers totally crazy. That came, of course, during Donald Trump’s presidency. And it reflected frustration in having our fourth boomer president.

And then it’s not like there was — well, there was a bit of a generational handover, actually. Joe Biden — he’s not a boomer. He’s born a few years before the boomers. But I don’t think that’s the kind of generational handover a lot of young people were looking for, which I think gets to the point of this generational frustration. There is a sense — and not just a sense, a reality — that America’s elder generations have kept a hammerlock on power. (...)

EZRA KLEIN: So I’ve been wanting to do a show on this. First, is it useful to talk about this at all? Generations are big and diverse. What’s the point in talking in categories of that size? But then also, what is the critique at its core? I mean, you don’t get a lot out of OK boomer. Whenever there’s this much anger, though, lasting for this much time and emerging in this many cultural forms, you got to assume there’s something real there, something worth trying to understand on its own terms.

But one thing about it is, it’s not just one critique of the boomers. There’s a left critique that’s more about economics and power, and then a right critique that, at least usually, is more about cultural libertinism and individualism and institutional decay. So I wanted to put these critiques together to see if they added up to something coherent. Or maybe it’s just a bunch of carping millennials. And I say that as an often carping millennial.

Jill Filipovic is a writer, commentator, a lawyer, and she’s the author of the book “OK Boomer, Let’s Talk: How My Generation Got Left Behind,” which is a very nice encapsulation of the economic case for millennial rage. Helen Andrews is a senior editor at The American Conservative and author of “Boomers: The Men and Women Who Promised Freedom and Delivered Disaster,” which is a pretty searing critique from the right. As always, my email is ezrakleinshow@nytimes.com. Here we go.

So welcome, both of you, to the show. Helen, I want to begin with you. Why is generational analysis valuable? I mean, we’re dealing with pretty arbitrary time periods. Generations, they contain multitudes. So why are boomers or any other age cohort a useful descriptive category for understanding American society?

HELEN ANDREWS: The clue that first got me thinking that the boomers might be worth analyzing as a generation rather than through historical events that they happened to be around for was that the 1960s was a global phenomenon. A lot of people attribute ’60s protests in the United States to the various issues that they centered around, things like the anti-war movement. But you saw the same kinds of student protests in countries that didn’t have a draft or in countries where they had completely different records in World War II. And the parent-child dynamic was just totally alien to what it was in the United States.

So that got me thinking that the ’60s might have been a product just of the youth generation having such demographic heft, there being just so many more young people around. And that was the reason the ’60s protests took the form they did and were so universal across the civilized world. And then I started following the boomers through their political career. And you saw the same kinds of coincidences across the globe as they came into power in the 1990s. You saw neoliberal triangulators, who were trying to reconcile the left and capitalism, and the same types of leaders like Tony Blair and Bill Clinton in different places.

So any phenomenon that is happening in countries that have very different histories and issue sets, but similar demographic bulges, I thought was an indication that generations were worth looking at as generations.

EZRA KLEIN: So I can buy that. So then, Jill, let’s say I’m a boomer who thinks my generation wasn’t really that bad. And I’m tired of everybody yelling at me. I mean, sure, every generation, we make a few mistakes. But ultimately, we boomers, we left the world better than we found it. And the problem is that millennials are just particularly self-pitying, and they just want to blame the fact that life is hard on everyone else. Convince me I’m wrong.

JILL FILIPOVIC: Well, that’s pretty much the same thing that people said about the boomers when they were young, right? There is a whole book written about them called “The Culture of Narcissism.” If you read Helen’s book, it certainly draws on a lot of the descriptions of boomers when they were young people. I think one thing that’s very poorly understood about the boomer generation — and perhaps this is me being slightly defensive of them — is that they’re an incredibly politically polarized generation.

So boomers, much more so than millennials, much more so than the silent generation, more so even than Gen Xers, are really split politically down the middle between liberals and conservatives. And I think what we’ve actually seen and what I hear, especially from liberal boomers, is the sense of, well, wait a minute. We were trying to make the world a better place. And then there were political forces who we didn’t vote for who may have been part of our cohort, who now you’re using to blame our entire generation.

There’s some fairness to that defensiveness. That said, I would say, liberal boomers kind of won the culture. Conservative and more moderate boomers won American politics. And so the generation-wide legacy, yes, does have some positives. But overwhelmingly, we’re now living on a planet that’s flooding and burning. So I think it’s a little hard to say that boomers left it better than they found it.

EZRA KLEIN: On the flooding and burning point — and I guess I’ll send this one to Helen, but it’s really for both — one thing I thought about, reading both of your books, is how much the boomers are actually a stand-in for technological change, some of which they generated and some of which they didn’t. I mean, the planet is burning. But the burning of fossil fuels as the way you power economies, I mean, that predates the boomers. And then, obviously, it grows during their heyday.

But a lot of the things that I think they get tagged for come from scientific advances that they weren’t even the ones to necessarily create. I mean, a lot of the sexual politics changes come from the pill. A lot of — and this is a theme of your book, Helen — a lot of social changes come from television. I mean, how much are boomers, Helen, simply the generation that happened to be largest and then in power when a lot of the electricity revolution’s innovations came into full flower?

HELEN ANDREWS: I don’t think you can blame technology for the way the world is today and the wreckage that the boomers left us. For example, when we talk about the world today being a lot tougher for millennials than it was for the boomers, one of the things we’re talking about is the loss of power on the part of the working class. Their wages are not growing the way that they used to in the days of the boomers. A one-income family can’t make it the way that they could in the time of the boomers.

Some of that is attributable to technology, but a lot of that is due to changes in what the boomers did to the left. That is, the boomers were the generation of the new left. And the reason they called themselves that is because they were rebelling against the old left. They deliberately wanted the left-wing party in the western democracies not to stand for working class people and unions, but rather to stand for identity politics type interests. The hinge moment in America for that is the reforms to the Democratic National Convention in 1972, when they nominated George McGovern.

The way that delegates were chosen was then tilted to favor identity politics. So the boomers made a choice to have their left-wing party champion identity politics, rather than working class people and unions. And so that’s the reason why the working class was then so vulnerable to these technological changes. The technological changes would have happened either way. But I think they would have had better defenders in the left-wing parties if the boomers hadn’t replaced the old left with their new left.

EZRA KLEIN: Is this what you think they did wrong? I mean, my understanding of your take on the boomers is that they unleashed a kind of cultural reckoning on America. And you do talk about the new left questions in the book, but you’re a Trump supporter. He’s not a huge fan of unions himself. Or is it your view that we should go back to a much stronger union and redistribution style politics? Is that the politics you want to see return?

HELEN ANDREWS: No, I would like to see the Republicans become the working class party of the future. Because I don’t think the Democratic Party as currently constituted is going to turn around and start championing their interests. And so I think there’s room for Republicans to be a little bit nicer to labor and to unions as a part of that realignment.

But realistically, Republicans protecting working class interests may have a different issue set than it did when the old left was championing unions in the 1940s and ’50s. It may look like having different positions on things like trade and immigration, which are actually areas where Trump and the working class were quite together.

EZRA KLEIN: Jill, give me your economic critique of the boomer legacy.

JILL FILIPOVIC: Yeah, so it’s interesting hearing what Helen says because it just strikes me as entirely ahistorical and the kind of polar opposite conclusion that I came to in researching my book. If you look at the political decisions that were made that really did gut the American middle class and working class, yes, Democrats are certainly not innocent parties here. But many of those decisions and many of those huge changes came about when boomers fueled the election of Ronald Reagan in 1980 and then again in 1984.

So you had Reagan who came in, increased tax havens for corporations, refused to increase the federal minimum wage, which we’re still arguing about, which has certainly damaged working class earning power, gutted union power and union membership. When you look at the ways in which the American economic landscape has changed, comparing boomers to millennials, one of the biggest differences is that when boomers were young, they saw their future as invested in.

So, when boomers were young adults, the federal government was spending $3 on investments in the future, things like infrastructure, education, and research, for every dollar it spent on entitlements. Now that’s flipped. So the federal government is spending $3 on entitlements, and as soon as boomers are all retired, that number will have ticked up to closer to $5, for every dollar it spends on future investments. So as boomers have gone through the course of their lives, they’ve seen the government work for them. Millennials really haven’t. We’ve been the ones stuck footing the bill.

And when it comes to this gap between the middle class and the working class and the degree to which working class earnings have really seen the bottom fall out, which is what’s happened over the past several decades, that’s been a pretty direct result of a systemic dismantlement of the kind of L.B.J. Great Society policies, of F.D.R.’s social welfare policies, of strong protections for unions. I mean, attacks on union membership and right-to-work laws are not Democratic inventions. Those were coming from Republicans, and often boomer Republicans.

So, from my view, it really is this shift to conservatism among baby boomers, and sort of Reagan conservatism in particular, that was then bolstered by this kind of ’90s Clinton-era centrist Democratic Party that really saw fit, I think, to compete on the cultural issues that Republicans made salient and did kind of cede ground on a lot of the most important economic issues that Americans needed to thrive.

by Ezra Klein, NY Times (Transcript - Ezra Klein Show Podcast) |  Read more:
Image: Ezra Klein Show

Tuesday, April 6, 2021


Chi Chi Rodriguez at the Masters - ’70s
[ed. It's Masters week.]

Predicting the Future of Prediction Markets

Sometimes economists are just flat-out wrong. According to economic theory, annuities and reverse mortgages should be very popular for managing risk and liquidity — yet both products struggle for mainstream acceptance. Another favorite of economists is prediction markets: contracts with payoffs contingent on some real-world event. Their future is also highly uncertain.

In essence, prediction markets let people “bet” on some feature of the economy, thereby creating a new financial derivative. A prediction market in gross domestic product, or perhaps in local rates of unemployment, could be a useful means of hedging risk. If you are afraid that GDP will fall, you could “short” GDP in a prediction market and thus protect your overall economic position, because your bet would pay out if GDP came in lower than expected.
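To make the hedging mechanics concrete, here is a minimal sketch (not from Cowen's column) of how a binary "GDP falls" contract can offset a loss elsewhere; the contract design, price, and loss figures are illustrative assumptions.

```python
# Hypothetical numbers only: a binary contract pays $1 if GDP comes in below
# a threshold, and costs its market price up front.
def hedged_outcome(gdp_falls: bool, contracts: int, price: float, business_loss: float) -> float:
    """Net result of buying `contracts` 'GDP falls' contracts at `price` each."""
    payout = contracts * (1.0 if gdp_falls else 0.0)   # each contract settles at $1 or $0
    cost = contracts * price                           # premium paid to open the hedge
    loss = business_loss if gdp_falls else 0.0         # what the downturn would cost you anyway
    return payout - cost - loss

# Suppose a downturn would cost you $10,000 and the market prices the event at 20 cents.
print(hedged_outcome(gdp_falls=True,  contracts=10_000, price=0.20, business_loss=10_000))  # -2000.0
print(hedged_outcome(gdp_falls=False, contracts=10_000, price=0.20, business_loss=0.0))     # -2000.0
```

Either way, the hedged position ends up down only the premium paid, which is the point of the hedge.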

Prediction markets are also a useful means of discovering information about what is likely to happen next. If you want to know who is likely to win the Super Bowl, is there any better place to look than the published betting odds? By the same reasoning, various interest rate futures markets offer clues about what the Federal Reserve might be planning. The value of having more and better public information is another reason to encourage prediction markets.

The big puzzle is why prediction markets haven’t taken off, at least not since the earlier 19th-century history of “bucket shops.” Part of the reason is regulatory constraints, but prediction markets have not succeeded in some other parts of the world without such constraints. Intrade.com, now defunct, was based in Ireland and created active and successful markets in sporting events and presidential elections. But most of its prediction markets remained fairly illiquid, due to lack of customer interest. (...)

A skeptic might say that demand is limited because there are already so many good and highly informative markets in other assets. In 2009, for instance, was a market necessary to predict how well the iPhone was going to do? The share price of Apple might have served to perform a broadly similar function.

The question, then, is which prediction markets might prove most useful. Nobel Laureate economist Robert J. Shiller has promoted the idea of prediction markets in GDP, but most people face major risks at a more local, less aggregated level. One of the risks I face, for example, concerns the revenue of the university where I teach. This year enrollments rose slightly even though U.S. GDP fell sharply. So a GDP-based hedge probably is not very useful to me.

How about a prediction market in local real-estate prices, so that home buyers and real-estate magnates may hedge their purchases? Maybe, but then the question is whether enough professional traders would be attracted to such markets to keep them liquid. So-called binary options, particularly when the bet is on the price of a financial asset, often have remained unfairly priced or manipulated, and are viewed poorly by regulators.

For a prediction market to take off, it probably has to satisfy a few criteria: general enough to attract widespread interest; important enough to matter; and unusual enough not to be replicable by trading in existing assets. The outcomes also need to be sufficiently well-defined that contract settlement is not in dispute. (...)

For all the obstacles facing prediction markets, there is cause for optimism about their long-run viability. There are many more financial assets and contracts today than a few decades ago, and such markets can be expected to increase. The internet lowers trading and monitoring costs, and that should make prediction markets easier to create.

by Tyler Cowen, Bloomberg |  Read more:
Image: Sébastien Thibault via Nature
[ed. See also: Tales from Prediction Markets (Misinformation Underload); and, The Power of Prediction Markets (Nature).]

The Therapy-App Fantasy

Talkspace is part of a growing field of services that promise mental-health care via smartphone. And unlike many of the problems tech start-ups have set out to solve, this one actually exists: It’s hard to find a therapist. Maybe you have insurance, so you look up a list of in-network providers, start cold-calling, and hope to reach someone with an opening. Maybe you ask for recommendations from friends and hope someone they know takes your insurance or has out-of-pocket rates you can afford. Maybe you don’t know anybody with a therapist and the prospect of getting one yourself seems risky or shameful. Maybe you don’t know anyone with a therapist because there aren’t any therapists around to see — approximately 33 percent of counties have no records of licensed psychologists.

Geographic distribution is just one of the ways the mental-health profession fails to match the people in need of care: Doing so would also require more therapists who speak Spanish, more therapists of color, more therapists with LGBTQ expertise. Even in a therapist-rich environment like New York City, intangibles intervene. How do you find someone to whom you feel comfortable saying things you may feel uncomfortable saying at all? People seeking therapy face all these challenges even in the best of times, and these are not the best of times. According to a CDC report released last summer, 40 percent of American adults were dealing with mental-health or substance-abuse issues in late June, with younger adults, people of color, essential workers, and unpaid caregivers disproportionately hard-hit.

Therapists have long faced the question of how to provide their care to more people without diminishing its quality. In 1918, amid the catastrophe of the First World War, Sigmund Freud gave a lecture in which he proposed using free clinics for mass mental-health care — even as he acknowledged that doing so might require his fellow psychoanalysts to “alloy the pure gold” of their usual methods. “We’ve been in a crisis of access to mental-health care really since mental-health care professionalized,” said Hannah Zeavin, a professor at UC Berkeley whose forthcoming book, The Distance Cure, traces the history of remote therapy from Freud’s letters to crisis hotlines and up through today’s apps.

Accelerated by the pandemic, Zeavin’s subject has gone from an academic curiosity to a growth sector. Businesses in the “digital behavioral health” space raised $1.8 billion in venture-capital funding last year, compared to $609 million in 2019. In January, Talkspace announced plans to go public this year in a $1.4 billion SPAC deal. A presentation for investors managed to be simultaneously grim and upbeat in outlining the “enormous” market for its services: More than 70 million Americans suffer from mental illness, according to Talkspace, and the country has seen a 30 percent increase in the annual suicide rate since 2001. Talkspace says 60 percent of its users are in therapy for the first time. (...)

Much of what appears if you search “therapy” in the App Store does not provide the services of a human therapist. Some of it does not address mental health at all, in the strict sense: It is the digital equivalent of a scented candle, wafting off into coloring apps and relaxation games. Many services occupy an area somewhere in between professional care and smartphone self-soothing. Reflectly, for example, bills itself as “the World’s First Intelligent Journal” and promises to use the principles of positive psychology, mindfulness, and cognitive behavioral therapy to help users track their moods and “invest in” self-care. “Just like a therapist!! But free!!” reads one review. (Reflectly costs $9.99 a month.) Sayana, an AI chatbot, is personified as a pastel illustration with a dark bob and cutoff jeans; she also tracks the user’s mood and offers tips (“Observe your thoughts as they flow, just like the river”) to guide users on a journey through “the world of you.” “This is like your own little therapist and I love it!” reads one five-star review. Youper (mood tracking, chatbot, lessons) sells “Self-Guided Therapy”; Bloom (mood tracking, chatbot, lessons) is “the world’s first digital therapist.”

But chatbots and mood scores aren’t generally what people are imagining when they say, for example, that their ex needs therapy. “Therapy” here conjures an intervention to fix the personality and save the soul. Different people want different things from therapy. They want to break bad habits, work through trauma, vent about their boss, their boyfriend, their mom. They want to feel better (always easier said than done). They want someone to talk to, and they want some tools. When I resumed seeing my longtime therapist over video, I wanted her to tell me whether the problem was my brain or the pandemic — I needed someone I trusted to judge the situation. That is to say, I wasn’t sure what I needed, but I wanted the help of someone who knew better. And this — expert counsel in the palm of your hand — is what the high end of an emerging class of therapy apps claims to deliver.

“In 2021, mental health is finally cool,” declares a podcast ad for BetterHelp, one of the apps promising access to trained therapists that has promoted itself to consumers most aggressively. “But therapy doesn’t have to be just sitting around talking about feelings. Therapy can be whatever you want it to be.”

With a therapy app, more blatantly than in most health-care transactions, the patient is a customer, and the customer is always right. But this assumes patients know what they want and need and that getting it will make them feel better. These are not expectations most therapists would necessarily share — nor are they ones therapy apps are reliably prepared to fulfill.

A D.C.-area Talkspace user named Cait remembered getting off to a more auspicious start. “I was so excited because they give you all these therapists,” she said. “It was almost like a dating app.” Cait had signed up for the service after talking to a satisfied friend with a supportive Talkspace therapist who texted her all the time. Cait had recently started medication for depression; it helped, but she wanted to speak with someone regularly, and even with her insurance, she was worried about cost. She saw that Talkspace was offering a New Year’s deal at the beginning of 2021. If she used that and paid for six months up front, she could get half a year of therapy for $700. This seemed to her like quite a deal — far cheaper than paying out of pocket for conventional therapy but also far cheaper than what Talkspace might otherwise have been. While mood trackers and mindfulness apps can cost $10 or $15 a month, therapy apps like Talkspace, BetterHelp, Brightside, and Calmerry — ones that connect users to an actual licensed human therapist — cost hundreds of dollars. Without discounts (or subscribing for months at a time), a one-month Talkspace plan that includes weekly video sessions runs nearly $400. Particularly because the standard length of these visits is just 30 minutes, users are paying hourly rates that can approach those of in-person care.
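A rough back-of-the-envelope sketch (mine, not the article's) of the hourly rate implied by the figures quoted above, treating the approximate numbers as exact:

```python
# Figures quoted above, treated as exact for illustration:
# roughly $400/month for one 30-minute video session per week.
monthly_price = 400              # dollars
sessions_per_month = 52 / 12     # weekly sessions averaged over a month (~4.33)
hours_per_month = sessions_per_month * 0.5

print(round(monthly_price / hours_per_month))  # ~185 dollars per hour of session time
```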

Of course, many users aren’t paying out of pocket because, for many apps, users aren’t the customer at all. These apps, like Ginger and Lyra, focus on selling their services to employers or insurance companies.

by Molly Fischer, The Cut | Read more:
Image: Pablo Rochat

Monday, April 5, 2021


Kurt Otto-Wasow, Ile de la Cité, Paris 1950s
via:

Deconstructing That $69 Million NFT

“NFTs” have hit the mainstream news with the sale of an NFT-based digital artwork for $69 million. I thought I’d write up an explainer. Specifically, I deconstruct that huge purchase and show what was actually exchanged, down to the raw code. (The answer: almost nothing).

The reason for this post is that every other description of NFTs describe what they pretend to be. In this blogpost, I drill down on what they actually are.

Note that this example is about “NFT artwork”, the thing that’s been in the news. There are other uses of NFTs, which work very differently than what’s shown here.

tl;dr

I have a long bit of text explaining things. Here is the short form that allows you to drill down to the individual pieces.
  • Beeple created a piece of art in a file
  • He created a hash that uniquely, and unhackably, identified that file
  • He created a metadata file that included the hash to the artwork
  • He created a hash to the metadata file
  • He uploaded both files (metadata and artwork) to the IPFS darknet decentralized file sharing service
  • He created, or minted, a token governed by the MakersTokenV2 smart contract on the Ethereum blockchain
  • Christies created an auction for this token
  • The auction was concluded with a payment of $69 million worth of Ether cryptocurrency. However, nobody has been able to find this payment on the Ethereum blockchain; the money was probably transferred through some private means.
  • Beeple transferred the token to the winner, who transferred it again to this final Metakovan account
Each of the links above allows you to drill down to exactly what’s happening on the blockchain; a short sketch of the hashing step follows below. The rest of this post discusses things in long form.
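To make the hashing step above concrete, here is a minimal sketch (mine, not Graham's) of how a file's cryptographic fingerprint can be computed. SHA-256 is assumed as the digest, and the filename is a placeholder.

```python
import hashlib

def file_sha256(path: str) -> str:
    """Return the SHA-256 digest of a file as a hex string."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MB chunks
            h.update(chunk)
    return h.hexdigest()

# "everydays.jpg" is a placeholder filename. Change even one byte of the file
# and the digest changes completely, which is why a hash can stand in for the
# artwork itself when it is embedded in the token's metadata.
print(file_sha256("everydays.jpg"))
```

The 64-character string quoted in the editor's note at the end of this post has the same form as such a digest.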

Why do I care?

Well, you don’t. It makes you feel stupid that you haven’t heard about it, when everyone is suddenly talking about it as if it’s been a thing for a long time. But the reality is, they didn’t know what it was a month ago, either. Here is the Google Trends graph to prove this point — interest has only exploded in the last couple of months:


The same applies to me. I’ve been aware of them (since the CryptoKitties craze from a couple years ago) but haven’t invested time reading source code until now. Much of this blogpost is written as notes as I discover for myself exactly what was purchased for $69 million, reading the actual transactions.

So what is it?

My definition: “Something new that can be traded on a blockchain that isn’t a fungible cryptocurrency”.

In this post, I’m going to explain in technical details. Before this, you might want to pause and see what everyone else is saying about it. You can look on Wikipedia to answer that question, or look at the following definition from CNN (the first result when I google it):
Non-fungible tokens, or NFTs, are pieces of digital content linked to the blockchain, the digital database underpinning cryptocurrencies such as bitcoin and ethereum. Unlike NFTs, those assets are fungible, meaning they can be replaced or exchanged with another identical one of the same value, much like a dollar bill.
You can also get a list of common NFT systems here. While this list contains a lot of things related to artwork (as described in this blogpost), many aren’t. For example, CryptoKitties is an online game, not artwork (though it too allows ties to pictures of the kitties).

What is fungible?

Let’s define the word fungible first. The word refers to goods you purchase that can be replaced by an identical good, like a pound of sugar, an ounce of gold, a barrel of West Texas Intermediate crude oil. When you buy one, you don’t care which one you get.

In contrast, an automobile is a non-fungible good — if you order a Tesla Model 3, you won’t be satisfied with just any car that comes out of the factory, but one that matches the color and trim that you ordered. Art work is a well known non-fungible asset — there’s only one Mona Lisa painting in the world, for example.

Dollar bills and coins are fungible tokens — they represent the value printed on the currency. You can pay your bar bill with any dollars.

Cryptocurrencies like Bitcoin, ZCash, and Ethereum are also “fungible tokens”. That’s where they get their value, from their fungibility.

NFTs, or non-fungible tokens, extend this idea to trading something unique (non-fungible, not the same as anything else) on the blockchain. You can trade them, but each is unique, like a painting, a trading card, a rare coin, and so on.

This is a token — it represents a thing. You aren’t trading the artwork itself on the blockchain, but a token that represents the artwork. I mention this because most descriptions of NFTs say that you are buying artwork — you aren’t. Instead, you are buying a token that points to the artwork.

The best real world example is a receipt for purchase. Let’s say you go to the Louvre and buy the Mona Lisa painting, and they give you a receipt attesting to the authenticity of the transaction. The receipt is not the artwork itself, but something that represents the artwork. It’s proof you legitimately purchased it — that you didn’t steal it. If you ever resell the painting, you’ll probably need something like this proving the provenance of the piece.
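Because the thing that changes hands is a token recorded on the chain, anyone can look up who currently holds it and what metadata it points to. Here is a minimal sketch of such a lookup using the web3.py library; it is not taken from Graham's post, the RPC endpoint, contract address, and token ID below are placeholders, and the only assumption is that the contract exposes the standard ERC-721 ownerOf and tokenURI calls.

```python
from web3 import Web3

# Placeholders: substitute a real Ethereum RPC endpoint, the NFT contract's
# address, and the token ID you want to inspect.
RPC_URL = "https://example-ethereum-node.invalid"
CONTRACT_ADDRESS = "0x0000000000000000000000000000000000000000"
TOKEN_ID = 1

# Minimal ERC-721 ABI: just the two read-only calls we need.
ERC721_ABI = [
    {"name": "ownerOf", "type": "function", "stateMutability": "view",
     "inputs": [{"name": "tokenId", "type": "uint256"}],
     "outputs": [{"name": "", "type": "address"}]},
    {"name": "tokenURI", "type": "function", "stateMutability": "view",
     "inputs": [{"name": "tokenId", "type": "uint256"}],
     "outputs": [{"name": "", "type": "string"}]},
]

w3 = Web3(Web3.HTTPProvider(RPC_URL))
token = w3.eth.contract(address=CONTRACT_ADDRESS, abi=ERC721_ABI)

print("current owner:", token.functions.ownerOf(TOKEN_ID).call())
print("metadata URI:", token.functions.tokenURI(TOKEN_ID).call())
```

The tokenURI result is typically a pointer (often an IPFS link) to the metadata file, which in turn contains the hash of the artwork, as described in the list above.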

Show me an example!

So let’s look at an example NFT, the technical details, to see how it works. We might as well use this massive $69 million purchase as our example. Some news reports describing the purchase are here: [1] [2] [3].

None of these stories say what actually happened. They say the “artwork was purchased”, but what does that actually mean? We are going to deconstruct that here. (The answer is: the artwork wasn’t actually purchased).

by Robert Graham, Security Boulevard |  Read more:
Image: Security Boulevard
[ed. FYI: the cryptographic hash for the Beeple painting is (apparently): 6314b55cc6ff34f67a18e1ccc977234b803f7a5497b94f1f994ac9d1b896a017]

Shameless

In an episode of the 11th season of Shameless, Ian Gallagher (played by Cameron Monaghan) experiences wage theft at his new “shitty ass minimum wage job hauling boxes around a warehouse.” He discovers that management has paid him for 39 hours when he worked for 45, deducting money for locker fees, safety vests, and bathroom breaks. “Welcome to the working man’s America,” a co-worker tells him. Two episodes later, Ian quits the job to run drug money instead.

Shameless, Showtime’s tragicomic study of the ever-expanding American lumpenproletariat, ends this year after over a decade of being the only series of its kind on television, an anti-Horatio Alger tale that lays bare the gaping maw where our social safety net should be. The Gallaghers, a dysfunctional South Side Chicago family, are different from other working-class TV families like the Bunkers from All in the Family, or the Connors from Roseanne. As show creator Paul Abbott told the New York Times, “it’s not blue collar, it’s no collar.”

The show’s characters are driven by their basic material needs—nobody has the luxury of becoming self-actualized. They rotate between manual labor and food service jobs, prisons and homeless shelters, supplementing minimum wage or government aid—both insufficient—with mooching, begging, theft, fraud, and several varieties of sex work. Shameless is uncouth, grotesque, and abject where other premium cable hits of the past 10 years have been slick, aspirational fantasias populated by the wealthy. The series is an indictment of a system in which both political parties have abandoned a desperate underclass. When it ends, we’ll lose a realistic reflection of the ways that a growing number of Americans get by.

Such reflections have always been hard to find in mainstream mass media. Early Hollywood’s system of self-censorship, known as the Motion Picture Production Code, forbade depictions of crime on film, unless the criminals were unsympathetic characters who suffered for their sins. Code rules also discouraged portrayals of illicit drugs and the “selling of a woman’s virtue.” While the code officially ended in 1968 and didn’t apply to television, its spirit haunted the small screen for decades, clashing with audience appetites for stories about drugs, prostitution, and other types of crime—the vicarious experience of watching characters violate social norms. The challenge for TV writers has been to give viewers what they want, while simultaneously reinforcing the traditional middle-class values that have undergirded the genre since the 1950s. How to do this? Make crime ubiquitous, but poverty invisible.

At the dawn of the 21st century, a study revealed that one-third of all prime-time TV shows revolved around crime. Curiously, it also found that TV writers gave the majority of criminal characters professional or managerial jobs, or else left unknown the matter of their work and socioeconomic status. The same holds true today. The fictional criminals we watch on shows like Billions or various “copaganda” programs are frequently white collar, or they’re rendered somehow class-less. The narratives focus much more on individual pathology than on material conditions. Television is also more likely to portray violent crime—e.g., rape and murder—over the more quotidian crimes of survival, like shoplifting, petty drug dealing, or offering the occasional hand job for cash, the small informal ways of making ends meet in an era of suppressed wages and skyrocketing rents. Few television programs have engaged, politically and ideologically, with issues of class and labor. Fewer still have explored how (and why) poor people are pushed into criminalized underground economies.

Enter Shameless, which debuted in 2011, adapted from a British series of the same name. When showrunner John Wells pitched network executives, he had to fight suggestions to set the series in the American South or in a trailer park, those well-worn tropes used to telegraph deprivation in pop culture. Instead, Shameless takes place in a multi-racial city whose denizens have been left behind by gentrification but who see survival—against all odds and by any means necessary—as a point of familial pride. The Gallagher offspring are neglected by their chaotic parents, a mother with bipolar disorder and a father with chronic alcoholism, and so must fend for themselves in a crowded ramshackle home. Nearly every move they make in hopes of bettering their lot is thwarted by a byzantine public welfare system and an indifferent neoliberal state. They have no bootstraps, no boots, and for one character who suffers a workplace accident followed by a DIY amputation, not even a full set of toes.

You couldn’t find a better illustration of the concept, theorized by Marx and Engels, of the lumpenproletariat, the prefix of which translates to “ragged” or “rabble.” In 19th century Europe, the lumpen were those with “questionable means of support and of dubious origin,” like beggars, gamblers, pickpockets, and prostitutes, who inhabited the streets or urban slums. This so-called “surplus population” lived on the “crumbs of society” in part because their drug and alcohol dependencies made them unsuitable for traditional labor. In contemporary America, members of the lumpen fall through the cracks of a system that happily starves people with disabilities and deprives the poor of mental health and drug treatment services, leaving them cycling in and out of formal employment and underfunded institutions. They’re depraved, as the Sondheim lyric goes, on account of they’re deprived.

by Sascha Cohen, Current Affairs |  Read more:
Image: Shutterstock

Big Chunks of Corporate Tax Cuts End Up in Executives' Pockets

"This new law will provide tax incentives for companies to expand and create jobs by investing in plants and equipment,” proclaimed President George W. Bush in 2002 as he signed the Job Creation and Worker Assistance Act. “This measure will mean more job opportunities for workers in every part of our country.”

As Bush promised, the bill included significant corporate tax cuts. Further reductions in corporate taxes would follow with the American Jobs Creation Act of 2004 and the Tax Cuts and Jobs Act of 2017. The rhetoric in each instance was the same: Purportedly, these tax cuts were not for the sake of enriching corporate management but for employing American workers — hence the word “jobs” in all three titles. These companies would take the extra money and invest it in the workforce, creating new and better opportunities for regular people.

That is not what happened. In reality, a new academic study finds, a significant fraction of recent corporate tax breaks simply went to increased pay for top corporate executives. The paper, currently undergoing peer review before publication, is the first comprehensive academic examination of its kind. Its author, Grinnell College assistant professor of economics Eric Ohrn, used a database of top-level compensation at publicly traded U.S. firms to analyze the tax cuts’ impact on executive pay.

If Ohrn is correct, the reductions in the corporate income tax over the past two decades will reward America’s corporate royalty with hundreds of billions of dollars between now and 2030. Ohrn attributes this extraordinary payday to executives’ successful use of “rent-seeking,” an economic concept that describes individual and corporate use of power to capture wealth without adding any new value themselves.

Ohrn examined the pay of 31,879 executives at 2,794 publicly traded companies from 1998 to 2012. His results showed that for every dollar in reduced corporate taxes from two types of tax cuts, compensation for the top five executives at the companies increased by 15 to 19 cents.

Though there is little data readily available on lower-ranking executive pay, higher pay at the top almost certainly pulls up executive compensation on the corporate rungs just underneath. Dean Baker, senior economist at the Center for Economic and Policy Research in Washington, D.C., points out that if the next 20 executives at each company in Ohrn’s study received in aggregate half the increased pay of the top five executives, it would mean that “between 22% and 37% of the money gained from a tax break went to 25 of the highest-paid people in the corporate hierarchy.”

The amount of money at stake is gigantic.

American business has for decades been conducting a war against corporate taxation. The rationale for cuts is always the same: Companies and their favored politicians claim that they have opportunities to make investments in new technologies and plants that would lead to better jobs and higher pay for workers. Unfortunately, thanks to overbearing corporate taxes, they simply can’t afford to do it.

Neither part of this story is true. There is no discernible connection between levels of corporate profits and investment. Moreover, even if there were, it would likely make little difference for average workers: Higher productivity led to higher median wages for regular people during the three decades after World War II, but that link was broken in the 1970s. Since then, productivity has continually increased but has barely shown up in the paychecks of regular people. Instead, the greater wealth has gone to those at the top of the pay scale, such as corporate executives.

But the fact that the case for corporate tax cuts makes no sense has not impeded its success. During the 1960s, the federal government took in an average of about 3.7 percent of the gross domestic product via corporate income taxes. By the late 1990s, it had fallen to 2.1 percent. The Congressional Budget Office now estimates that it will average 1.3 percent from 2021-2030 — that is, 0.8 percentage points of GDP less than 20 years ago.

The decrease may not seem like much on its face, but the CBO projects that the total U.S. GDP over the next 10 years will be $273 trillion. If the corporate income tax were still at the 2.1 percent of GDP level of the late 1990s, it would bring in $5.73 trillion. At the current projected rate of 1.3 percent, it will be $3.55 trillion. Corporations will save $2.18 trillion. 
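A quick check of the arithmetic in the two paragraphs above, using the CBO figures as quoted (a back-of-the-envelope sketch, not from the article):

```python
# Figures quoted above: $273 trillion of projected GDP over the next 10 years,
# corporate tax receipts of 2.1% of GDP in the late 1990s vs. a projected 1.3% now.
gdp_total = 273          # trillions of dollars, 10-year CBO projection
late_1990s_rate = 0.021
projected_rate = 0.013

print(gdp_total * late_1990s_rate)                      # ~5.73 trillion at the old rate
print(gdp_total * projected_rate)                       # ~3.55 trillion at the projected rate
print(gdp_total * (late_1990s_rate - projected_rate))   # ~2.18 trillion saved by corporations
```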

by Jon Schwarz, The Intercept |  Read more:
Image: Soohee Cho/The Intercept, Getty Images
[ed. No surprise here. And, since Biden's infrastructure bill is supposed to be paid for with higher corporate taxes, we're already seeing expected pushback (obviously from Republicans, but also from at least one Dem). Here's Joe Manchin today:  "In a radio interview with West Virginia's MetroNews, Manchin said raising the corporate tax rate to 28 percent, as envisioned in the plan, is just too high, though he did say he could get behind a hike to 25 percent.

The senator claimed he wasn't alone, either. "There's six or seven other Democrats who feel very strongly about this," he said. "We have to be competitive, and we're not going to throw caution to the wind."]

Sunday, April 4, 2021