Saturday, April 24, 2021

PGA's Player Impact Program Rewards Most Popular Players

There are 244 players listed in the current FedEx Cup points standings. The PGA Tour’s new $40 million bonus pool for high-profile players, dubbed the Player Impact Program, will impact only 10 of them. It is, in theory, a way to reward the biggest movers of the proverbial needle without taking anything away from the “other” 234 players in the tour ecosystem.

“It doesn’t really matter to me,” said one top-50 player of the program, details of which were first reported on Tuesday by Golfweek. “Good for the big guys, doesn’t matter to the little guys. Maybe if I win a major, I’ll have a chance.”

This, essentially, is the best reaction the tour could hope for. But the highly unusual formula to determine the players who will receive the money, and the unprecedented nature of the tour paying members for what is only tangentially related to their on-course results, drew mixed reactions from players across the Q-score and Meltwater Mention spectrum.

Those terms, by the way? Get used to them. They’re part of an algorithm the tour will use to rank players on their “Impact Score.” The goal, a PGA Tour spokesman told Golf Digest, is to “recognize and reward players who positively move the needle.”

The five criteria to identify these players are as follows:
  • Popularity on Google search
  • Nielsen Brand Exposure rating, which measures the value a player delivers to sponsors via his total time featured on broadcasts
  • Q-rating, a metric of the familiarity and appeal of a player’s brand
  • MVP rating, a measure of how much engagement a player’s social media and digital channels drive
  • Meltwater mentions, or the frequency with which a player is mentioned across a range of media channels
Noticeably absent from the criteria is any direct measure of on-course success. The initial report mentioned FedEx Cup standing would be incorporated into the calculations, but the PGA Tour confirmed it was not actually part of the formula.
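
How such an “Impact Score” would actually be computed is not public; the tour has disclosed the five criteria but not the weights or normalization it applies. Purely as a rough illustration, here is a minimal sketch that min-max normalizes each criterion across the field and averages them with equal (assumed) weights. Every name and structure in it is hypothetical:

```python
from dataclasses import dataclass

@dataclass
class PlayerMetrics:
    name: str
    google_search: float       # popularity on Google search
    nielsen_exposure: float    # value delivered to sponsors via broadcast time
    q_rating: float            # familiarity and appeal of the player's brand
    mvp_rating: float          # engagement driven by social/digital channels
    meltwater_mentions: float  # frequency of mentions across media channels

CRITERIA = ["google_search", "nielsen_exposure", "q_rating",
            "mvp_rating", "meltwater_mentions"]

def impact_ranking(players):
    """Rank players by an equal-weight average of min-max normalized criteria."""
    bounds = {c: (min(getattr(p, c) for p in players),
                  max(getattr(p, c) for p in players)) for c in CRITERIA}

    def normalized(p, c):
        lo, hi = bounds[c]
        return 0.0 if hi == lo else (getattr(p, c) - lo) / (hi - lo)

    scores = {p.name: sum(normalized(p, c) for c in CRITERIA) / len(CRITERIA)
              for p in players}
    # The $40 million pool would then be split among the top 10 in this ordering.
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```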

A program of this kind had been in discussion for multiple years, and the Player Advisory Council always understood the value in rewarding the tour’s highest-profile players.

“I had no issue with it,” Billy Horschel, a PAC member, told Golf Digest. “When you look at it, there’s maybe 10 to 30 guys that really push the tour and bring in the money, have a transcendent personality, get a lot of attention. They’re the reason we play for as much as we do. We don’t reward mediocrity.”

The actual implementation of the program is widely seen as a response to the Premier Golf League, a potential rival to the PGA Tour that garnered significant attention in early 2020 with the promise of offering a guaranteed-money structure to entice away top players. But the upstart venture, which was backed by Saudi Arabian financiers, lost steam when several stars—led by current PAC chairman Rory McIlroy, the first marquee player to publicly denounce the PGL—pledged loyalty to the PGA Tour.

“There’s a little bit of envy [among the rank and file membership],” said one multiple-time PGA Tour winner about the program, which has been in place since January. “That it’s not fair, that it’s using $40 million not to better our game or our sport or the tour, that they’re just giving $40 million to the top 10 players to prevent them from playing in another league, which is the absolute worst reason to do it. If you want to give it to them because they deserve it, that’s one thing. To do it to prevent them from making an irrational decision, I feel like, is the wrong reason to do it.”

by Daniel Rapaport, Golf Digest | Read more:
Image: WL
[ed. Bad idea. Everything's becoming transactional these days.]

Friday, April 23, 2021

Lesson From the Old New Deal: What Economic Recovery Might Look Like in the 21st Century

When the Green New Deal reemerged into headlines in November 2018, unemployment in the US sat at 3.7 percent. Even supporters of the program voiced warranted skepticism about its viability. Sure, the climate crisis is important, but the government hardly ever spends huge sums on big social programs anymore—least of all when the economy appears to be doing relatively well, by conventional accounting. The window for a massive stimulus opens when there’s a recession, and we weren’t in one. Times have obviously changed since then, although the path to an ambitious climate response remains far from certain.

Joe Biden will be the president by the time this book is released, having won decisively in an election that—given the blood on Trump’s hands—should by all accounts have been a blowout. Instead, Trump collected ten million more votes than in 2016, and Democrats lost seats where they were expected to gain them. After run-off elections in Georgia, the party managed to win back control of the Senate, which it holds by the narrowest of margins. Biden was pushed by movements during his campaign to adopt a climate platform more ambitious than the one he ran on in the primary. But his administration will be hard-pressed to get any of that through Congress, left mainly to find creative uses for the executive branch—that is, if he decides to treat his $2 trillion commitment to a green-tinted stimulus as anything more than lip service to progressives and isn’t completely shut down by the 6-3 right-wing majority on the Supreme Court. Democrats’ underwhelming performance in 2020, moreover, doesn’t bode well for winning back more power in upcoming elections. If anything, there’s much more to be lost.

Understanding what the road toward anything like a Green New Deal looks like now, when all manner of crises are boiling over, means taking its namesake seriously. The New Deal—in all its deep flaws and contradictions—was more than just a big spending package that helped to drag the US out of the Great Depression. It reimagined what the US government could do, what it was for, and who it served. To effect such a drastic sea change in this country’s politics, it did something climate policy in the US has historically struggled with: it made millions of people’s lives demonstrably better than they would have been otherwise. That, in turn, helped solve the other big dilemma facing a Green New Deal and just about any major progressive legislative priority: the tangible mark New Deal programs left in nearly every county in the US helped to build a sturdy Democratic electoral coalition that could bat off challenges from the right and endure for decades. Even as many of its gains have been clawed back by a revanchist right, hallmarks like Social Security remain so broadly popular that even the GOP has stopped trying to go after them. A Green New Deal should aim even higher.

As is true today, the bar for successful leadership some 90 years ago was pretty low. A very rich man with even richer friends, Herbert Hoover was mostly blind to the effects of the Great Depression on working people and for a while denied there was any unemployment problem at all. Before becoming president, Hoover had made his fortune in mining, transforming himself from poor Quaker boy to lowly engineer to magnate. He gave away large chunks of his fortune to charity and fancied himself both a man of the people and a magnanimous captain of industry. Hoover assumed his fellow businessmen were philanthropic types, too. As he would find out in the waning days of his administration, America’s businessmen might fund libraries and museums, but they had neither the will nor the ability to fix the problem they had helped create. The Depression defined and destroyed his administration and nearly took down the whole concept of liberal democracy with it.

In May 1930—with unemployment climbing sharply—Hoover assured the US Chamber of Commerce that “I am convinced we have now passed the worst. The depression is over.” That December, his State of the Union address promised that “the fundamental strength of the Nation’s economic life is unimpaired,” blaming the Depression on “outside forces” and urging against government action.

“Economic depression,” he said then, “can not be cured by legislative action or executive pronouncement. Economic wounds must be healed by the action of the cells of the economic body—the producers and consumers themselves.” Ideologically opposed to the idea of state intervention in business, Hoover that year had settled on a compromise: the Emergency Committee for Employment, created to gently nudge the private sector into putting 2.5 million people back to work through local citizens’ relief committees, composed mostly of local officials and business executives. After several months it hadn’t worked; members of the committee could point to no evidence that it had created any jobs at all. Committee head Arthur Woods petitioned the White House to create a public works program with federal funding instead. Hoover refused, and the committee withered away shortly afterward as unemployment continued to skyrocket. Its replacement was an advertising campaign coaxing individuals to give to charity. Announcing the plan via radio address, Hoover bellowed that “no governmental action, no economic doctrine, no economic plan or project can replace that God-imposed responsibility of the individual man and woman to their neighbors.” Just before the 1932 election, Hoover warned that a New Deal—what Franklin Roosevelt was campaigning on—would “destroy the very foundations of our American system” through the “tyranny of government expanded into business activities.”

Hoover had a relatively successful career up until the crash, with a well-regarded run as secretary of commerce that included his successful management of the Great Mississippi Flood of 1927 by marshaling public and private resources toward recovery. That Hoover is widely remembered as a loser is thanks mostly to who and what he lost to. Roosevelt’s blowout victory in the 1932 election—where he won 42 of 48 states—ushered in a profound change in American life. With it came 14 years of uninterrupted, one-party control over the White House and both chambers of Congress, secured not by the kinds of authoritarianism that were common through that era, and which well-heeled American elites mused might be needed, but by democratically elected Democratic majorities. Setting aside two brief interruptions just after the end of World War II, Democratic control would extend for a total of 44 years in the Senate and 58 years in the House.

Until he left office, Hoover refused to budge on his overall approach, as he would through the rest of his life. He pleaded with Roosevelt to denounce the agenda he had just run on, which included such things as widespread unemployment insurance, a job guarantee for the unemployed, tackling soil erosion, and putting private electric utilities into public hands. As the financial system collapsed, the unemployment rate floated around 25 percent, and fascism was on the march in Europe, Hoover did nothing. Federal Reserve chairman Eugene Meyer begged him to reconsider and declare the bank holiday he knew that Roosevelt was already planning as president-elect. “You are the only one with the power to act. We are fiddling while Rome burns,” he told Hoover. The president was unmoved: “I have been fiddled at enough and I can do some fiddling myself.”

Hours after taking the oath of office, Roosevelt and his top advisers embarked on a marathon session to save and restore faith in a banking system on the verge of collapse. Within 36 hours, the administration declared a nationwide bank holiday. Before it ended, on the afternoon of March 9, Roosevelt spent two hours presenting one of the earliest New Deal programs to his closest advisers. It would be a jobs program, he explained, that would “take a vast army of these unemployed out to healthful surroundings,” doing the “simple work” of forestry, soil conservation, and flood control. By that evening, the program’s final report explains, the proposal was drafted “into legal form” and placed on the president’s desk. At ten, he convened with congressional leaders who brought it to Congress on March 21. It was signed into law on March 31, and the first recruits of the Civilian Conservation Corps (CCC) were taking physicals by April 7 before being bused from their homes in New York City to Westchester County, freshly issued clothes in hand.

By July, the program had established 1,300 camps for its 275,000 enrollees. Between 1933 and its end in 1942, the CCC’s workers—average age 18.5, serving between 6 months and 2 years—built 125,000 miles of road, 46,854 bridges, and more than 300,000 dams; they strung 89,000 miles of telephone wire and planted 3 billion trees. Among the most expansive and maligned of New Deal programs, the Works Progress Administration—derided as full of boondoggles and government waste—built 650,000 miles of roads, 78,000 bridges, and 125,000 civilian and military buildings; WPA workers served 900 million hot lunches to schoolchildren, ran 1,500 nursery schools, and put on 225,000 concerts. They produced 475,000 works of art and wrote at least 276 full-length books. From 1932 to 1939, the size of the federal civil service grew from 572,000 to 920,000.

The WPA’s predecessor, the Civil Works Administration, created 4.2 million federal jobs over the course of a single winter. Much of that work was in construction, but the program also employed 50,000 teachers so that rural schools could remain open, rewilded the Kodiak Islands with snowshoe rabbits, and excavated prehistoric mounds, the results of which ended up in the Smithsonian. In the first year of its operation, 1939, the Civil Aeronautics Board built 300 airports. They did it all without so much as a cell phone or computer.

Like the original, a Green New Deal won’t—if it’s successful—be a discrete set of policies so much as an era and style of governance. It will be the basis of a new social contract that sets novel terms for the relationship between the public and private sector and what it is that a government owes its people. Likewise, the New Deal was designed—learning as it went—to solve a problem the United States had no blueprint for: creating a welfare state capable of supporting millions of people essentially from scratch and with a wary eye toward those countries abroad that were handling a catastrophic economic meltdown in very different, far crueler ways. The New Deal might be best described by a spirit of what Roosevelt referred to as “bold, persistent experimentation”: flawed, contradictory, ever-evolving, and very, almost impossibly big. “It is common sense,” he said in the same speech, “to take a method and try it: if it fails, admit it frankly and try another. But above all, try something.” More than giving bureaucrats carte blanche to move fast and break things, the New Deal crafted a container in which innovation and experimentation could take place, providing a combination of ample public funds and rigorous standards, all to be overseen by a set of dogged administrators. As Paul Krugman would write some 75 years later, the “New Deal made almost a fetish out of policing its own programs against potential corruption,” well aware of the hostility its new order would face from those invested in continuing on with business as usual.

by Kate Aronoff, LitHub |  Read more:
Image: uncredited


Pascal Verzijl, ‘Little Things’, analogue collage, 2021
via:

Therapy Without Therapists

Americans have been getting sadder and more anxious for decades, and the economic recession and social isolation from COVID-19 have accelerated these trends. Despite increased demand for mental health services, those who seek treatment often can’t get it. People seeking care overwhelmingly prefer psychotherapy over medication, yet they are more likely to be prescribed an antidepressant, often by their primary care provider.

The reasons are fairly obvious. Therapy is expensive. Private insurance companies don’t want to pay for unprofitable, long-term services provided by highly skilled (i.e., high-priced) professionals. When insurance companies do reimburse therapists for their services, they do not pay a living wage. Nor can therapists afford the prohibitive barriers to managing insurance claims—therapists report that most of their patients pay out-of-pocket for therapy or receive minimal insurance coverage for mental health services. When healthcare is privatized, socially useful services are scarce or nonexistent. The solution is equally obvious. Healthcare should be a universally-available public good.

Unsurprisingly, the healthcare industry has reframed this straightforward problem and its straightforward solution to turn a profit. According to industry leaders, the problem is not that a market-driven healthcare system unequally distributes much-needed care. Rather, the problem for them is that the provision of mental health services is not entirely subsumed by capital’s law of motion. Mental healthcare, by their logic, ought to be further scientifically managed to cut costs and increase efficiency.

Due to the economic imperatives of the system, clinical scientists and health service researchers have done their part to rationalize this logic. Designing brilliant studies, these scholars tell industry leaders what they want to hear—that the future of mental healthcare means fewer clinicians, less care, and more automation. At the National Institute of Mental Health Services Research Conference in 2018, Gregory Simon, a psychiatrist and public health scholar for Kaiser Permanente, warned of the coming transformations in the delivery of mental healthcare:
While the fourth industrial revolution has been transforming commerce and industry, and most of science, mental health services remain confidently ensconced in 19th century Vienna [displays an image of Sigmund Freud]. But not for long. The revolution is coming to us.
According to Simon, the fourth industrial revolution will involve the intensification of the division of labor through methods such as task-shifting and the widespread use of digital technologies. Dr. Simon prophesied that mental health “consumers” will soon ask their voice-activated devices: “Alexa, should I increase my dose of Celexa?” Dr. Simon needn’t have looked too far into the future. The transformations he anticipated have already radically reshaped the provision of mental healthcare—a revolution that has transpired behind the backs of both therapist and patient alike.

The Division of Labor in Mental Healthcare

In the past several decades, healthcare in the US has increasingly resembled an assembly line, with the labor process atomized into its component parts and assigned to different workers. Task-shifting is the term health service researchers prefer for this increasing division of labor. It refers specifically to the process by which tasks are delegated from professionals with higher qualifications to those with fewer qualifications, or to a new cadre of employees trained for the specific healthcare service. Recently, clinical tasks have been passed on not just to lesser-skilled workers, but also to lay people and even to patients themselves. (...)

Task-shifting is already the norm in medicine and is only increasing as the US faces a shortage of physicians. It is common for patients to visit their doctors and have their body weight and blood pressure measured by medical assistants, to have their blood drawn by phlebotomists or nurses, and to have their responses to physicians’ questions be recorded by medical scribes. This increased division of labor means that physicians only work at the top of their degree qualifications and lesser-skilled workers perform simple clinical tasks at a lower cost. For fairly routine visits, like yearly check-ups, physicians are increasingly being replaced by physician assistants. According to the US Bureau of Labor Statistics, the median annual salary of a physician assistant is $112,260 whereas the median salary of a physician is $208,000. It is no wonder that as health systems Taylorize medicine, physician assistants are one of the fastest growing professions in the country.

To further deskill laborers and make them appendages to machines, biotechnology firms have developed products that automate these routine clinical tasks (e.g., blood pressure monitors, automatic brain scan image processors, etc.). Under a scientifically managed healthcare system, healthcare services are spread across many hands, reducing continuity of care. The proliferation of non-physician medical roles decreases total compensation for healthcare workers, but most importantly this increased fragmentation often reduces the quality of care, putting patients at risk. (...)

Due to the financial incentives introduced by the managed care system, psychiatrists—who earn an average of $220,430 per year after eight years of medical training—rarely conduct psychotherapy and devote most of their time to disseminating psychopharmacological treatments. They have been replaced by a cheaper labor force of lesser-educated clinicians. The majority of psychotherapy is now provided by clinical social workers, who receive two years of graduate training and earn an average annual salary of $50,470, followed at a long distance by clinical psychologists, who attend five to seven years of school and earn an average annual salary of $87,450. (...)

The Rise of Community Health Workers

The latest “innovation” to deskill mental healthcare workers has been to displace professionals entirely. Researchers have increasingly touted the effectiveness of training lay people to provide brief therapy in lieu of licensed mental health providers. Though the stated rationale for training non-professionals, termed community health workers, is to integrate knowledge from traditional healers and communities to provide culturally competent care, their real function is to cut labor costs and put money back in the hands of corporate hospital chains. (...)

Community health worker models often draw inspiration from volunteer programs formed in resource-scarce, low- and middle-income countries in response to the lack of public or private infrastructure for mental healthcare. For example, one of the most revered volunteer community health worker models, Nepal’s Female Community Health Volunteer (FCHV) program, has been widely lauded for its expansive base of over 50,000 volunteers who offer counseling and necessary health services to women and families across the country. The FCHV program is partly responsible for Nepal’s sharp declines in child and maternal mortality rates, and the public hospital system has integrated these exemplary volunteers into their service model.

However impressive the work of these women, it should go without saying that they should be adequately remunerated. Further, if they are providing essential health services, the care they provide should be incorporated into the public health system, not contingent upon a reserve army of volunteers. As several social scientists have noted, attempting to import public health models from resource-scarce contexts to high-income countries is ethically dubious, particularly if the model hinges on the exploitation of an unpaid workforce. The US has the necessary infrastructure and resources to adequately hire and compensate professionals. The imposition of scarcity and cheap labor in the US is a policy decision, not a rational response to real material constraints.

by Briana Last, Damage |  Read more:
Image: Getty via

Toward a Better Understanding of Systemic Racism

As an academic librarian in the United States, I have watched with dismay as Critical Race Theory (CRT) has become the dominant framing of the continuing impact of America’s terrible racial history on group well-being metrics. CRT has not only spawned jargon-filled institutional diversity, equity and inclusion policies, but affects individual academic departments and libraries. The way in which it constrains inquiry and pre-biases our research is not only evident in the classroom, but is beginning to influence how we academic librarians provide resources and teach research skills. CRT framing has even found its way into our job descriptions and library policies, and has taken on the character of a political or religious litmus test. Its slippery discourse carelessly uses loaded terms such as white supremacy and racism to describe downstream outcomes, rather than intentions and attitudes. It is increasingly hostile to the fundamentals of effective research.

Perhaps even worse, it risks obscuring the actual ways in which the shameful racial history of the US set in motion the racial disparities observed today, and prevents us from formulating the policies that might best address those disparities in the present. Free inquiry, unbiased research, and the ability to help groups disproportionately impacted by our history will all become increasingly difficult if CRT continues to be the only way of thinking about systemic racism.

CRT makes two central claims. The first contains a crucial insight from the civil rights movement, without which we could make little sense of our cultural and social reality. The second, however, asserts that disparities themselves constitute racism, are evidence of and perpetuate white supremacy, and must therefore be targeted by policies. This logical sleight-of-hand threatens the cohesion of any pluralistic society and prevents us from addressing the actual problems that lead to racial disparities.

CRT approaches, then, rest on two claims, the second of which is believed to flow from the first.

Claim One: Systemic Racism

The first claim is that blacks suffered not only two hundred and fifty years of slavery, resulting in a direct and massive group-level difference in wealth, but another subsequent one hundred years of official subjugation and segregation and denial of the public goods that underwrite flourishing. This has led to group-level disparities in human capital development, resulting in, among other things, disparate outcomes. This is an inescapable fact. The modern racial landscape is not caused by something fundamentally wrong with black people—as a true white supremacist or racist would claim.

For example, the higher crime and victimisation rates among black communities could, as James Forman Jr. has argued, be the product of an honour culture put in motion by Jim Crow-era underpolicing of any crime that did not disrupt the then racial and economic hierarchy. Higher poverty rates can be traced in large part to the economic legacy of slavery, as well as to various racist policies that prevented the acquisition of wealth.

Rerun the same multifaceted group immiseration experiment with any group, and you will get largely the same results. If blacks had immigrated to the US and been treated like, say, Norwegian immigrants, these massive developmental disparities would probably be largely absent. Although immigrants can certainly arrive with different cultural and economic averages that can manifest in some group-level differences, given the particular traits needed to succeed under different cultural circumstances, the massive differences in flourishing between black and white Americans are certainly impacted by our history around race.

In the US, discrimination against blacks has historically been orders of magnitude more profound than discrimination against other ethnic groups. Even without the racist post-hoc justifications of the practice, slavery would have had group-level ramifications on its own, given the near total lack of wealth held by blacks in 1865. Add a century of segregation and racism and you have a situation unmatched in its capacity to reproduce group-level generational misery.

This empirical claim about upstream group-level causation does not necessarily imply specific downstream personal or policy solutions. In fact, we need to consider a wider range of possibilities for reducing group-level suffering.

Where CRT runs into serious conceptual trouble, though, is in its second central claim.

Claim Two: All Disparities Are the Result of Continuing Racism

The second claim is that, because these disparities were set in motion by America’s reprehensible racial history, each of them is literally caused by this history in both the group and individual instance. Every disparity observed today stems from racism and white supremacy. Those who fail to seek a forced repair of the disparity are guilty of racism and perpetuating white supremacy. Any judgement, system or policy that perpetuates a disparity that can be traced to a racist past is itself white supremacist and racist. Since racism is the underlying cause of all disparities, large and small, insufficient alarm and concern at these disparities is also racist.

This second claim allows anti-racist ideology to be weaponised by both moralists and authoritarians.

This presents a dilemma: if racist policies have resulted in disparate flourishing metrics, why not address these disparities in every arena in which they exist?

The error here is imagining that group disparities continue to be neatly tied to the racism that set them in motion. This leads to a strange obsession with the disparities themselves and not their upstream, proximate causes, which at the individual level are not racially unique.

Conservative economist Glenn Loury has convincingly argued that present disparities are the result of developmental challenges that may have arisen as a consequence of racism, but no longer depend on it. Leftist political scientist Adolph Reed Jr. has reached a similar conclusion, from a Marxist perspective: the developmental problems of the black community are simply the result of greater exposure to a destructive political economy that can handicap anyone’s flourishing. While this greater exposure owes its origins to racism, Reed argues that the political economy itself, not black identity, should be the focus of policy efforts, since that same political economy can be the source of misery for anyone.

Despite their ideological differences, Loury and Reed have hit on an important point: disparities, rather than being independent variables that prove racism, are the result of experiences that can cause anyone suffering. The fact that blacks suffer more from them originated in racism but is no longer tied to it.

Imagine a university that sincerely wants to reflect American demographics by having 14% of its faculty and students be the descendants of slaves. What do we do with the fact that being a successful student or faculty member requires human capital that our racial history has distributed unequally? How do you address a disparity in flourishing when there is a disparity in the human capital required for flourishing? Do we simply nullify those requirements and denounce them as racist, as CRT advocates do? Or do we give up entirely and say it’s all in the past and there’s nothing we can do, and focus solely on individual merit, as staunchly colour-blind meritocrats and opportunistic racists do?

A Better Definition of Systemic Racism

The unique history of blacks in the United States has left them more exposed to political, economic and developmental problems that can immiserate anyone. The best way to address this is to concentrate on the economic and developmental problems more broadly, and in so doing address the racial disparity without overtly racializing either problems or solutions.

by Brian Erb, Areo |  Read more:
Image: uncredited
[ed. See also: Creating an Inhabitable World for Humans Means Dismantling Rigid Forms of Individuality (Time).]

Saturday, April 17, 2021


via:

Manoucher Yektai, Untitled (Still Life), 1969 

The Blood-Clot Problem Is Multiplying

For weeks, Americans looked on as other countries grappled with case reports of rare, sometimes fatal blood abnormalities among those who had received the AstraZeneca vaccine against COVID-19. That vaccine has not yet been authorized by the FDA, so restrictions on its use throughout Europe did not get much attention in the United States. But Americans experienced a rude awakening this week when public-health officials called for a pause on the use of the Johnson & Johnson vaccine, after a few cases of the same unusual blood-clotting syndrome turned up among the millions of people in the country who have received it.

The world is now engaged in a vaccination program unlike anything we have seen in our lifetimes, and with it, unprecedented scrutiny of ultra-rare but dangerous side effects. An estimated 852 million COVID-19 vaccine doses have been administered across 154 countries, according to data collected by Bloomberg. Last week, the European Medicines Agency, which regulates medicines in the European Union, concluded that the unusual clotting events were indeed a side effect of the AstraZeneca vaccine; by that point, more than 220 cases of dangerous blood abnormalities had been identified. Only half a dozen cases have been documented so far among Americans vaccinated with the Johnson & Johnson vaccine, and a causal link has not yet been established. But the latest news suggests that the scope of this problem might be changing.

Whether the blood issues are ultimately linked to only one vaccine, or two vaccines, or more, it’s absolutely crucial to remember the unrelenting death toll from the coronavirus itself—and the fact that COVID-19 can set off its own chaos in the circulatory system, with blood clots showing up in “almost every organ.” That effect of the disease is just one of many reasons the European Medicines Agency has emphasized that the “overall benefits of the [AstraZeneca] vaccine in preventing COVID-19 outweigh the risks of side effects.” The same is true of Johnson & Johnson’s. These vaccines are saving countless lives across multiple continents.

But it’s also crucial to determine the biological cause of any vaccine-related blood conditions. This global immunization project presents a lot of firsts: the first authorized use of mRNA vaccines like the ones from Pfizer and Moderna; the first worldwide use of adenovirus vectors for vaccines like AstraZeneca’s, Johnson & Johnson’s, and Sputnik V; and the first attempt to immunize against a coronavirus. Which, if any, of these new frontiers might be linked to serious side effects? Which, if any, of the other vaccines could be drawn into this story, too? How can a tiny but disturbing risk be mitigated as we fight our way out of this pandemic? And what might be the implications for vaccine design in the years to come?

To answer these questions, scientists will have to figure out the biology behind this rare blood condition: what exactly causes it; when and why it happens. This is not an easy task. While the evidence available so far is fairly limited, some useful theories have emerged. The notions listed below are not all in competition with one another: Some are overlapping—or even mutually reinforcing—in important ways. And their details matter quite a bit. A better understanding of the cause of this condition may allow us to predict its reach.

by Roxanne Khamsi, The Atlantic |  Read more:
Image: DeAgostini/Getty/Katie Martin/The Atlantic

Whose Feelings Count Most in a Pandemic?

If an alien or visitor happened to take a gander at lifestyle journalism over the past six months, they might assume that even though a lot of people are losing their jobs, waiting endlessly for unemployment, or even being evicted, the majority of the country has passed the pandemic baking bread, moving out of cities, and gazing out the window wondering if every day is Wednesday. For every story about the truly devastating impact the pandemic has had on normal life, it seems that there have been countless others that do little more than document every single possible concern of the upper middle class.

Lifestyle journalism catered specifically to the needs, wants, and desires of the beans and sourdough crowd: the same affluent workers whose jobs afforded them the flexibility to do their jobs from their homes. During the long, dark months of the spring, while many Americans were contending with lives lived mostly indoors, countless other people were doing the work that afforded the WFH class the freedom to worry only about how to occupy their time now that they were trapped inside.

The New York Times quickly gathered their resources to create At Home, a section of gentle lifestyle content meant to quell the anxieties of their core audience, many of whom might have already escaped New York City during the worst spring months. The landing page for the section collects the various articles written for the express purpose of soothing the frazzled nerves of its readers and states its intended purpose: “We may be venturing outside, tentatively or with purpose, but with the virus still raging, we’re the safest inside,” the copy at the top of the page reads. Of course, inside was the “safest” place to be for a good long time, but even acknowledging that is a privilege. For all the Times readers who spent the spring worriedly disinfecting the groceries delivered to them by DoorDash or FreshDirect employees, there were countless other people working to make sure that the people locked in their homes, fearful of the out of doors, had food to eat. This divide was rarely noted in the lifestyle content that proliferated, most likely because it is not soothing to readers to think about the minimum wage employee riding a bicycle through rain and sleet to deliver them a pizza.

As the pandemic unfolded, I turned to the Times for recipes like many of my other peers did, but quickly developed a one-sided adversarial relationship with the What to Cook This Week email newsletter, written mostly by Food section editor Sam Sifton. Cataloging the innermost anxieties of the upper class has always been the hidden directive of the paper’s Style section, but witnessing that bleed over into the Cooking newsletter became tiresome after a while.

Consider this dispatch from the July 24 newsletter, some six months into the pandemic:
Good morning. I caught a fat porgy on a home-tied fly the other day, a blind cast into clear ocean water, streaming past boulders on an outgoing tide. It wasn’t the striped bass I was looking for, but I thought it might be good for a few tacos for dinner and that hauled me out of the rut I’ve found myself in these last few weeks. It’s been freestyle mapo tofus with ground beef and chile crisp; skillet pastas with Italian sausages and plenty of kale; crema-marinated chicken grilled and doused in lime; repeat. It gets boring, frankly.
For thousands of people who have yet to leave their neighborhoods or who have been working and running the household in a capacity that does not allow for leisurely casting a line into a clear blue ocean, Sifton’s missives are comically out of touch with other, more pressing realities like juggling childcare and a full-time job. What he and so many other writers have been working against since the pandemic started is nothing more than an exploration of what it means to be bored. Sourdough, an affectation that has largely been abandoned, was an effective way to channel anxieties about an airborne virus, but also, baking bread is nothing more than a hobby that adequately fills empty stretches of time while also making people feel productive. Baking bread for leisure is an activity that I imagine those who do it for a living, in industrial kitchens and the like, would rather not undertake. The gap between leisure and labor here is wide.

Other, more esoteric “hobbies,” like growing scallions in jam jars, were rebranded as “novel frugality” in a piece that now feels typical of the sort written during the spring and early summer. Habits like saving Ziploc bags, regrowing the aforementioned scallions, and eating the heel from a loaf of bread were the sort of penny-pinching habits reserved for the generation that survived the Great Depression, not the rest of us who have long luxuriated in the great American pastimes of consumerism and consumption, the April story at Vox implied. These habits, which are fairly normal and do not really deserve any special mention, were documented on social media and in pieces like the one that ran in Vox. Framed as an upper-class panic about safety and minimizing trips out of the house, these behaviors are unusual only because the people in question never really had to think about frugality in a concrete way. (...)

Paying close attention to lifestyle journalism over the past six months revealed that the anxieties, concerns, and fears being documented are purely those of Richard Florida’s “creative class”—upwardly mobile individuals working in vaguely creative sectors who mostly congregate in cities like New York and San Francisco. These individuals value the sorts of amenities that make a city feel superior to a suburb: museums, bars, restaurants, and the ability to find a decent heirloom tomato at the height of summer. It’s worth noting that these concerns are, in the grand scheme of things, first-world problems. The trouble is that when these issues are given top billing, they appear to be the only issues that really matter. Carefully documenting the vagaries of the upper class and expecting their anxieties, hobbies, and worries to be representative of the entirety of society is a tale as old as time.

Giving space to the weird quarantine quirk that you and maybe three other people you’re friends with share isn’t self-aware—it’s simply elevating an inside joke or observation made between friends by using the platform afforded to you and presenting it as a matter of course rather than an anomaly. Much like the case of the Amazon coat, which appeared in the Times Style section in November 2019, the small observations in and around the writer’s friend groups are not representative of the experiences of others, and it is presumptuous to assume that just because something is happening to you, the experience is universal.

by Megan Reynolds, Jezebel |  Read more:
Image: Chelsea Beck

Friday, April 16, 2021

Making Sense of the ‘Border Crisis’

You may have heard in the news recently that there is a Crisis At The Border. Huge numbers of people are now clamoring at the southern border, many of them unaccompanied children. As described by people on the right, this is a crisis caused by lax enforcement. Republican politicians like Tom Cotton and “centrist” commentators like Fareed Zakaria have argued that these increased migration numbers are due to the Biden administration’s softening of (as Zakaria puts it) Trump’s “practical policies” at the border. The examples they cite include:

  • The Migrant Protection Protocols (MPP)/Remain in Mexico program—required tens of thousands of asylum-seekers to wait in dangerous Mexican border towns, without housing, healthcare, or legal help, constantly vulnerable to a booming kidnapping-for-ransom industry, while their cases proceeded before U.S. border judges
  • The Safe Third Country Transit Ban—blocked virtually all migrants at the southern border from obtaining asylum if they had passed through any third country on their way to the U.S.
  • Various short-lived agreements with countries like Guatemala and Honduras—incentivized places designated by our government as “safe third countries” for asylum-seekers to accept planeloads of migrants apprehended at our southern border, despite the large numbers of asylum-seekers fleeing those same countries.
This narrative portrays a Biden administration that has invited an uncontrollable tsunami of immigration by breaking radically with the enforcement policies of its predecessor.

Meanwhile, many people on the left have agreed that there is currently a “crisis,” not because of the increased border numbers in and of themselves, but because of the cruel and unsafe conditions under which the arriving migrants are being detained. New images have emerged of children huddled inside foil wrappings at the Donna tent facility in Texas, packed into cages made of chain-link fencing, with little apparent regard for social distancing. These photos of “kids in cages” under Biden are visually identical to the photos of “kids in cages” that once whipped up Democrats into a righteous fury against Trump: some people have denounced the Biden administration as no better than Trump, while others have tried to distinguish Biden’s policies from Trump’s. Alexandria Ocasio-Cortez, for example, has been taking heat from the left for putting out a video message warning against drawing “false equivalencies” between the Trump administration’s systematic separation of children from their parents at the border from April to June 2018, and the Biden administration’s detention of children under deplorable conditions at the border now. Among non-Republicans, we thus have competing narratives: that Biden is managing the crisis as well as he can under difficult circumstances, and that Biden is in fact cynically employing the exact same enforcement tactics as Trump, knowing that partisan hypocrisy will cause his supporters to make excuses for him.

Let’s first ask ourselves: is there a Crisis At The Border? On the one hand—yes. There is always a crisis at the border, in the sense that there are always people trying to migrate across the border, and we always have huge amounts of state firepower directed at making that process as miserable and unsafe for migrants as possible. But “crisis” isn’t really the most accurate word to describe the situation, because it implies that we’re talking about a sudden, alarming deviation from a status quo. In fact, these conditions are the status quo, and have been for several decades. When the border is suddenly in the news, there is usually some weird manufacturing of consent going on, and I don’t think it’s always easy for even well-intentioned people to understand the trajectory of the opinions that these crisis narratives drive them to reflexively adopt.

To illustrate what I mean, let’s take a couple examples of Border Crises in relatively recent memory. People may remember the media frenzy about a migration “surge” at the border in 2014, during Obama’s second term. In fact, numbers-wise, 2014 wasn’t really a remarkable year. There were 486,651 apprehensions at the border, which was somewhat higher than the previous year’s total of 414,397, but considerably below the annual averages for 2000-2009, when border apprehensions of 1 million a year or more were typical. What was different was that of those 2014 apprehensions, an atypically high percentage were children and families, mostly from Central America. Not wanting to deal with the logistical, legal, and political hassle of increased numbers of children at the border, the Obama administration began capturing and interning migrant families en masse, for the express purpose of deporting them as rapidly as possible, in what President Obama called “an aggressive deterrence strategy.” Characterizing a demographic shift within otherwise typical border numbers as a “crisis” or a “surge” was a conscious political choice by the Obama administration, allowing them to justify draconian enforcement against asylum-seeking families as a necessary evil, even as the administration continued to claim that its overall enforcement strategy was aimed at “felons, not families.” Even though the Obama administration’s intended policy of indefinite detention of families at the border was ultimately blocked, detaining families who presented at the border to seek asylum nevertheless became normalized. This has resulted in a family internment system at the border that’s lasted up to the present day.

A more recent “border crisis” took place under the Trump administration in late 2018 into the spring of 2019, when the Department of Homeland Security (DHS) repeatedly claimed that the numbers of people at the border were so huge and unmanageable that they had no place to safely house people while they were processed. DHS forced suffering migrants to wait in highly visible public locations, like beneath the port of entry bridge in El Paso, while loudly proclaiming that they lacked the resources to humanely deal with the problem. These repeated claims that DHS facilities lacked bedspace were actually lies. As advocates at the border pointed out, the Trump administration temporarily emptied out numerous detention centers during this exact period, and CBP officials have since admitted that they were instructed to falsely tell people approaching the border that they had no space to process them for asylum. At the time, however, mainstream media outlets were entirely credulous toward DHS’s self-serving statements about a “crisis” throughout the fall and spring, and ran stories uncritically regurgitating this narrative. In fact, the Trump administration was deliberately inflating this “crisis” in order to set the stage for the rollout of some of its most ambitiously cruel policies in the name of “border control”—like the Remain in Mexico program, the asylum ban, and the safe third country agreements. (The systematic family separations that people associate most strongly with Trump were an experiment that lasted a few months in 2018 and then ceased; these other policies, although they made less of a splash in the news, had much longer lifespans and affected tens of thousands more migrants.)

This is all to say that Crisis At The Border narratives are often pure media creations for specific political purposes, and we should always be wary of unconsciously accepting that framing when it’s presented to us. For a good illustration of why the language of border crisis can be unhelpful even when used by well-intentioned people, we have only to look to the summer of 2019, where—hard on the heels of about eight months of crisis messaging by the Trump administration—the public became extremely angry about the horrific conditions under which migrants, including children, were being detained after apprehension at the border. This, they proclaimed, was the real border crisis! But because a crisis is imagined to be an atypical, short-term phenomenon, requiring quick and decisive action in order to return to a “normal” state of affairs, political energy quickly coalesced around just throwing a bunch of “emergency” money at DHS to improve detention conditions at the border. This having been accomplished, the moment of rage quickly faded from public consciousness; DHS got a nice fat payout, which it used to buy Border Patrol agents some sick new dirtbikes and ATVs; and nothing else changed.

So what should we make of the current Border Crisis? First, the right-wing narrative that there’s currently a “surge” caused by the Biden administration’s rollback of Trump’s asylum-restricting policies doesn’t seem to add up. It’s true that Biden has taken a couple of initial steps to roll back some of the worst parts of the Trump administration’s pre-pandemic border agenda, but the numbers of people approaching the border appear to have started rising back in April 2020, well before the election. DHS currently anticipates it will apprehend 2 million immigrants at the border in 2021, which would be the highest total since 2006; but this is a speculative number based on current apprehension rates (March was an extremely high month) during a time when summary expulsions from the border have been going on for months and have stranded lots of migrants in border areas. The pandemic, together with a devastating sequence of droughts and hurricanes in Central America, has also exacerbated difficult conditions in sending countries. It’s hard to imagine a universe in which this wouldn’t affect the numbers of people seeking to migrate, regardless of who is president.

I do, however, think that the recently increased numbers of unaccompanied kids can be more directly tied to Biden’s enforcement choices. Currently, the Biden administration is continuing to deploy the Centers for Disease Control and Prevention (CDC) “public health” order wherever it sees fit, in order to bounce people back summarily from the border with zero due process. But unlike the Trump administration, the Biden administration has publicly stated that they won’t use the CDC order to block unaccompanied children. This is the most plausible explanation for why unaccompanied kids are now coming in higher numbers. Because single adults and even family units run the risk of being expelled directly from the border, it makes sense that kids would come to the border alone if they and their families want to ensure that they’re actually allowed in. If the Biden administration announced that it wouldn’t be applying the CDC order to anyone, I imagine we would see fewer “unaccompanied” kids. It’s true that kids who come to the border alone pose some unique challenges—the law requires the government to place unaccompanied kids in the custody of the Office of Refugee Resettlement until they can be connected to their family members in the U.S., and it does stand to reason that you can’t just release a child onto the street without identifying a caregiver—but the Biden administration’s choice to continue applying the CDC order to adults has likely played a role in increasing the numbers of kids in this situation. Changes in migration numbers and demographic composition are influenced by a whole host of push and pull factors, one component of which are the government’s own enforcement policies (as publicly stated) and practices (as actually observed by prospective border-crossers).

by Brianna Rennix, Current Affairs | Read more:
Image: David Peinado Romero (Shutterstock)

Seeing on the Far Side of the Moon

Instead of using one very large dish to collect radio waves, data from a number of radio telescopes (called an array) can be stitched together by computers into a coherent single observation. These telescopes can be located at a single site, or they can be separated by oceans. The Event Horizon Telescope (EHT), the instrument that Bouman and colleagues used to image the black hole, is actually a network of telescopes in Europe, North America, South America, Antarctica, and Hawaii. The resolution of the array is proportional not to the diameter of any one instrument, but rather to the distance between those instruments that are farthest apart. The EHT’s black hole measurement was made at a stunning resolution of 25 microarcseconds, roughly the capability from Earth to distinguish a golf ball on the moon. (...)
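
The proportionality here is just the diffraction limit: angular resolution is roughly the observing wavelength divided by the longest baseline, the greatest separation between telescopes in the array. A back-of-the-envelope check (a sketch using the EHT’s roughly 1.3 mm observing wavelength and Earth’s diameter as the longest baseline, figures not quoted in this excerpt) lands close to the numbers above:

```python
import math

RAD_TO_MICROARCSEC = (180 / math.pi) * 3600 * 1e6  # radians -> microarcseconds

# Diffraction limit: theta ~ wavelength / baseline
wavelength = 1.3e-3   # meters; the EHT observes at about 1.3 mm
baseline = 1.2742e7   # meters; Earth's diameter, the longest possible baseline
theta = wavelength / baseline
print(theta * RAD_TO_MICROARCSEC)  # ~21 microarcseconds

# Sanity check on the golf-ball analogy: a 4.3 cm ball at the moon's
# ~384,400 km distance subtends about the same angle.
print((0.043 / 3.844e8) * RAD_TO_MICROARCSEC)  # ~23 microarcseconds
```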

Space telescopes are incredible instruments. NASA’s most famous, the Hubble Space Telescope, has made numerous significant discoveries since it entered service in 1990, most famously estimating the age of the universe at 13.7 billion years, two orders of magnitude more precisely than the previous scientific estimate of 10 to 20 billion years. But Hubble operates mainly in the optical band, something that is mostly accessible from Earth. NASA’s less famous infrared instrument, the Spitzer Space Telescope, which was deactivated this year after tripling its planned design life, studied bands not observable from the ground. Its replacement, the powerful James Webb Space Telescope, is due to launch next year. It should produce even more stunning observations than Hubble when it comes online, as its sensitivity to infrared light is perfect for capturing optical waves, redshifted by the expansion of the cosmos, from some of the most distant objects in the observable universe.

But the biggest problem with these orbiting telescopes is that they cannot avail themselves of the solution used by terrestrial arrays to increase resolution—adding more telescopes and stitching the data together using computation. James Webb’s aperture is 6.5 meters in diameter, while the Event Horizon Telescope has an effective aperture the size of Earth. Space telescopes lack the power that arrays on the ground can achieve.

Astronomy, then, faces a Catch-22. Terrestrial telescopes can be built with excellent resolution thanks to aperture synthesis, but they have to cope with atmospheric interference that limits access to certain bands, as well as radio interference from human activity. Space telescopes don’t experience atmospheric interference, but they cannot benefit from aperture synthesis to boost resolution. What we need is to develop a telescope array that can marry the benefits of both: a large synthetic aperture like Earth-based arrays that is free from atmospheric and human radio interference like space telescopes.

A telescope array on the surface of the moon is the only solution. The moon has no atmosphere. Its far side is shielded from light and radio chatter coming from Earth. The far side’s ground is stable, with little tectonic activity, an important consideration for the ultra-precise positioning needed for some wavelengths. Turning the moon into a gigantic astronomical observatory would open a floodgate of scientific discoveries. There are small telescopes on the moon today, left behind from Apollo 16 and China’s Chang’e 3 mission. A full-on terrestrial-style far-side telescope array, however, is in a different class of instrument. Putting one (or more) on the moon would have cost exorbitant sums only a few years ago, but thanks to recent advances in launch capabilities and cost-reducing competition in the new commercial space industry, it is now well worth doing—particularly if NASA leverages private-sector innovation.

by Eli Dourado, Works in Progress | Read more:
Image: Antennas of the Atacama Large Millimeter/submillimeter Array (ALMA), on the Chajnantor Plateau. Credit: ESO/C. Malin

Thursday, April 15, 2021


via:
[ed. ...and about a million other songs, just rearrange as needed.]


What the U.S. Got for $2 Trillion in Afghanistan

All told, the cost of nearly 18 years of war in Afghanistan will amount to more than $2 trillion. Was the money well spent?

There is little to show for it. The Taliban control much of the country. Afghanistan remains one of the world’s largest sources of refugees and migrants. More than 2,400 American soldiers and more than 38,000 Afghan civilians have died.

Still, life has improved, particularly in the country’s cities, where opportunities for education have grown. Many more girls are now in school. And democratic institutions have been built — although they are shaky at best.

Drawing on estimates from Brown University’s Costs of War Project, we assessed how much the United States spent on different aspects of the war and whether that spending achieved its aims.

$1.5 trillion waging war

When President George W. Bush announced the first military action in Afghanistan in the wake of terrorist attacks by Al Qaeda in 2001, he said the goal was to disrupt terrorist operations and attack the Taliban.

Eighteen years later, the Taliban are steadily getting stronger. They kill Afghan security force members — sometimes hundreds in a week — and defeat government forces in almost every major engagement, except when significant American air support is used against them.

Al Qaeda’s senior leadership moved to Pakistan, but the group has maintained a presence in Afghanistan and expanded to branches in Yemen, northern Africa, Somalia and Syria.

The $1.5 trillion in war spending remains opaque, but the Defense Department has declassified breakdowns of some of its spending for the three most recent years.

Most of the money detailed in those breakdowns — about 60 percent each year — went to things like training, fuel, armored vehicles and facilities. Transportation, such as air and sea lifts, took up about 8 percent, or $3 billion to $4 billion a year.
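
Those two percentages pin down the rough size of the annual budgets being described. A back-of-the-envelope check (my arithmetic, not figures from the article):

```python
# If transportation was ~8% of annual detailed spending and came to
# $3-4 billion a year, the implied annual total, and the ~60% share that
# went to training, fuel, armored vehicles, and facilities, would be:
for transport in (3.0, 4.0):  # billions of dollars per year
    total = transport / 0.08
    print(f"transport ${transport:.0f}B -> total ~${total:.1f}B/yr, "
          f"60% share ~${0.60 * total:.1f}B/yr")
# transport $3B -> total ~$37.5B/yr, 60% share ~$22.5B/yr
# transport $4B -> total ~$50.0B/yr, 60% share ~$30.0B/yr
```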

$10 billion on counternarcotics

Afghanistan supplies 80 percent of the world’s heroin.

In a report last year, the Special Inspector General for Afghanistan Reconstruction described counternarcotics efforts as a “failure.” Despite billions of dollars spent to fight opium poppy cultivation, Afghanistan is the source of 80 percent of global illicit opium production.

Before the war, Afghanistan had almost completely eradicated opium, according to United Nations data from 1996 to 2001, when the Taliban were in power.

Today, opium cultivation is a major source of income and jobs, as well as revenue for the Taliban. Other than war expenditures, it is Afghanistan’s biggest economic activity.

$87 billion to train Afghan military and police forces

Afghan forces can’t support themselves.

One of the major goals of the American effort has been to train thousands of Afghan troops. Most of American spending on reconstruction has gone to a fund that supports the Afghan Army and police forces through equipment, training and funding.

But nobody in Afghanistan — not the American military, and not President Ashraf Ghani’s top advisers — thinks Afghan military forces could support themselves.

The Afghan Army in particular suffers from increasing casualty rates and desertion, which means it has to train new recruits equal to at least a third of its entire force every year.

President Barack Obama had planned to hand over total responsibility for security to the Afghans by the end of 2014 and to draw down all American forces by 2016. That plan faltered when the Taliban took quick advantage and gained ground.

The American military had to persuade first President Obama, and then President Trump, to ramp up forces. Some 14,000 U.S. troops remained in the country as of this month.

$24 billion on economic development

Most Afghans still live in poverty.

War-related spending has roughly doubled the size of Afghanistan’s economy since 2007. But it has not translated into a healthy economy.

A quarter or more of Afghans are unemployed, and the economic gains have trailed off since 2015, when the international military presence began to draw down.

Overseas investors still balk at Afghanistan’s corruption — among the worst in the world, according to Transparency International, an anticorruption group — and even Afghan companies look for cheaper labor from India and Pakistan.

Hopes of self-sufficiency in the mineral sector, which the Pentagon boasted could be worth $1 trillion, have been dashed. A few companies from China and elsewhere began investing in mining, but poor security and infrastructure have prevented any significant payout.

$30 billion on other reconstruction programs

Much of that money was lost to corruption and failed projects.

American taxpayers have supported reconstruction efforts that include peacekeeping, refugee assistance and aid for chronic flooding, avalanches and earthquakes.

Much of that money, the inspector general found, was wasted on programs that were poorly conceived or riddled with corruption.

American dollars went to build hospitals that treated no patients, to schools that taught no students (and sometimes never existed at all) and to military bases the Afghans found useless and later shuttered.

The inspector general documented $15.5 billion in waste, fraud and abuse in reconstruction efforts from 2008 through 2017.

Thanks to American spending, Afghanistan has seen improvements in health and education — but they are scant compared with international norms.

Afghan maternal mortality remains among the highest in the world, while life expectancy is among the lowest. Most girls still receive little or no schooling, and education for boys is generally poor.

$500 billion on interest

The war has been funded with borrowed money.

To finance war spending, the United States borrowed heavily and will pay more than $600 billion in interest on those loans through 2023. The rest of the debt will take years to repay.

In addition to the more than $2 trillion the American government has already spent on the war, debt and medical costs will continue long into the future.

$1.4 trillion on veterans who have fought in post-9/11 wars, by 2059

Medical and disability costs will continue for decades.

More than $350 billion has already gone to medical and disability care for veterans of the wars in Iraq and Afghanistan combined. Experts say that more than half of that spending belongs to the Afghanistan effort.

The final total is unknown, but experts project another trillion dollars in costs over the next 40 years as wounded and disabled veterans age and need more services.
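
For readers keeping score, the six line items above do add up to the headline figure. A quick tally (amounts as given in the article; the projected veterans’ costs are excluded because the article counts them separately):

```python
# Headline line items from the article, in billions of dollars.
costs = {
    "waging war":             1_500,
    "counternarcotics":          10,
    "training Afghan forces":    87,
    "economic development":      24,
    "other reconstruction":      30,
    "interest":                 500,
}
total = sum(costs.values())
print(f"Total: ${total}B, i.e. about ${total / 1000:.1f} trillion")
# -> Total: $2151B, i.e. about $2.2 trillion, consistent with
#    "more than $2 trillion"; the $1.4 trillion in projected veterans'
#    care through 2059 comes on top of that.
```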

by Sarah Almukhtar and Rod Nordland, NY Times | Read more:
Image: Johannes Eisele / AFP / Getty via
[ed. Reproduced nearly in full (...hope the NYT doesn’t make me take it down). Don’t forget the other “forever war” we’re currently engaged in, which is equally insane and costly. See also: Leaving Afghanistan, and the Lessons of America’s Longest War (New Yorker).]

There Shouldn’t Be Vaccine Patents in a Health Crisis

The extremity of the Covid-19 vaccine apartheid cannot be overstated. As of mid-February, the United States had acquired enough vaccines for three times its total population, while in 130 countries, not a single vaccine shot had been administered. This is no accident, but the direct and long-predicted result of a vaccine production and access model tied to privatized intellectual property and entrenched medicine monopolies.

The majority of Americans want President Joe Biden to act to end this intolerable vaccine inequality. Sixty percent of U.S. voters said they wanted Biden to endorse a motion at the World Trade Organization that would waive patent barriers and other crucial intellectual property protections on Covid-19 vaccines, according to a new poll from Data for Progress and the Progressive International. This would enable a significant expansion of global production and rollout, while disrupting the extraordinary profiteering of pharmaceutical leviathans in a death-dealing pandemic.

The refusal on the part of major pharmaceutical companies and Western powers to ensure the sharing of vaccine patent and production information has been an immeasurable moral failure, not to mention a most foolish approach to a pandemic in need of a global response. The new poll also makes clear that, for Biden, blocking vaccine sharing is not even a popular position. Seventy-two percent of registered Democrats want the president to remove patent barriers to speed vaccine rollout and reduce costs for less affluent nations.

At present, WTO intellectual property rules mean that most countries are barred from producing the leading vaccines that have been approved, including the U.S.-produced Pfizer, Moderna, and Johnson & Johnson shots.

It garnered majority support from member states: a hundred countries support the proposal overall, and 58 governments now co-sponsor it; 375 civil society organizations, including Doctors Without Borders, Oxfam, and Amnesty International, have signed a letter in support.

The waiver was blocked, however, by a small number of wealthy nations and blocs, including the U.S., the U.K., and the EU, that chose instead to leave vaccine production in the hands of only a few pharmaceutical companies, which, through public-private partnerships, have ensured priority access to the rich countries in turn.

There are no legitimate grounds for maintaining patent barriers in this health crisis unless you’re a pharmaceutical giant making billions or, of course, a Western power invested in maintaining global power through neoliberalization, market monopolies, and racialized capitalism. The strongest advocates of intellectual property protections in medicine, Bill Gates chief among them, have offered no ethical basis for the current status quo beyond vague gestures to protecting “innovation.”

Even a self-interested approach, one that recognizes the devastating economic consequences of a mutating virus turning the pandemic into something endemic, should make the necessity of a patent waiver clear. The commitment to monopoly medicine is, in this sense, ideological.

The WTO proposal needs backing by a consensus of the organization’s 164 members to pass. It was under President Donald Trump that the U.S. blocked the patent waiver: a move that came as no surprise for an administration of white nationalists, which proudly left the World Health Organization. A change of tack by the Biden administration, which rejoined the WHO on Day One, could go a long way in pushing other wealthy countries to follow suit. (...)

Sen. Bernie Sanders, I-Vt., chair of the Senate Budget Committee, responded to the poll saying the U.S. should be “leading the global effort to end the coronavirus pandemic.” According to Sanders, “a temporary WTO waiver, which would enable the transfer of vaccine technologies to poorer countries, is a good way to do that.” More than 60 lawmakers have added their signature to a letter pushing Biden to save lives through a global vaccination drive.

by Natasha Lennard, The Intercept | Read more:
Image: Jessica Rinaldi/The Boston Globe via Getty Images
[ed. See also: Let Other Countries Copy the Covid Vaccines; and How Bill Gates Impeded Global Access to Covid Vaccines (TNR).]

US Congress: A Coin-Operated Stalemate Machine (and Whither AOC?)

Yves here. Tom Neuburger gives a hard look at AOC’s recent donations to corporate Democrats and tries to ferret out what she intended to accomplish.

Tom is at a loss to understand why AOC chose the party members she did. I am at a loss to understand why she thought $5,000 donations would have made any difference to the recipients even if they had been on board with taking funds from her. As I am sure readers know, there’s a dark art as to how heavyweight bundlers and donors work around formal contribution limits.

And on top of that, Congressional Democrats run a pay-to-play operation. Kicking in enough money to the DCCC is the cost of entry for getting House committee leadership positions. We explained this back in 2011, via the work of Tom Ferguson, in Congress is a “Coin Operated Stalemate Machine.” I strongly urge you to read the entire post. Key section:
A new article by Ferguson in the Washington Spectator sheds more light on this corrupt and defective system. Partisanship and deadlocks are a direct result of the increased power of a centralized funding apparatus. It’s easy to raise money for grandstanding on issues that appeal to well-heeled special interests, so dysfunctional behavior is reinforced.

Let’s first look at how crassly explicit the pricing is. Ferguson cites the work of Marian Currander on how it works for the Democrats in the House of Representatives:
Under the new rules for the 2008 election cycle, the DCCC [Democratic Congressional Campaign Committee] asked rank-and-file members to contribute $125,000 in dues and to raise an additional $75,000 for the party. Subcommittee chairpersons must contribute $150,000 in dues and raise an additional $100,000. Members who sit on the most powerful committees … must contribute $200,000 and raise an additional $250,000. Subcommittee chairs on power committees and committee chairs of non-power committees must contribute $250,000 and raise $250,000. The five chairs of the power committees must contribute $500,000 and raise an additional $1 million. House Majority Leader Steny Hoyer, Majority Whip James Clyburn, and Democratic Caucus Chair Rahm Emanuel must contribute $800,000 and raise $2.5 million. The four Democrats who serve as part of the extended leadership must contribute $450,000 and raise $500,000, and the nine Chief Deputy Whips must contribute $300,000 and raise $500,000. House Speaker Nancy Pelosi must contribute a staggering $800,000 and raise an additional $25 million.
Ferguson teases out the implications:
Uniquely among legislatures in the developed world, our Congressional parties now post prices for key slots on committees. You want it — you buy it, runs the challenge. They even sell on the installment plan: You want to chair an important committee? That’ll be $200,000 down and the same amount later, through fundraising…..

The whole adds up to something far more sinister than the parts. Big interest groups (think finance or oil or utilities or health care) can control the membership of the committees that write the legislation that regulates them. Outside investors and interest groups also become decisive in resolving leadership struggles within the parties in Congress. You want your man or woman in the leadership? Just send money. Lots of it….

The Congressional party leadership controls the swelling coffers of the national campaign committees, and the huge fixed investments in polling, research, and media capabilities that these committees maintain — resources the leaders use to bribe, cajole, or threaten candidates to toe the party line… Candidates rely on the national campaign committees not only for money, but for message, consultants, and polling they need to be competitive but can rarely afford on their own…

This concentration of power also allows party leaders to shift tactics to serve their own ends….They push hot-button legislative issues that have no chance of passage, just to win plaudits and money from donor blocs and special-interest supporters. When they are in the minority, they obstruct legislation, playing to the gallery and hoping to make an impression in the media…

The system …ensures that national party campaigns rest heavily on slogan-filled, fabulously expensive lowest-common-denominator appeals to collections of affluent special interests. The Congress of our New Gilded Age is far from the best Congress money can buy; it may well be the worst. It is a coin-operated stalemate machine that is now so dysfunctional that it threatens the good name of representative democracy itself.
If that isn’t sobering enough, a discussion after the Ferguson article describes the mind-numbing amount of money raised by the members of the deficit-cutting super committee. In addition, immediately after being named to the committee, several members launched fundraising efforts that were unabashed bribe-seeking. But since the elites in this country keep themselves considerably removed from ordinary people, and what used to be considered corruption in their cohort is now business as usual, nary an ugly word is said about these destructive practices.
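
The Currander price list is easier to scan when restated as data. A minimal sketch, using exactly the dollar figures quoted above (2008 cycle; the role labels are my shorthand):

```python
# The 2008-cycle DCCC "price list" quoted above, restated as data:
# role -> (dues owed, additional funds to be raised), in dollars.
dccc_price_list = {
    "rank-and-file member":                  (125_000,     75_000),
    "subcommittee chair":                    (150_000,    100_000),
    "member of a power committee":           (200_000,    250_000),
    "subcommittee chair, power committee":   (250_000,    250_000),
    "chair, non-power committee":            (250_000,    250_000),
    "chair, power committee (5 seats)":      (500_000,  1_000_000),
    "Hoyer / Clyburn / Emanuel":             (800_000,  2_500_000),
    "extended leadership (4 members)":       (450_000,    500_000),
    "chief deputy whip (9 members)":         (300_000,    500_000),
    "Speaker Nancy Pelosi":                  (800_000, 25_000_000),
}

for role, (dues, to_raise) in dccc_price_list.items():
    print(f"{role}: ${dues:,} in dues, plus ${to_raise:,} to raise")
```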

So as much as AOC has seemed disappointing of late, the overwhelming majority of voters have no clue as to what she is up against.

by Yves Smith and Thomas Neuburger, Naked Capitalism | Read more:
Image: Seth Wenig/AP Photo via Politico
[ed. A bit of inside baseball here for political junkies. Apparently AOC gave $5000 to various Democratic members of Congress to help with their campaigns, a few of them DINOs (Dems in name only), who see any association with her as radioactive in their conservative-leaning districts. So they’ve decided to reject or return the funds. The question is: why did AOC do this (and with such meager amounts)? Is she gravitating toward the middle, and becoming more of an establishment player? Trying to mend fences? Or, as one commenter suggested, playing "eleventy-dimensional chess" and using the money to shine a light on people who’ve never been exposed to this kind of scrutiny before? Who knows? But as this post indicates, funding is a sensitive and intricate process. By the way, the numbers above are for Democrats. I’d bet the ones for Republicans are equally stunning, if not significantly worse (I’m not going to check). Also, this is from 2011. Citizens United undoubtedly made the process (and money involved) even more obscene.]