Friday, April 23, 2021

Therapy Without Therapists

Americans have been getting sadder and more anxious for decades, and the economic recession and social isolation from COVID-19 have accelerated these trends. Despite increased demand for mental health services, many of those who seek treatment can't get it. People seeking care overwhelmingly prefer psychotherapy to medication, yet they are more likely to be prescribed an antidepressant, often by their primary care provider.

The reasons are fairly obvious. Therapy is expensive. Private insurance companies don't want to pay for unprofitable, long-term services provided by highly skilled (i.e., high-priced) professionals. When insurance companies do reimburse therapists for their services, they do not pay a living wage. Nor can therapists surmount the prohibitive barriers to managing insurance claims—therapists report that most of their patients pay out of pocket for therapy or receive minimal insurance coverage for mental health services. When healthcare is privatized, socially useful services are scarce or nonexistent. The solution is equally obvious: healthcare should be a universally available public good.

Unsurprisingly, the healthcare industry has reframed this straightforward problem and its straightforward solution to turn a profit. According to industry leaders, the problem is not that a market-driven healthcare system unequally distributes much-needed care. Rather, the problem for them is that the provision of mental health services is not entirely subsumed by capital’s law of motion. Mental healthcare, by their logic, ought to be further scientifically managed to cut costs and increase efficiency.

Due to the economic imperatives of the system, clinical scientists and health service researchers have done their part to rationalize this logic. Designing brilliant studies, these scholars tell industry leaders what they want to hear—that the future of mental healthcare means fewer clinicians, less care, and more automation. At the National Institute of Mental Health Services Research Conference in 2018, Gregory Simon, a psychiatrist and public health scholar for Kaiser Permanente, warned of the coming transformations in the delivery of mental healthcare:
While the fourth industrial revolution has been transforming commerce and industry, and most of science, mental health services remain confidently ensconced in 19th century Vienna [displays an image of Sigmund Freud]. But not for long. The revolution is coming to us.
According to Simon, the fourth industrial revolution will involve the intensification of the division of labor through methods such as task-shifting and the widespread use of digital technologies. Dr. Simon prophesied that mental health “consumers” will soon ask their voice-activated devices: “Alexa, should I increase my dose of Celexa?” He needn't have looked far into the future. The transformations he anticipated have already radically reshaped the provision of mental healthcare—a revolution that has transpired behind the backs of therapist and patient alike.

The Division of Labor in Mental Healthcare

In the past several decades, healthcare in the US has increasingly resembled an assembly line, with the labor process atomized into its component parts and assigned to different workers. Task-shifting is health service researchers' preferred term for this increasing division of labor. It refers specifically to the process by which tasks are delegated from professionals with higher qualifications to those with fewer qualifications, or to a new cadre of employees trained for a specific healthcare service. Recently, clinical tasks have been passed on not just to lesser-skilled workers but also to lay people and even to patients themselves. (...)

Task-shifting is already the norm in medicine and is only increasing as the US faces a shortage of physicians. It is common for patients to visit their doctors and have their body weight and blood pressure measured by medical assistants, to have their blood drawn by phlebotomists or nurses, and to have their responses to physicians' questions recorded by medical scribes. This increased division of labor means that physicians work only at the top of their degree qualifications while lesser-skilled workers perform simple clinical tasks at lower cost. For fairly routine visits, like yearly check-ups, physicians are increasingly being replaced by physician assistants. According to the US Bureau of Labor Statistics, the median annual salary of a physician assistant is $112,260, whereas the median salary of a physician is $208,000. It is no wonder that as health systems Taylorize medicine, physician assistants are one of the fastest-growing professions in the country.

To further deskill laborers and make them appendages to machines, biotechnology firms have developed products that automate these routine clinical tasks (e.g., blood pressure monitors, automatic brain scan image processors, etc.). Under a scientifically managed healthcare system, healthcare services are spread across many hands, reducing continuity of care. The proliferation of non-physician medical roles decreases total compensation for healthcare workers, but most importantly this increased fragmentation often reduces the quality of care, putting patients at risk. (...)

Due to the financial incentives introduced by the managed care system, psychiatrists—who earn an average of $220,430 per year after eight years of medical training—rarely conduct psychotherapy and devote most of their time to disseminating psychopharmacological treatments. They have been replaced by a cheaper labor force of lesser-educated clinicians. The majority of psychotherapy is now provided by clinical social workers, who receive two years of graduate training and earn an average annual salary of $50,470, followed at a long distance by clinical psychologists, who attend five to seven years of school and earn an average annual salary of $87,450. (...)

The Rise of Community Health Workers

The latest “innovation” to deskill mental healthcare workers has been to displace professionals entirely. Researchers have increasingly touted the effectiveness of training lay people to provide brief therapy in lieu of licensed mental health providers. Though the stated rationale for training non-professionals, termed community health workers, is to integrate knowledge from traditional healers and communities to provide culturally competent care, their real function is to cut labor costs and put money back in the hands of corporate hospital chains. (...)

Community health worker models often draw inspiration from volunteer programs formed in resource-scarce, low- and middle-income countries in response to the lack of public or private infrastructure for mental healthcare. For example, one of the most revered volunteer community health worker models, Nepal's Female Community Health Volunteer (FCHV) program, has been widely lauded for its expansive base of over 50,000 volunteers who offer counseling and necessary health services to women and families across the country. The FCHV program is partly responsible for Nepal's sharp declines in child and maternal mortality rates, and the public hospital system has integrated these exemplary volunteers into its service model.

However impressive the work of these women, it should go without saying that they should be adequately remunerated. Further, if they are providing essential health services, the care they provide should be incorporated into the public health system, not contingent upon a reserve army of volunteers. As several social scientists have noted, attempting to import public health models from resource-scarce contexts to high-income countries is ethically dubious, particularly if the model hinges on the exploitation of an unpaid workforce. The US has the necessary infrastructure and resources to adequately hire and compensate professionals. The imposition of scarcity and cheap labor in the US is a policy decision, not a rational response to real material constraints.

by Briana Last, Damage |  Read more:
Image: Getty via

Toward a Better Understanding of Systemic Racism

As an academic librarian in the United States, I have watched with dismay as Critical Race Theory (CRT) has become the dominant framing of the continuing impact of America's terrible racial history on group well-being metrics. CRT has not only spawned jargon-filled institutional diversity, equity and inclusion policies, but also affects individual academic departments and libraries. The way in which it constrains inquiry and pre-biases our research is not only evident in the classroom, but is beginning to influence how we academic librarians provide resources and teach research skills. CRT framing has even found its way into our job descriptions and library policies, and has taken on the character of a political or religious litmus test. Its slippery discourse carelessly uses loaded terms such as white supremacy and racism to describe downstream outcomes, rather than intentions and attitudes. It is increasingly hostile to the fundamentals of effective research.

Perhaps even worse, it risks obscuring the actual ways in which the shameful racial history of the US set in motion the racial disparities observed today, and it prevents us from formulating the policies that might best address those disparities in the present. Free inquiry, unbiased research, and the ability to help groups disproportionately impacted by our history will all become increasingly difficult if CRT continues to be the only way of thinking about systemic racism.

CRT makes two central claims. The first contains a crucial insight from the civil rights movement, without which we could make little sense of our cultural and social reality. The second, however, asserts that disparities themselves constitute racism, are evidence of and perpetuate white supremacy, and must therefore be targeted by policies. This logical sleight-of-hand threatens the cohesion of any pluralistic society and prevents us from addressing the actual problems that lead to racial disparities.

CRT approaches, then, rest on two claims, the second of which is believed to flow from the first.

Claim One: Systemic Racism

The first claim is that blacks suffered not only two hundred and fifty years of slavery, resulting in a direct and massive group-level difference in wealth, but another subsequent one hundred years of official subjugation and segregation and denial of the public goods that underwrite flourishing. This has led to group-level disparities in human capital development, resulting in, among other things, disparate outcomes. This is an inescapable fact. The modern racial landscape is not caused by something fundamentally wrong with black people—as a true white supremacist or racist would claim.

For example, the higher crime and victimisation rates among black communities could, as James Forman Jr. has argued, be the product of an honour culture put in motion by Jim Crow-era underpolicing of any crime that did not disrupt the then racial and economic hierarchy. Higher poverty rates can be traced in large part to the economic legacy of slavery, as well as to various racist policies that prevented the acquisition of wealth.

Rerun the same multifaceted group immiseration experiment with any group, and you will get largely the same results. If blacks had immigrated to the US and been treated like, say, Norwegian immigrants, these massive developmental disparities would probably be largely absent. Although immigrants can certainly arrive with different cultural and economic averages that can manifest in some group-level differences, given the particular traits needed to succeed under different cultural circumstances, the massive differences in flourishing between black and white Americans are certainly impacted by our history around race.

In the US, discrimination against blacks has historically been orders of magnitude more profound than discrimination against other ethnic groups. Even without the racist post-hoc justifications of the practice, slavery would have had group-level ramifications on its own, given the near total lack of wealth held by blacks in 1865. Add a century of segregation and racism and you have a situation unmatched in its capacity to reproduce group-level generational misery.

This empirical claim about upstream group-level causation does not necessarily imply specific downstream personal or policy solutions. In fact, we need to consider a wider range of possibilities for reducing group-level suffering.

Where CRT runs into serious conceptual trouble, though, is in its second central claim.

Claim Two: All Disparities Are the Result of Continuing Racism

The second claim is that, because these disparities were set in motion by America's reprehensible racial history, each of them is literally caused by this history in both the group and the individual instance. Every disparity observed today stems from racism and white supremacy. Those who fail to seek a forced repair of the disparity are guilty of racism and of perpetuating white supremacy. Any judgement, system or policy that perpetuates a disparity that can be traced to a racist past is itself white supremacist and racist. Since racism is the underlying cause of all disparities, large and small, insufficient alarm and concern at these disparities is also racist.

This second claim allows anti-racist ideology to be weaponised by both moralists and authoritarians.

This presents a dilemma: if racist policies have resulted in disparate flourishing metrics, why not address these disparities in every arena in which they exist?

The error here is imagining that group disparities continue to be neatly tied to the racism that set them in motion. This leads to a strange obsession with the disparities themselves and not their upstream, proximate causes, which at the individual level are not racially unique.

Conservative economist Glenn Loury has convincingly argued that present disparities are the result of developmental challenges that may have arisen as a consequence of racism, but no longer depend on it. Leftist political scientist Adolph Reed Jr. has reached a similar conclusion, from a Marxist perspective: the developmental problems of the black community are simply the result of greater exposure to a destructive political economy that can handicap anyone’s flourishing. While this greater exposure owes its origins to racism, Reed argues that the political economy itself, not black identity, should be the focus of policy efforts, since that same political economy can be the source of misery for anyone.

Despite their ideological differences, Loury and Reed have hit on an important point: disparities, rather than being independent variables that prove racism, are the result of experiences that can cause anyone suffering. The fact that blacks suffer more from them originated in racism but is no longer tied to it.

Imagine a university that sincerely wants to reflect American demographics by having 14% of its faculty and students be the descendants of slaves. What do we do with the fact that being a successful student or faculty member requires human capital that our racial history has distributed unequally? How do you address a disparity in flourishing when there is a disparity in the human capital required for flourishing? Do we simply nullify those requirements and denounce them as racist, as CRT advocates do? Or do we give up entirely and say it’s all in the past and there’s nothing we can do, and focus solely on individual merit, as staunchly colour-blind meritocrats and opportunistic racists do?

A Better Definition of Systemic Racism

The unique history of blacks in the United States has left them more exposed to political, economic and developmental problems that can immiserate anyone. The best way to address this is to concentrate on the economic and developmental problems more broadly, and in so doing address the racial disparity without overtly racializing either problems or solutions.

by Brian Erb, Areo |  Read more:
Image: uncredited
[ed. See also: Creating an Inhabitable World for Humans Means Dismantling Rigid Forms of Individuality (Time).]

Saturday, April 17, 2021


via:

Manoucher Yektai, Untitled (Still Life), 1969 

The Blood-Clot Problem Is Multiplying

For weeks, Americans looked on as other countries grappled with case reports of rare, sometimes fatal blood abnormalities among those who had received the AstraZeneca vaccine against COVID-19. That vaccine has not yet been authorized by the FDA, so restrictions on its use throughout Europe did not get that much attention in the United States. But Americans experienced a rude awakening this week when public-health officials called for a pause on the use of the Johnson & Johnson vaccine, after a few cases of the same unusual blood-clotting syndrome turned up among the millions of people in the country who have received it.

The world is now engaged in a vaccination program unlike anything we have seen in our lifetimes, and with it, unprecedented scrutiny of ultra-rare but dangerous side effects. An estimated 852 million COVID-19 vaccine doses have been administered across 154 countries, according to data collected by Bloomberg. Last week, the European Medicines Agency, which regulates medicines in the European Union, concluded that the unusual clotting events were indeed a side effect of the AstraZeneca vaccine; by that point, more than 220 cases of dangerous blood abnormalities had been identified. Only half a dozen cases have been documented so far among Americans vaccinated with the Johnson & Johnson vaccine, and a causal link has not yet been established. But the latest news suggests that the scope of this problem might be changing.

Whether the blood issues are ultimately linked to only one vaccine, or two vaccines, or more, it’s absolutely crucial to remember the unrelenting death toll from the coronavirus itself—and the fact that COVID-19 can set off its own chaos in the circulatory system, with blood clots showing up in “almost every organ.” That effect of the disease is just one of many reasons the European Medicines Agency has emphasized that the “overall benefits of the [AstraZeneca] vaccine in preventing COVID-19 outweigh the risks of side effects.” The same is true of Johnson & Johnson’s. These vaccines are saving countless lives across multiple continents.

But it’s also crucial to determine the biological cause of any vaccine-related blood conditions. This global immunization project presents a lot of firsts: the first authorized use of mRNA vaccines like the ones from Pfizer and Moderna; the first worldwide use of adenovirus vectors for vaccines like AstraZeneca’s, Johnson & Johnson’s, and Sputnik V; and the first attempt to immunize against a coronavirus. Which, if any, of these new frontiers might be linked to serious side effects? Which, if any, of the other vaccines could be drawn into this story, too? How can a tiny but disturbing risk be mitigated as we fight our way out of this pandemic? And what might be the implications for vaccine design in the years to come?

To answer these questions, scientists will have to figure out the biology behind this rare blood condition: what exactly causes it; when and why it happens. This is not an easy task. While the evidence available so far is fairly limited, some useful theories have emerged. The notions listed below are not all in competition with one another: Some are overlapping—or even mutually reinforcing—in important ways. And their details matter quite a bit. A better understanding of the cause of this condition may allow us to predict its reach.

by Roxanne Khamsi, The Atlantic |  Read more:
Image: DeAgostini/Getty/ Katie Martin/The Atlantic

Whose Feelings Count Most in a Pandemic?

If an alien or visitor happened to take a gander at lifestyle journalism over the past six months, they might assume that even though a lot of people are losing their jobs, waiting endlessly for unemployment, or even being evicted, the majority of the country has spent the pandemic baking bread, moving out of cities, and gazing out the window wondering if every day is Wednesday. For every story about the truly devastating impact the pandemic has had on normal life, it seems that there have been countless others that do little more than document every single concern of the upper middle class.

Lifestyle journalism catered specifically to the needs, wants, and desires of the beans-and-sourdough crowd: the same affluent workers whose jobs afforded them the flexibility to work from home. During the long, dark months of the spring, while many Americans were contending with lives lived mostly indoors, countless other people were doing the work that afforded the WFH class the freedom to worry only about how to occupy their time now that they were trapped inside.

The New York Times quickly gathered its resources to create At Home, a section of gentle lifestyle content meant to quell the anxieties of its core audience, many of whom might have already escaped New York City during the worst spring months. The landing page for the section collects the various articles written for the express purpose of soothing the frazzled nerves of its readers and states its intended purpose: “We may be venturing outside, tentatively or with purpose, but with the virus still raging, we're the safest inside,” the copy at the top of the page reads. Of course, inside was the “safest” place to be for a good long time, but even acknowledging that is a privilege. For all the Times readers who spent the spring worriedly disinfecting the groceries delivered to them by DoorDash or FreshDirect employees, there were countless other people working to make sure that the people locked in their homes, fearful of the out of doors, had food to eat. This divide was rarely noted in the lifestyle content that proliferated, most likely because it is not soothing to readers to think about the minimum-wage employee riding a bicycle through rain and sleet to deliver them a pizza.

As the pandemic unfolded, I turned to the Times for recipes, as many of my peers did, but quickly developed a one-sided adversarial relationship with the What to Cook This Week email newsletter, written mostly by Food section editor Sam Sifton. Cataloging the innermost anxieties of the upper class has always been the hidden directive of the paper's Style section, but witnessing that bleed over into the Cooking newsletter became tiresome after a while.

Consider this dispatch from the July 24 newsletter, some six months into the pandemic:
Good morning. I caught a fat porgy on a home-tied fly the other day, a blind cast into clear ocean water, streaming past boulders on an outgoing tide. It wasn’t the striped bass I was looking for, but I thought it might be good for a few tacos for dinner and that hauled me out of the rut I’ve found myself in these last few weeks. It’s been freestyle mapo tofus with ground beef and chile crisp; skillet pastas with Italian sausages and plenty of kale; crema-marinated chicken grilled and doused in lime; repeat. It gets boring, frankly.
For thousands of people who have yet to leave their neighborhoods or who have been working and running the household in a capacity that does not allow for leisurely casting a line into a clear blue ocean, Sifton’s missives are comically out of touch with other, more pressing realities like juggling childcare and a full-time job. What he and so many other writers have been working against since the pandemic started is nothing more than an exploration of what it means to be bored. Sourdough, an affectation that has largely been abandoned, was an effective way to channel anxieties about an airborne virus, but also, baking bread is nothing more than a hobby that adequately fills empty stretches of time while also making people feel productive. Baking bread for leisure is an activity that I imagine those who do it for a living, in industrial kitchens and the like, would rather not undertake. The gap between leisure and labor here is wide.

Other, more esoteric “hobbies,” like growing scallions in jam jars, were rebranded as “novel frugality” in a piece that now feels typical of the sort written during the spring and early summer. Habits like saving Ziploc bags, regrowing the aforementioned scallions, and eating the heel from a loaf of bread were the sort of penny-pinching habits reserved for the generation that survived the Great Depression, not the rest of us who have long luxuriated in the great American pastimes of consumerism and consumption, the April story at Vox implied. These habits, which are fairly normal and do not really deserve any special mention, were documented on social media and in pieces like the one that ran in Vox. Framed as an upper-class panic about safety and minimizing trips out of the house, these behaviors are unusual only because the people in question never really had to think about frugality in a concrete way. (...)

Paying close attention to lifestyle journalism over the past six months revealed that the anxieties, concerns, and fears being documented are purely those of Richard Florida's “creative class”—upwardly mobile individuals working in vaguely creative sectors who mostly congregate in cities like New York and San Francisco. These individuals value the sorts of amenities that make a city feel superior to a suburb: museums, bars, restaurants, and the ability to find a decent heirloom tomato at the height of summer. It's worth noting that these concerns are, in the grand scheme of things, first-world problems. The trouble is that when these issues are given top billing, they appear to be the only issues that really matter. Carefully documenting the vagaries of the upper class and expecting their anxieties, hobbies, and worries to be representative of the entirety of society is a tale as old as time.

Giving space to the weird quarantine quirk that you and maybe three other people you're friends with share isn't self-aware—it's simply elevating an inside joke or observation made between friends by using the platform afforded to you and presenting it as a matter of course rather than an anomaly. Much like the case of the Amazon coat, which appeared in the Times Style section in November 2019, the small observations in and around the writer's friend groups are not representative of the experiences of others, and it is presumptuous to assume that just because something is happening to you, the experience is universal.

by Megan Reynolds, Jezebel |  Read more:
Image: Chelsea Beck

Friday, April 16, 2021

Making Sense of the ‘Border Crisis’

You may have heard in the news recently that there is a Crisis At The Border. Huge numbers of people are now clamoring at the southern border, many of them unaccompanied children. As described by people on the right, this is a crisis caused by lax enforcement. Republican politicians like Tom Cotton and “centrist” commentators like Fareed Zakaria have argued that these increased migration numbers are due to the Biden administration’s softening of (as Zakaria puts it) Trump’s “practical policies” at the border. The examples they cite include:

  • The Migrant Protection Protocols (MPP)/Remain in Mexico program—required tens of thousands of asylum-seekers to wait in dangerous Mexican border towns, without housing, healthcare, or legal help, constantly vulnerable to a booming kidnapping-for-ransom industry, while their cases proceeded before U.S. border judges
  • The Safe Third Country Transit Ban—blocked virtually all migrants at the southern border from obtaining asylum if they had passed through any third country on their way to the U.S.
  • Various short-lived agreements with countries like Guatemala and Honduras—incentivized places designated by our government as “safe third countries” for asylum-seekers to accept planeloads of migrants apprehended at our southern border, despite the large numbers of asylum-seekers fleeing those same countries.
This narrative portrays a Biden administration that has invited an uncontrollable tsunami of immigration by breaking radically with the enforcement policies of his predecessor.

Meanwhile, many people on the left have agreed that there is currently a “crisis,” not because of the increased border numbers in and of themselves, but because of the cruel and unsafe conditions under which the arriving migrants are being detained. New images have emerged of children huddled inside foil wrappings at the Donna tent facility in Texas, packed into cages made of chain-link fencing, with little apparent regard for social distancing. These photos of “kids in cages” under Biden are visually identical to the photos of “kids in cages” that once whipped Democrats into a righteous fury against Trump: some people have denounced the Biden administration as no better than Trump, while others have tried to distinguish Biden's policies from Trump's. Alexandria Ocasio-Cortez, for example, has been taking heat from the left for putting out a video message warning against drawing “false equivalencies” between the Trump administration's systematic separation of children from their parents at the border from April to June 2018, and the Biden administration's detention of children under deplorable conditions at the border now. Among non-Republicans, we thus have competing narratives: that Biden is managing the crisis as well as he can under difficult circumstances, and that Biden is in fact cynically employing the exact same enforcement tactics as Trump, knowing that partisan hypocrisy will cause his supporters to make excuses for him.

Let’s first ask ourselves: is there a Crisis At The Border? On the one hand—yes. There is always a crisis at the border, in the sense that there are always people trying to migrate across the border, and we always have huge amounts of state firepower directed at making that process as miserable and unsafe for migrants as possible. But “crisis” isn’t really the most accurate word to describe the situation, because it implies that we’re talking about a sudden, alarming deviation from a status quo. In fact, these conditions are the status quo, and have been for several decades. When the border is suddenly in the news, there is usually some weird manufacturing of consent going on, and I don’t think it’s always easy for even well-intentioned people to understand the trajectory of the opinions that these crisis narratives drive them to reflexively adopt.

To illustrate what I mean, let’s take a couple examples of Border Crises in relatively recent memory. People may remember the media frenzy about a migration “surge” at the border in 2014, during Obama’s second term. In fact, numbers-wise, 2014 wasn’t really a remarkable year. There were 486,651 apprehensions at the border, which was somewhat higher than the previous year’s total of 414,397, but considerably below the annual averages for 2000-2009, when border apprehensions of 1 million a year or more were typical. What was different was that of those 2014 apprehensions, an atypically high percentage were children and families, mostly from Central America. Not wanting to deal with the logistical, legal, and political hassle of increased numbers of children at the border, the Obama administration began capturing and interning migrant families en masse, for the express purpose of deporting them as rapidly as possible, in what President Obama called “an aggressive deterrence strategy.” Characterizing a demographic shift within otherwise typical border numbers as a “crisis” or a “surge” was a conscious political choice by the Obama administration, allowing them to justify draconian enforcement against asylum-seeking families as a necessary evil, even as the administration continued to claim that its overall enforcement strategy was aimed at “felons, not families.” Even though the Obama administration’s intended policy of indefinite detention of families at the border was ultimately blocked, detaining families who presented at the border to seek asylum nevertheless became normalized. This has resulted in a family internment system at the border that’s lasted up to the present day.

A more recent “border crisis” took place under the Trump administration in late 2018 into the spring of 2019, when the Department of Homeland Security (DHS) repeatedly claimed that the numbers of people at the border were so huge and unmanageable that it had no place to safely house people while they were processed. DHS forced suffering migrants to wait in highly visible public locations, like beneath the port of entry bridge in El Paso, while loudly proclaiming that it lacked the resources to humanely deal with the problem. These repeated claims that DHS facilities lacked bedspace were actually lies. As advocates at the border pointed out, the Trump administration temporarily emptied out numerous detention centers during this exact period, and CBP officials have since admitted that they were instructed to falsely tell people approaching the border that they had no space to process them for asylum. At the time, however, mainstream media outlets were entirely credulous toward DHS's self-serving statements about a “crisis” throughout the fall and spring, and ran stories uncritically regurgitating this narrative. In fact, the Trump administration was deliberately inflating this “crisis” in order to set the stage for the rollout of some of its most ambitiously cruel policies in the name of “border control”—like the Remain in Mexico program, the asylum ban, and the safe third country agreements. (The systematic family separations that people associate most strongly with Trump were an experiment that lasted a few months in 2018 and then ceased; these other policies, although they made less of a splash in the news, had much longer lifespans and affected tens of thousands more migrants.)

This is all to say that Crisis At The Border narratives are often pure media creations for specific political purposes, and we should always be wary of unconsciously accepting that framing when it's presented to us. For a good illustration of why the language of border crisis can be unhelpful even when used by well-intentioned people, we have only to look to the summer of 2019, when—hard on the heels of about eight months of crisis messaging by the Trump administration—the public became extremely angry about the horrific conditions under which migrants, including children, were being detained after apprehension at the border. This, they proclaimed, was the real border crisis! But because a crisis is imagined to be an atypical, short-term phenomenon, requiring quick and decisive action in order to return to a “normal” state of affairs, political energy quickly coalesced around just throwing a bunch of “emergency” money at DHS to improve detention conditions at the border. This having been accomplished, the moment of rage quickly faded from public consciousness; DHS got a nice fat payout, which it used to buy Border Patrol agents some sick new dirtbikes and ATVs; and nothing else changed.

So what should we make of the current Border Crisis? First, the right-wing narrative that there's currently a “surge” caused by the Biden administration's rollback of Trump's asylum-restricting policies doesn't seem to add up. It's true that Biden has taken a couple of initial steps to roll back some of the worst parts of the Trump administration's pre-pandemic border agenda, but the numbers of people approaching the border appear to have started rising back in April 2020, well before the election. DHS currently anticipates it will apprehend 2 million immigrants at the border in 2021, which would be the highest total since 2006; but this is a speculative number based on current apprehension rates (March was an extremely high month) during a time when summary expulsions from the border have been going on for months and have stranded lots of migrants in border areas. The pandemic, together with a devastating sequence of droughts and hurricanes in Central America, has also exacerbated difficult conditions in sending countries. It's hard to imagine a universe in which this wouldn't affect the numbers of people seeking to migrate, regardless of who is president.

I do, however, think that the recently increased numbers of unaccompanied kids can be more directly tied to Biden’s enforcement choices. Currently, the Biden administration is continuing to deploy the Centers for Disease Control and Prevention (CDC) “public health” order wherever it sees fit, in order to bounce people back summarily from the border with zero due process. But unlike the Trump administration, the Biden administration has publicly stated that they won’t use the CDC order to block unaccompanied children. This is the most plausible explanation for why unaccompanied kids are now coming in higher numbers. Because single adults and even family units run the risk of being expelled directly from the border, it makes sense that kids would come to the border alone if they and their families want to ensure that they’re actually allowed in. If the Biden administration announced that it wouldn’t be applying the CDC order to anyone, I imagine we would see fewer “unaccompanied” kids. It’s true that kids who come to the border alone pose some unique challenges—the law requires the government to place unaccompanied kids in the custody of the Office of Refugee Resettlement until they can be connected to their family members in the U.S., and it does stand to reason that you can’t just release a child onto the street without identifying a caregiver—but the Biden administration’s choice to continue applying the CDC order to adults has likely played a role in increasing the numbers of kids in this situation. Changes in migration numbers and demographic composition are influenced by a whole host of push and pull factors, one component of which are the government’s own enforcement policies (as publicly stated) and practices (as actually observed by prospective border-crossers).

by Brianna Rennix, Current Affairs | Read more:
Image: David Peinado Romero (Shutterstock)

Seeing on the Far Side of the Moon

Instead of using one very large dish to collect radio waves, data from a number of radio telescopes (called an array) can be stitched together by computers into a coherent single observation. These telescopes can be located at a single site, or they can be separated by oceans. The Event Horizon Telescope (EHT), the instrument that Bouman and colleagues used to image the black hole, is actually a network of telescopes in Europe, North America, South America, Antarctica, and Hawaii. The resolution of the array is proportional not to the diameter of any one instrument, but rather to the distance between those instruments that are farthest apart. The EHT’s black hole measurement was made at a stunning resolution of 25 microarcseconds, roughly the capability from Earth to distinguish a golf ball on the moon. (...)
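To make those numbers concrete, here is a minimal back-of-the-envelope sketch in Python. It assumes the standard diffraction-limit rule of thumb θ ≈ λ/B and plugs in figures not taken from this article: the EHT's roughly 1.3 mm observing wavelength and a baseline of about Earth's diameter.

```python
import math

# Radians to microarcseconds: 1 rad = (180/pi) * 3600 arcsec, times 1e6 for micro
RAD_TO_UAS = (180 / math.pi) * 3600 * 1e6

def resolution_uas(wavelength_m: float, baseline_m: float) -> float:
    """Diffraction-limited angular resolution, theta ~ lambda / B, in microarcseconds."""
    return (wavelength_m / baseline_m) * RAD_TO_UAS

# EHT: ~1.3 mm observing wavelength, baseline roughly Earth's diameter (~12,742 km)
print(f"EHT resolution: {resolution_uas(1.3e-3, 1.2742e7):.0f} microarcseconds")  # ~21

# Sanity check: a golf ball (~43 mm wide) at the moon's distance (~384,400 km)
print(f"Golf ball on the moon: {(0.043 / 3.844e8) * RAD_TO_UAS:.0f} microarcseconds")  # ~23
```

Both values land near the article's 25-microarcsecond figure, which is why the golf-ball comparison works: it is the baseline B between the farthest dishes, not the size of any single dish, that sets the resolution.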

Space telescopes are incredible instruments. NASA’s most famous, the Hubble Space Telescope, has made numerous significant discoveries since it entered service in 1990, most famously estimating the age of the universe at 13.7 billion years, two orders of magnitude more precisely than the previous scientific estimate of 10 to 20 billion years. But Hubble operates mainly in the optical band, something that is mostly accessible from Earth. NASA’s less famous infrared instrument, the Spitzer Space Telescope, which was deactivated this year after tripling its planned design life, studied bands not observable from the ground. Its replacement, the powerful James Webb Space Telescope, is due to launch next year. It should produce even more stunning observations than Hubble when it comes online, as its sensitivity to infrared light is perfect for capturing optical waves, redshifted by the expansion of the cosmos, from some of the most distant objects in the observable universe.

But the biggest problem with these orbiting telescopes is that they cannot avail themselves of the solution used by terrestrial arrays to increase resolution—adding more telescopes and stitching the data together using computation. James Webb’s aperture is 6.5 meters in diameter, while the Event Horizon Telescope has an effective aperture the size of Earth. Space telescopes lack the power that arrays on the ground can achieve.

Astronomy, then, faces a Catch-22. Terrestrial telescopes can be built with excellent resolution thanks to aperture synthesis, but they have to cope with atmospheric interference that limits access to certain bands, as well as radio interference from human activity. Space telescopes don’t experience atmospheric interference, but they cannot benefit from aperture synthesis to boost resolution. What we need is to develop a telescope array that can marry the benefits of both: a large synthetic aperture like Earth-based arrays that is free from atmospheric and human radio interference like space telescopes.

A telescope array on the surface of the moon is the only solution. The moon has no atmosphere. Its far side is shielded from light and radio chatter coming from Earth. The far side’s ground is stable, with little tectonic activity, an important consideration for the ultra-precise positioning needed for some wavelengths. Turning the moon into a gigantic astronomical observatory would open a floodgate of scientific discoveries. There are small telescopes on the moon today, left behind from Apollo 16 and China’s Chang’e 3 mission. A full-on terrestrial-style far-side telescope array, however, is in a different class of instrument. Putting one (or more) on the moon would have cost exorbitant sums only a few years ago, but thanks to recent advances in launch capabilities and cost-reducing competition in the new commercial space industry, it is now well worth doing—particularly if NASA leverages private-sector innovation.

by Eli Dourado, Works in Progress | Read more:
Image:Antennas of the Atacama Large Millimeter/submillimeter Array (ALMA), on the Chajnantor Plateau. Credit: ESO/C. Malin

Thursday, April 15, 2021


via:
[ed. ...and about a million other songs, just rearrange as needed.]

via:

What the U.S. Got for $2 Trillion in Afghanistan

All told, the cost of nearly 18 years of war in Afghanistan will amount to more than $2 trillion. Was the money well spent?

There is little to show for it. The Taliban control much of the country. Afghanistan remains one of the world’s largest sources of refugees and migrants. More than 2,400 American soldiers and more than 38,000 Afghan civilians have died.

Still, life has improved, particularly in the country’s cities, where opportunities for education have grown. Many more girls are now in school. And democratic institutions have been built — although they are shaky at best.

Drawing on estimates from Brown University’s Costs of War Project, we assessed how much the United States spent on different aspects of the war and whether that spending achieved its aims.

$1.5 trillion waging war

When President George W. Bush announced the first military action in Afghanistan in the wake of terrorist attacks by Al Qaeda in 2001, he said the goal was to disrupt terrorist operations and attack the Taliban.

Eighteen years later, the Taliban are steadily getting stronger. They kill Afghan security force members — sometimes hundreds in a week — and defeat government forces in almost every major engagement, except when significant American air support is used against them.

Al Qaeda’s senior leadership moved to Pakistan, but the group has maintained a presence in Afghanistan and expanded to branches in Yemen, northern Africa, Somalia and Syria.

The $1.5 trillion in war spending remains opaque, but the Defense Department has declassified partial breakdowns of the three most recent years of spending.

Most of the money detailed in those breakdowns — about 60 percent each year — went to things like training, fuel, armored vehicles and facilities. Transportation, such as air and sea lifts, took up about 8 percent, or $3 billion to $4 billion a year.

$10 billion on counternarcotics

Afghanistan supplies 80 percent of the world’s heroin.

In a report last year, the Special Inspector General for Afghanistan Reconstruction described counternarcotics efforts as a “failure.” Despite billions of dollars to fight opium poppy cultivation, Afghanistan is the source of 80 percent of global illicit opium production.

Before the war, Afghanistan had almost completely eradicated opium, according to United Nations data from 1996 to 2001, when the Taliban were in power.

Today, opium cultivation is a major source of income and jobs, as well as revenue for the Taliban. Other than war expenditures, it is Afghanistan’s biggest economic activity.

$87 billion to train Afghan military and police forces

Afghan forces can’t support themselves.

One of the major goals of the American effort has been to train thousands of Afghan troops. Most of the American spending on reconstruction has gone to a fund that supports the Afghan Army and police forces through equipment, training and funding.

But nobody in Afghanistan — not the American military, and not President Ashraf Ghani’s top advisers — thinks Afghan military forces could support themselves.

The Afghan Army in particular suffers from increasing casualty rates and desertion, which means it has to train new recruits totaling at least a third of its entire force every year.

President Barack Obama had planned to hand over total responsibility for security to the Afghans by the end of 2014 and to draw down all American forces by 2016. That plan faltered when the Taliban took quick advantage and gained ground.

The American military had to persuade first President Obama, and then President Trump, to ramp up forces. Some 14,000 U.S. troops remained in the country as of this month.

$24 billion on economic development

Most Afghans still live in poverty.

War-related spending has roughly doubled the size of Afghanistan’s economy since 2007. But it has not translated into a healthy economy.

A quarter or more of Afghans are unemployed, and the economic gains have trailed off since 2015, when the international military presence began to draw down.

Overseas investors still balk at Afghanistan’s corruption — among the worst in the world, according to Transparency International, an anticorruption group — and even Afghan companies look for cheaper labor from India and Pakistan.

Hopes of self-sufficiency in the mineral sector, which the Pentagon boasted could be worth $1 trillion, have been dashed. A few companies from China and elsewhere began investing in mining, but poor security and infrastructure have prevented any significant payout.

$30 billion on other reconstruction programs

Much of that money was lost to corruption and failed projects.

American taxpayers have supported reconstruction efforts that include peacekeeping, refugee assistance and aid for chronic flooding, avalanches and earthquakes.

Much of that money, the inspector general found, was wasted on programs that were poorly conceived or riddled with corruption.

American dollars went to hospitals that treated no patients, to schools that taught no students (and some that never existed at all), and to military bases the Afghans found useless and later shuttered.

The inspector general documented $15.5 billion in waste, fraud and abuse in reconstruction efforts from 2008 through 2017.

Thanks to American spending, Afghanistan has seen improvements in health and education — but they are scant compared with international norms.

Afghan maternal mortality remains among the highest in the world, while life expectancy is among the lowest. Most girls still receive little or no schooling, and education for boys is generally poor.

$500 billion on interest

The war has been funded with borrowed money.

To finance war spending, the United States borrowed heavily and will pay more than $600 billion in interest on those loans through 2023. The rest of the debt will take years to repay.

In addition to the more than $2 trillion the American government has already spent on the war, debt and medical costs will continue long into the future.
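As a rough sanity check on that headline figure, the category totals above can be tallied directly. This is a sketch only; the underlying Costs of War estimates overlap in places, so treat it as illustrative rather than an official accounting.

```python
# Headline figures from the sections above, in billions of dollars
costs_billions = {
    "waging war": 1_500,
    "counternarcotics": 10,
    "training Afghan forces": 87,
    "economic development": 24,
    "other reconstruction": 30,
    "interest on war debt": 500,
}

total = sum(costs_billions.values())
print(f"Total: ${total:,} billion (~${total / 1000:.2f} trillion)")
# -> Total: $2,151 billion (~$2.15 trillion), consistent with "more than $2 trillion"
```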

$1.4 trillion on veterans who have fought in post-9/11 wars by 2059

Medical and disability costs will continue for decades.

More than $350 billion has already gone to medical and disability care for veterans of the wars in Iraq and Afghanistan combined. Experts say that more than half of that spending belongs to the Afghanistan effort.

The final total is unknown, but experts project another trillion dollars in costs over the next 40 years as wounded and disabled veterans age and need more services.

by Sarah Almukhtar and Rod Nordland, NY Times | Read more:
Image: Johannes Eisele / AFP / Getty via
[ed. Reproduced nearly in full (...hope the NYT doesn't make me take it down). Don't forget the other "forever war" we're currently engaged in that's equally insane and costly. See also: Leaving Afghanistan, and the Lessons of America’s Longest War (New Yorker).]

There Shouldn’t Be Vaccine Patents in a Health Crisis

The extremity of the Covid-19 vaccine apartheid cannot be overstated. As of mid-February, the United States had acquired enough vaccines for three times its total population, while in 130 countries, not a single vaccine shot had been administered. This is no accident, but the direct and long-predicted result of a vaccine production and access model tied to privatized intellectual property and entrenched medicine monopolies.

The majority of Americans want President Joe Biden to act to end this intolerable vaccine inequality. Sixty percent of U.S. voters said they wanted Biden to endorse a motion at the World Trade Organization that would waive patent barriers and other crucial intellectual property protections on Covid-19 vaccines, according to a new poll from Data for Progress and the Progressive International. This would enable a significant expansion of global production and rollout, while disrupting the extraordinary profiteering of pharmaceutical leviathans in a death-dealing pandemic.

The refusal on the part of major pharmaceutical companies and Western powers to ensure the sharing of vaccine patent and production information has been an immeasurable moral failure, not to mention a most foolish approach to a pandemic in need of a global response. The new poll also makes clear that, for Biden, blocking vaccine sharing is not even a popular position. Seventy-two percent of registered Democrats want the president to remove patent barriers to speed vaccine rollout and reduce costs for less affluent nations.

At present, WTO rules over intellectual property mean that most countries are barred from producing the leading vaccines that have been approved, including those by Pfizer, Moderna, and Johnson & Johnson, which are U.S.-produced. Last October, South Africa and India brought a proposal to the WTO for a temporary waiver that would apply to certain intellectual property on Covid-19 medical tools and technologies until global herd immunity is reached.

It garnered majority support from member states: A hundred countries support the proposal overall, and 58 governments now co-sponsor it; 375 civil society organizations, including Doctors Without Borders, Oxfam, and Amnesty International have signed a letter in support.

The waiver was blocked, however, by a small number of wealthy nations and blocs, including the U.S., the U.K., and the EU, that chose instead to leave vaccine production in the hands of only a few pharmaceutical companies, which, through public-private partnerships, have ensured priority access to the rich countries in turn.

There are no legitimate grounds for maintaining patent barriers in this health crisis unless you’re a pharmaceutical giant making billions or, of course, a Western power invested in maintaining global power through neoliberalization, market monopolies, and racialized capitalism. The strongest advocates of intellectual property protections in medicine, Bill Gates chief among them, have offered no ethical basis for the current status quo beyond vague gestures to protecting “innovation.”

Even a self-interested approach, one that sees the devastating economic possibilities of a mutating virus turning the pandemic into something endemic, should make the necessity of a patent waiver clear. The commitment to monopoly medicine is, in this sense, ideological.

The WTO proposal needs backing by a consensus of the organization's 164 members to pass. It was under President Donald Trump that the U.S. blocked the patent waiver: a move that came as no surprise for an administration of white nationalists, which proudly left the World Health Organization. A change of tack by the Biden administration, which rejoined the WHO on Day One, could go a long way in pushing other wealthy countries to follow suit. (...)

Sen. Bernie Sanders, I-Vt., chair of the Senate Budget Committee, responded to the poll saying the U.S. should be “leading the global effort to end the coronavirus pandemic.” According to Sanders, “a temporary WTO waiver, which would enable the transfer of vaccine technologies to poorer countries, is a good way to do that.” More than 60 lawmakers have added their signature to a letter pushing Biden to save lives through a global vaccination drive.

by Natasha Lennard, The Intercept | Read more:
Image: Jessica Rinaldi/The Boston Globe via Getty Images
[ed. See also: Let Other Countries Copy the Covid Vaccines; and How Bill Gates Impeded Global Access to Covid Vaccines (TNR).]

US Congress: A Coin-Operated Stalemate Machine (and Whither AOC?)

Yves here. Tom Neuburger gives a hard look at AOC’s recent donations to corporate Democrats and tries to ferret out what she intended to accomplish.

Tom is at a loss to understand why AOC chose the party members she did. I am at a loss to understand why she thought $5,000 donations would have made any difference to the recipients even if they had been on board with taking funds from her. As I am sure readers know, there’s a dark art as to how heavyweight bundlers and donors work around formal contribution limits.

And on top of that, Congressional Democrats run a pay-to-play operation. Kicking in enough money to the DCCC is the cost of entry for getting House committee leadership positions. We explained this back in 2011, via the work of Tom Ferguson, in Congress is a “Coin Operated Stalemate Machine.” I strongly urge you to read the entire post. Key section:
A new article by Ferguson in the Washington Spectator sheds more light on this corrupt and defective system. Partisanship and deadlocks are a direct result of the increased power of a centralized funding apparatus. It’s easy to raise money for grandstanding on issues that appeal to well-heeled special interests, so dysfunctional behavior is reinforced.

Let’s first look at how crassly explicit the pricing is. Ferguson cites the work of Marian Currander on how it works for the Democrats in the House of Representatives:
Under the new rules for the 2008 election cycle, the DCCC [Democratic Congressional Campaign Committee] asked rank-and-file members to contribute $125,000 in dues and to raise an additional $75,000 for the party. Subcommittee chairpersons must contribute $150,000 in dues and raise an additional $100,000. Members who sit on the most powerful committees … must contribute $200,000 and raise an additional $250,000. Subcommittee chairs on power committees and committee chairs of non-power committees must contribute $250,000 and raise $250,000. The five chairs of the power committees must contribute $500,000 and raise an additional $1 million. House Majority Leader Steny Hoyer, Majority Whip James Clyburn, and Democratic Caucus Chair Rahm Emanuel must contribute $800,000 and raise $2.5 million. The four Democrats who serve as part of the extended leadership must contribute $450,000 and raise $500,000, and the nine Chief Deputy Whips must contribute $300,000 and raise $500,000. House Speaker Nancy Pelosi must contribute a staggering $800,000 and raise an additional $25 million.
Ferguson teases out the implications:
Uniquely among legislatures in the developed world, our Congressional parties now post prices for key slots on committees. You want it — you buy it, runs the challenge. They even sell on the installment plan: You want to chair an important committee? That’ll be $200,000 down and the same amount later, through fundraising…..

The whole adds up to something far more sinister than the parts. Big interest groups (think finance or oil or utilities or health care) can control the membership of the committees that write the legislation that regulates them. Outside investors and interest groups also become decisive in resolving leadership struggles within the parties in Congress. You want your man or woman in the leadership? Just send money. Lots of it….

The Congressional party leadership controls the swelling coffers of the national campaign committees, and the huge fixed investments in polling, research, and media capabilities that these committees maintain — resources the leaders use to bribe, cajole, or threaten candidates to toe the party line… Candidates rely on the national campaign committees not only for money, but for message, consultants, and polling they need to be competitive but can rarely afford on their own…

This concentration of power also allows party leaders to shift tactics to serve their own ends…. They push hot-button legislative issues that have no chance of passage, just to win plaudits and money from donor blocs and special-interest supporters. When they are in the minority, they obstruct legislation, playing to the gallery and hoping to make an impression in the media…

The system… ensures that national party campaigns rest heavily on slogan-filled, fabulously expensive lowest-common-denominator appeals to collections of affluent special interests. The Congress of our New Gilded Age is far from the best Congress money can buy; it may well be the worst. It is a coin-operated stalemate machine that is now so dysfunctional that it threatens the good name of representative democracy itself.
If that isn’t sobering enough, a discussion after the Ferguson article describes the mind-numbing amount of money raised by the members of the deficit-cutting super committee. In addition, immediately after being named to the committee, several members launched fundraising efforts that amounted to unabashed bribe-seeking. But since the elites in this country keep themselves considerably removed from ordinary people, and what used to be considered corruption in their cohort is now business as usual, nary an ugly word is said about these destructive practices.

So as much as AOC has seemed disappointing of late, the overwhelming majority of voters have no clue as to what she is up against.

by Yves Smith and Thomas Neuburger, Naked Capitalism | Read more:
Image: Seth Wenig/AP Photo via Politico
[ed. A bit of inside baseball here for political junkies. Apparently AOC gave $5,000 to various Democratic members of Congress to help with their campaigns, a few of them DINOs (Dems in name only), who see any association with her as radioactive in their conservative-leaning districts. So they've decided to reject or return the funds. The question is: why did AOC do this (and with such meager amounts)? Is she gravitating toward the middle and becoming more of an establishment player? Trying to mend fences? Or, as one commenter suggested, playing "eleventy-dimensional chess" and using the money to shine a light on people who've never been exposed to this kind of scrutiny before? Who knows? But as this post indicates, funding is a sensitive and intricate process. By the way, the numbers above are for Democrats. I'd bet the ones for Republicans are equally stunning, if not significantly worse (I'm not going to check). Also, these figures are from 2011; Citizens United undoubtedly made the process (and the money involved) even more obscene.]

The Decay of Cinema

This deep into the coronavirus pandemic, how many cinephiles haven’t yet got word of the bankruptcy or shuttering of a favorite movie theater? Though the coronavirus hasn’t quite killed filmgoing dead — at least not everywhere in the world — the culture of cinema itself had been showing signs of ill health long before any of us had heard the words “social distancing.” The previous plague, in the view of Martin Scorsese, was the Hollywood superhero-franchise blockbuster. “That’s not cinema,” the auteur-cinephile told Empire magazine in 2019. “Honestly, the closest I can think of them, as well made as they are, with actors doing the best they can under the circumstances, is theme parks.”

This past March, Scorsese published an essay in Harper’s called “Il Maestro.” Ostensibly a reflection on the work of Federico Fellini, it also pays tribute to Fellini’s heyday, when on any given night in New York a young movie fan could find himself torn between screenings of the likes of La Dolce Vita, François Truffaut’s Shoot the Piano Player, Andrzej Wajda’s Ashes and Diamonds, John Cassavetes’ Shadows, and the work of other masters besides. This was early in the time when, as New Yorker critic Anthony Lane puts it, “adventurous moviegoing was part of the agreed cultural duty, when the duty itself was more of a trip than a drag, and when a reviewer could, in the interests of cross-reference, mention the names ‘Dreyer’ or ‘Vigo’ without being accused of simply dropping them for show.”

Alas, writes Scorsese, the art of cinema today is “systematically devalued, sidelined, demeaned, and reduced to its lowest common denominator, ‘content.’” Video essayist Daniel Simpson of Eyebrow Cinema calls this lament “more than an artist railing against a businessman’s terminology, but a yearning for a time when movies used to be special in and of themselves, not just as an extension of a streaming service.” In “The Decay of Cinema,” Simpson connects this cri de cinephilic coeur by the man who directed Taxi Driver and GoodFellas to a 25-year-old New York Times opinion piece by Susan Sontag. A midcentury-style film devotee if ever there was one, Sontag mourns “the conviction that cinema was an art unlike any other: quintessentially modern; distinctively accessible; poetic and mysterious and erotic and moral — all at the same time.”

Some may object to Sontag’s claim that truly great films had become “violations of the norms and practices that now govern movie making everywhere.” Just two weeks after her piece ran, Simpson points out, the Coen brothers’ Fargo opened; soon to come were acclaimed pictures by Mike Leigh and Lars von Trier, and the next few years would see the emergence of Wes Anderson and Paul Thomas Anderson both. But what of today’s masterpieces, like Chung Mong-hong’s A Sun? Though released before the havoc of COVID-19, it has nevertheless — “without a franchise, rock-star celebrities, or an elevator-pitch high concept” — languished on Netflix. And as for an event of such seemingly enormous cinematic import as the completion of Orson Welles’ The Other Side of the Wind three decades after his death, the result wound up “simply dumped on the platform with everything else.”

In a time like this, when the many stuck at home have few options besides streaming services, one hesitates to accuse Netflix of killing either cinema or cinephilia. And yet Simpson sees a considerable difference between being a cinephile and being a “user,” a label that suggests “a customer to be satiated” (if not an addict to be granted a fix of his habit-forming commodity). “There’s only one problem with home cinema,” writes Lane. “It doesn’t exist.” Choice “pretty much defines our status as consumers, and has long been an unquestioned tenet of the capitalist feast, but in fact carte blanche is no way to run a cultural life (or any kind of life, for that matter).” If we continue to do our viewing in algorithm-padded isolation, we surrender what Simpson describes as “the human connection to the film experience” — one of the things that, when all the social distancing ends, even formerly casual moviegoers may find themselves desperately craving.

by Colin Marshall, Open Culture | Read more:
Image: The Decay of Cinema

Mick Jagger & Dave Grohl


[ed. Everybody's hit the wall.]

Wednesday, April 14, 2021

via:


Weeds: dandelion
via:

Two Paths to the Future

The world of 2120 is going to be radically different. In exactly what way I cannot say, any more than a peasant in 1500 could predict the specifics of the industrial revolution. But it almost certainly involves unprecedented levels of growth as the constraints of the old paradigm are dissolved under the new one. One corollary to this view is that our long-term concerns (global warming, dysgenics, aging societies) are only relevant to the extent that they affect the arrival of the next paradigm.

There are two paths to the future: silicon, and DNA. Whichever comes first will determine how things play out. The response to the coronavirus pandemic has shown that current structures are doomed to fail against a serious adversary: if we want to have a chance against silicon, we need better people. That is why I think any AI "control" strategy not predicated on transhumanism is unserious.

Our neolithic forefathers could not have divined the metallurgical destiny of their descendants, but today, perhaps for the first time in universal history, we can catch a glimpse of the next paradigm before it arrives. If you point your telescope in exactly the right direction and squint really hard, you can just make out the letters: "YOU'RE FUCKED".

Artificial Intelligence
Nothing human makes it out of the near-future.
There are two components to forecasting the emergence of superhuman AI. One is easy to predict: how much computational power we will have. The other is very difficult to predict: how much computational power will be required. Good forecasts are based either on past data or on generalizations from theories constructed from past data. Because of their novelty, paradigm shifts are difficult to predict. We're in uncharted waters here. But there are two sources of information we can use: biological intelligence (brains, human or otherwise), and progress in the limited forms of artificial intelligence we have created thus far.

ML progress

GPT-3 forced me to start taking AI concerns seriously. Two features make GPT-3 a scary sign of what's to come: scaling, and meta-learning. Scaling refers to gains in performance from increasing the number of parameters in a model. Here's a chart from the GPT-3 paper:


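The relationship in that chart is well approximated by a power law in parameter count. As a rough illustration (a minimal sketch: the exponent and constant below are approximately those reported in Kaplan et al.'s 2020 scaling-laws paper, and the formula ignores data and compute bottlenecks):

```python
# Sketch of the parameter-scaling power law, L(N) ~ (N_c / N) ** alpha_N.
# Constants are approximate values from Kaplan et al. (2020); this is an
# illustration of the shape of the curve, not a re-derivation of the paper.
ALPHA_N = 0.076   # scaling exponent for parameters (approximate)
N_C = 8.8e13      # critical parameter count (approximate)

def predicted_loss(n_params: float) -> float:
    """Predicted cross-entropy loss (nats per token) at n_params parameters."""
    return (N_C / n_params) ** ALPHA_N

# ~GPT-2, ~GPT-3, and a hypothetical 100x GPT-3
for n in (1.5e9, 1.75e11, 1.75e13):
    print(f"{n:.2e} params -> predicted loss {predicted_loss(n):.2f}")
```

The point is not the exact numbers but the absence of any bend: each multiplicative increase in parameters buys a predictable multiplicative decrease in loss.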
Meta-learning refers to the ability of a model to learn how to solve novel problems. GPT-3 was trained purely on next-word prediction, but developed a wide array of surprising problem-solving abilities, including translation, programming, arithmetic, literary style transfer, and SAT analogies. Here's another GPT-3 chart:


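Concretely, meta-learning shows up at the prompt level: give the model a few worked examples and it infers the task, even though it is still only predicting the next token. A minimal illustration (the prompt format follows the translation examples in the GPT-3 paper; no API call is shown, since the exact client is beside the point):

```python
# A few-shot prompt of the kind used in the GPT-3 paper. The model was
# trained only on next-word prediction, but the in-prompt examples induce
# it to behave as a translator: a plausible completion here is " fromage".
prompt = """Translate English to French:

sea otter => loutre de mer
peppermint => menthe poivrée
cheese =>"""

print(prompt)
```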
Put these two together and extrapolate, and it seems like a sufficiently large model trained on a diversity of tasks will eventually be capable of superhuman general reasoning abilities. As gwern puts it:
More concerningly, GPT-3’s scaling curves, unpredicted meta-learning, and success on various anti-AI challenges suggests that in terms of futurology, AI researchers’ forecasts are an emperor sans garments: they have no coherent model of how AI progress happens or why GPT-3 was possible or what specific achievements should cause alarm, where intelligence comes from, and do not learn from any falsified predictions.

GPT-3 is scary because it’s a magnificently obsolete architecture from early 2018 (used mostly for software engineering convenience as the infrastructure has been debugged), which is small & shallow compared to what’s possible, on tiny data (fits on a laptop), sampled in a dumb way, its benchmark performance sabotaged by bad prompts & data encoding problems (especially arithmetic & commonsense reasoning), and yet, the first version already manifests crazy runtime meta-learning—and the scaling curves still are not bending.
Still, extrapolating ML performance is problematic because it's inevitably an extrapolation of performance on a particular set of benchmarks. Lukas Finnveden, for example, argues that a model similar to GPT-3 but 100x larger could reach "optimal" performance on the relevant benchmarks. But would optimal performance correspond to an agentic, superhuman, general intelligence? What we're really interested in are surprising performances in hard-to-measure domains, long-term planning, etc. So while these benchmarks might be suggestive (especially compared to human performance on the same benchmark), and may offer some useful clues in terms of scaling performance, I don't think we can rely too much on them—the error bars are wide in both directions. (...)

How much power will we have?

Compute use has increased by about 10 orders of magnitude in the last 20 years, and that growth has accelerated lately, currently doubling approximately every 3.5 months. A big lesson from the pandemic is that people are bad at reasoning about exponential curves, so let's put it in a different way: training GPT-3 cost approximately 0.000005% of world GDP. Go on, count the zeroes. Count the orders of magnitude. Do the math! There is plenty of room for scaling, if it works.
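For those who would rather check than count zeroes, the arithmetic is a few lines (a sketch: the ~$5 million training-cost estimate and ~$85 trillion world GDP are rough outside figures, not from the essay):

```python
# Back-of-the-envelope check of the claims above. The dollar figures are
# rough assumptions: ~$5M to train GPT-3, ~$85T world GDP (2020-ish).
training_cost = 5e6
world_gdp = 8.5e13

print(f"Training cost as % of world GDP: {training_cost / world_gdp * 100:.7f}%")
# -> ~0.0000059%, i.e. roughly the essay's 0.000005%

# Doubling every 3.5 months compounds to about an order of magnitude per year.
doublings_per_year = 12 / 3.5
print(f"Compute growth per year: {2 ** doublings_per_year:.1f}x")  # ~10.8x
```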

The main constraint is government willingness to fund AI projects. If they take it seriously, we can probably get 6 orders of magnitude just by spending more money. GPT-3 took 3.14e23 FLOPs to train, so if strong AGI can be had for less than 1e30 FLOPs it might happen soon. Realistically any such project would have to start by building fabs to make the chips needed, so even if we started today we're talking 5+ years at the earliest.
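A quick check of the headroom implied by those two numbers (taking the essay's 3.14e23 figure and the hypothetical 1e30 threshold at face value):

```python
import math

gpt3_flops = 3.14e23   # training compute for GPT-3, per the essay
agi_budget = 1e30      # hypothetical "strong AGI" threshold

gap = math.log10(agi_budget / gpt3_flops)
print(f"Gap: {gap:.1f} orders of magnitude")  # ~6.5

# Roughly 6 of those orders could come from spending alone (the essay's
# guess), leaving the remainder to hardware improvements and time.
```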

Looking into the near future, I'd predict that by 2040 we could squeeze another 1-2 orders of magnitude out of hardware improvements. Beyond that, growth in available compute would slow down to the level of economic growth plus hardware improvements.

Putting it all together

The best attempt at AGI forecasting I know of is Ajeya Cotra's heroic 4-part 168-page Forecasting TAI with biological anchors. She breaks down the problem into a number of different approaches, then combines the resulting distributions into a single forecast. The resulting distribution is appropriately wide: we're not talking about ±15% but ±15 orders of magnitude. (...)
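The structure of such a forecast is easy to sketch even though the inputs are the hard part: place a distribution over the training compute each biological anchor implies, then mix the distributions. A toy version (the means, spreads, and weights below are invented for illustration; they are not Cotra's numbers):

```python
from statistics import NormalDist

# Toy bio-anchors-style mixture in log10(FLOPs) space. Each anchor is
# (mean log10 FLOPs, std dev, weight). All numbers are invented
# placeholders, NOT Cotra's actual estimates.
anchors = [
    (27.0, 2.5, 0.3),  # a "short-horizon" style anchor (hypothetical)
    (33.0, 3.0, 0.5),  # a "medium-horizon" style anchor (hypothetical)
    (39.0, 3.0, 0.2),  # an "evolution" style anchor (hypothetical)
]

def p_requirement_below(log10_flops: float) -> float:
    """P(required training compute <= 10**log10_flops) under the mixture."""
    return sum(w * NormalDist(mu, sd).cdf(log10_flops) for mu, sd, w in anchors)

for budget in (30, 35, 40):
    print(f"P(requirement <= 1e{budget} FLOPs) = {p_requirement_below(budget):.2f}")
```

The width of the real forecast comes from the same place it does in this toy: disagreement between anchors and large spreads within each one.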

Metaculus has a couple of questions on AGI, and the answers are quite similar to Cotra's projections. This question is about "human-machine intelligence parity" as judged by three graduate students; the community gives a 54% chance of it happening by 2040. This one is based on the Turing test, the SAT, and a couple of ML benchmarks, and the median prediction is 2038, with an 83% chance of it coming before 2100.(...)

Both extremes should be taken into account: we must prepare for the possibility that AI will arrive very soon, while also tending to our long-term problems in case it takes more than a century.

Human Enhancement
All things change in a dynamic environment. Your effort to remain what you are is what limits you.
The second path to the future involves making better humans. Ignoring the AI control question for a moment, better humans would be incredibly valuable to the rest of us purely for the positive externalities of their intelligence: smart people produce benefits for everyone else in the form of greater innovation, faster growth, and better governance. The main constraint to growth is intelligence, and small differences cause large effects: a standard deviation in national averages is the difference between a cutting-edge technological economy and not having reliable water and power. While capitalism has ruthlessly optimized the productivity of everything around us, the single most important input—human labor—has remained stagnant. Unlocking this potential would create unprecedented levels of growth.

Above all, transhumanism might give us a fighting chance against AI. How likely are they to win that fight? I have no idea, but their odds must be better than ours. The pessimistic scenario is that enhanced humans are still limited by numbers and meat, while artificial intelligences are only limited by energy and efficiency, both of which could potentially scale quickly.

The most important thing to understand about the race between DNA and silicon is that there's a long lag to human enhancement. Imagine the best-case scenario in which we start producing enhanced humans today: how long until they start seriously contributing? 20, 25 years? They would not be competing against the AI of today, but against the AI from 20-25 years in the future. Regardless of the method we choose, if superhuman AGI arrives in 2040, it's already too late. If it arrives in 2050, we have a tiny bit of wiggle room.
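The race logic fits in a few lines. A toy Monte Carlo (the ~25-year lag is the essay's; the AGI-arrival distribution is an invented stand-in, not a forecast):

```python
import random

# Toy model of the DNA-vs-silicon race. Enhancement lag per the essay;
# the AGI arrival distribution (mean 2050, sd 15) is a made-up placeholder.
random.seed(0)
START_YEAR, LAG = 2021, 25
contribution_year = START_YEAR + LAG  # enhanced humans matter from ~2046

trials = 100_000
lost = sum(random.gauss(2050, 15) < contribution_year for _ in range(trials))
print(f"P(AGI arrives before enhanced humans can contribute): {lost / trials:.2f}")
```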

Let's take a look at our options.

Normal Breeding with Selection for Intelligence (...)
Gene Editing (...)
Cyborgs (...)
Iterated Embryo Selection (...)
Cloning (...)

A Kind of Solution
I visualise a time when we will be to robots what dogs are to humans, and I’m rooting for the machines.
Let's revisit the AI timelines and compare them to transhumanist timelines.
  • If strong AGI can be had for less than 1e30 FLOPs, it's almost certainly happening before 2040—the race is already over.
  • If strong AGI requires more than 1e40 FLOPs, people alive today probably won't live to see it, and there's ample time for preparation and human enhancement.
  • If it falls within that 1e30-1e40 range (and our forecasts, crude as they are, indicate that's probable) then the race is on (see the sketch below).
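As a sketch of how those brackets fall out of the numbers (the spending boost and long-run growth rate below are assumptions layered on the essay's figures, not forecasts):

```python
import math

largest_run = 3.14e23        # GPT-3's training compute, per the essay
spend_boost = 1e6            # ~6 orders of magnitude from money alone (essay's guess)
log10_growth_per_year = 0.1  # assumed long-run growth: ~1 order of magnitude per decade

def arrival_year(required_flops: float, year0: int = 2021) -> float:
    """Year a run of required_flops becomes affordable, under the assumptions above."""
    affordable = largest_run * spend_boost   # what a crash program could buy
    lead_time = 5                            # fab-building lag, per the essay
    if required_flops <= affordable:
        return year0 + lead_time
    extra_years = math.log10(required_flops / affordable) / log10_growth_per_year
    return year0 + lead_time + extra_years

for req in (1e30, 1e35, 1e40):
    print(f"1e{int(round(math.log10(req)))} FLOPs -> ~{arrival_year(req):.0f}")
```

Under these assumptions 1e30 arrives around 2030 while 1e40 slips past 2100, which is the shape of the brackets above.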
Even if you think there's only a small probability of this being right, it's worth preparing for. Even if AGI is a fantasy, transhumanism is easily worth it purely on its own merits. And if it helps us avoid extinction at the hand of the machines, all the better!

So how is it actually going to play out? Expecting septuagenarian politicians to anticipate wild technological changes and do something incredibly expensive and unpopular today for a hypothetical benefit that may or may not materialize decades down the line—is simply not realistic. Right now from a government perspective these questions might as well not exist; politicians live in the current paradigm and expect it to continue indefinitely. On the other hand, the Manhattan Project shows us that immediate existential threats have the power to get things moving very quickly. In 1939, Fermi estimated a 10% probability that a nuclear bomb could be built; 6 years later it was being dropped on Japan.

by Alvaro de Menard, Fantastic Anachronism | Read more:
Image: via

[ed. Not a very encouraging prospect. Reminds me of the old Woody Allen quote: “More than any other time in history, mankind faces a crossroads. One path leads to despair and utter hopelessness. The other, to total extinction. Let us pray we have the wisdom to choose correctly.” For more scary predictions, see: Book Review: The Precipice (SSC).]

Terms:
GPT-3: an autoregressive language model that uses deep learning to produce human-like text. It is the third-generation language prediction model in the GPT-n series (and the successor to GPT-2) created by OpenAI, a San Francisco-based artificial intelligence research laboratory.
AGI (Artificial General Intelligence): the hypothetical ability of an intelligent agent to understand or learn any intellectual task that a human being can. It is a primary goal of some artificial intelligence research and a common topic in science fiction and futures studies. AGI is also referred to as strong AI, full AI, or general intelligent action.
FLOPS: floating point operations per second, a measure of computer performance useful in fields of scientific computation that require floating-point calculations; for such cases it is a more accurate measure than instructions per second. (In the essay above, “FLOPs” denotes a total count of floating-point operations rather than a rate.)
Transhumanism: a philosophical movement whose proponents advocate and predict the enhancement of the human condition by developing and making widely available sophisticated technologies able to greatly enhance longevity, mood, and cognitive abilities. (Wikipedia)