Thursday, April 15, 2021

Wednesday, April 14, 2021

Weeds: dandelion

Two Paths to the Future

The world of 2120 is going to be radically different. In exactly what way I cannot say, any more than a peasant in 1500 could predict the specifics of the industrial revolution. But it almost certainly involves unprecedented levels of growth as the constraints of the old paradigm are dissolved under the new one. One corollary to this view is that our long-term concerns (global warming, dysgenics, aging societies) are only relevant to the extent that they affect the arrival of the next paradigm.

There are two paths to the future: silicon, and DNA. Whichever comes first will determine how things play out. The response to the coronavirus pandemic has shown that current structures are doomed to fail against a serious adversary: if we want to have a chance against silicon, we need better people. That is why I think any AI "control" strategy not predicated on transhumanism is unserious.

Our neolithic forefathers could not have divined the metallurgical destiny of their descendants, but today, perhaps for the first time in universal history, we can catch a glimpse of the next paradigm before it arrives. If you point your telescope in exactly the right direction and squint really hard, you can just make out the letters: "YOU'RE FUCKED".

Artificial Intelligence
Nothing human makes it out of the near-future.
There are two components to forecasting the emergence of superhuman AI. One is easy to predict: how much computational power we will have. The other is very difficult to predict: how much computational power will be required. Good forecasts are either based on past data, or generalization from theories constructed from past data. Because of their novelty, paradigm shifts are difficult to predict. We're in uncharted waters here. But there are two sources of information we can use: biological intelligence (brains, human or otherwise), and progress in the limited forms of artificial intelligence we have created thus far.

ML progress

GPT-3 forced me to start taking AI concerns seriously. Two features make GPT-3 a scary sign of what's to come: scaling, and meta-learning. Scaling refers to gains in performance from increasing the number of parameters in a model. Here's a chart from the GPT-3 paper:


Meta-learning refers to the ability of a model to learn how to solve novel problems. GPT-3 was trained purely on next-word prediction, but developed a wide array of surprising problem-solving abilities, including translation, programming, arithmetic, literary style transfer, and SAT analogies. Here's another GPT-3 chart:


Put these two together and extrapolate, and it seems like a sufficiently large model trained on a diversity of tasks will eventually be capable of superhuman general reasoning abilities. As gwern puts it:
More concerningly, GPT-3’s scaling curves, unpredicted meta-learning, and success on various anti-AI challenges suggests that in terms of futurology, AI researchers’ forecasts are an emperor sans garments: they have no coherent model of how AI progress happens or why GPT-3 was possible or what specific achievements should cause alarm, where intelligence comes from, and do not learn from any falsified predictions.

GPT-3 is scary because it’s a magnificently obsolete architecture from early 2018 (used mostly for software engineering convenience as the infrastructure has been debugged), which is small & shallow compared to what’s possible, on tiny data (fits on a laptop), sampled in a dumb way⁠, its benchmark performance sabotaged by bad prompts & data encoding problems (especially arithmetic & commonsense reasoning), and yet, the first version already manifests crazy runtime meta-learning—and the scaling curves still are not bending
Still, extrapolating ML performance is problematic because it's inevitably an extrapolation of performance on a particular set of benchmarks. Lukas Finnveden, for example, argues that a model similar to GPT-3 but 100x larger could reach "optimal" performance on the relevant benchmarks. But would optimal performance correspond to an agentic, superhuman, general intelligence? What we're really interested in is surprising performance in hard-to-measure domains, long-term planning, and so on. So while these benchmarks might be suggestive (especially compared to human performance on the same benchmark), and may offer some useful clues in terms of scaling performance, I don't think we can rely too much on them—the error bars are wide in both directions. (...)

How much power will we have?

Compute use has increased by about 10 orders of magnitude in the last 20 years, and that growth has accelerated lately, currently doubling approximately every 3.5 months. A big lesson from the pandemic is that people are bad at reasoning about exponential curves, so let's put it a different way: training GPT-3 cost approximately 0.000005% of world GDP. Go on, count the zeroes. Count the orders of magnitude. Do the math! There is plenty of room for scaling, if it works.

The main constraint is government willingness to fund AI projects. If they take it seriously, we can probably get 6 orders of magnitude just by spending more money. GPT-3 took 3.14e23 FLOPs to train, so if strong AGI can be had for less than 1e30 FLOPs it might happen soon. Realistically any such project would have to start by building fabs to make the chips needed, so even if we started today we're talking 5+ years at the earliest.
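As a back-of-the-envelope check, the figures above (3.14e23 FLOPs for GPT-3, a 3.5-month doubling time in training compute) can be combined in a few lines of Python; this is a sketch of the trend math, not a forecast:

```python
import math

GPT3_FLOPS = 3.14e23      # GPT-3's training compute, cited above
DOUBLING_MONTHS = 3.5     # assumed doubling time of the largest training runs

def years_until(target_flops, base=GPT3_FLOPS, doubling_months=DOUBLING_MONTHS):
    """Years until training-run budgets reach target, if the doubling trend holds."""
    doublings = math.log2(target_flops / base)
    return doublings * doubling_months / 12

# Gap between GPT-3 and a hypothetical 1e30-FLOP run
gap = math.log10(1e30 / GPT3_FLOPS)

print(f"{gap:.1f} orders of magnitude")       # ≈ 6.5
print(f"~{years_until(1e30):.0f} years at the current doubling trend")
```

Roughly 6.5 orders of magnitude separate GPT-3 from the 1e30 threshold, which the trend would cover in about six years; this is consistent with the "6 orders of magnitude just by spending more money" and "5+ years" figures in the text.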

Looking into the near future, I'd predict that by 2040 we could squeeze another 1-2 orders of magnitude out of hardware improvements. Beyond that, growth in available compute would slow down to the level of economic growth plus hardware improvements.

Putting it all together

The best attempt at AGI forecasting I know of is Ajeya Cotra's heroic 4-part 168-page Forecasting TAI with biological anchors. She breaks down the problem into a number of different approaches, then combines the resulting distributions into a single forecast. The resulting distribution is appropriately wide: we're not talking about ±15% but ±15 orders of magnitude. (...)

Metaculus has a couple of questions on AGI, and the answers are quite similar to Cotra's projections. This question is about "human-machine intelligence parity" as judged by three graduate students; the community gives a 54% chance of it happening by 2040. This one is based on the Turing test, the SAT, and a couple of ML benchmarks, and the median prediction is 2038, with an 83% chance of it coming before 2100.(...)

Both extremes should be taken into account: we must prepare for the possibility that AI will arrive very soon, while also tending to our long-term problems in case it takes more than a century.

Human Enhancement
All things change in a dynamic environment. Your effort to remain what you are is what limits you.
The second path to the future involves making better humans. Ignoring the AI control question for a moment, better humans would be incredibly valuable to the rest of us purely for the positive externalities of their intelligence: smart people produce benefits for everyone else in the form of greater innovation, faster growth, and better governance. The main constraint to growth is intelligence, and small differences cause large effects: a standard deviation in national averages is the difference between a cutting-edge technological economy and not having reliable water and power. While capitalism has ruthlessly optimized the productivity of everything around us, the single most important input—human labor—has remained stagnant. Unlocking this potential would create unprecedented levels of growth.

Above all, transhumanism might give us a fighting chance against AI. How likely are they to win that fight? I have no idea, but their odds must be better than ours. The pessimistic scenario is that enhanced humans are still limited by numbers and meat, while artificial intelligences are only limited by energy and efficiency, both of which could potentially scale quickly.

The most important thing to understand about the race between DNA and silicon is that there's a long lag to human enhancement. Imagine the best-case scenario in which we start producing enhanced humans today: how long until they start seriously contributing? 20, 25 years? They would not be competing against the AI of today, but against the AI from 20-25 years in the future. Regardless of the method we choose, if superhuman AGI arrives in 2040, it's already too late. If it arrives in 2050, we have a tiny bit of wiggle room.

Let's take a look at our options.

Normal Breeding with Selection for Intelligence (...)
Gene Editing (...)
Cyborgs (...)
Iterated Embryo Selection (...)
Cloning (...)

A Kind of Solution
I visualise a time when we will be to robots what dogs are to humans, and I’m rooting for the machines.
Let's revisit the AI timelines and compare them to transhumanist timelines.
  • If strong AGI can be had for less than 1e30 FLOPs, it's almost certainly happening before 2040—the race is already over.
  • If strong AGI requires more than 1e40 FLOPs, people alive today probably won't live to see it, and there's ample time for preparation and human enhancement.
  • If it falls within that 1e30-1e40 range (and our forecasts, crude as they are, indicate that's probable) then the race is on.
Even if you think there's only a small probability of this being right, it's worth preparing for. Even if AGI is a fantasy, transhumanism is easily worth it purely on its own merits. And if it helps us avoid extinction at the hand of the machines, all the better!

So how is it actually going to play out? Expecting septuagenarian politicians to anticipate wild technological changes and do something incredibly expensive and unpopular today for a hypothetical benefit that may or may not materialize decades down the line—is simply not realistic. Right now from a government perspective these questions might as well not exist; politicians live in the current paradigm and expect it to continue indefinitely. On the other hand, the Manhattan Project shows us that immediate existential threats have the power to get things moving very quickly. In 1939, Fermi estimated a 10% probability that a nuclear bomb could be built; 6 years later it was being dropped on Japan.

by Alvaro de Menard, Fantastic Anachronism | Read more:
Image: via

[ed. Not a very encouraging prospect. Reminds me of the old Woody Allen quote: “More than any other time in history, mankind faces a crossroads. One path leads to despair and utter hopelessness. The other, to total extinction. Let us pray we have the wisdom to choose correctly.” For more scary predictions, see: Book Review: The Precipice (SSC).]

Terms: GPT-3: an autoregressive language model that uses deep learning to produce human-like text. It is the third-generation language prediction model in the GPT-n series (and the successor to GPT-2) created by OpenAI, a San Francisco-based artificial intelligence research laboratory. AGI: Artificial General Intelligence: hypothetical ability of an intelligent agent to understand or learn any intellectual task that a human being can. It is a primary goal of some artificial intelligence research and a common topic in science fiction and futures studies. AGI can also be referred to as strong AI, full AI, or general intelligent action. FLOPs: floating point operations per second (FLOPS, flops or flop/s) is a measure of computer performance, useful in fields of scientific computations that require floating-point calculations. For such cases it is a more accurate measure than measuring instructions per second. Transhumanism: a philosophical movement, the proponents of which advocate and predict the enhancement of the human condition by developing and making widely available sophisticated technologies able to greatly enhance longevity, mood and cognitive abilities (Wikipedia).

Afghanistan: An End to America's "Forever War"

Joe Biden has decided that 20 years is enough for America’s longest war, and has ordered the remaining troops out no matter what happens between now and September.

Biden’s withdrawal is one area of continuity with his predecessor, although unlike Donald Trump, this administration consulted the Afghans, US allies and its own agencies before announcing the decision. But both presidents were responding to a national weariness of “forever wars”.

To the surprise of no one, the Republican party that acquiesced in Trump’s order to get the troops out by May is now launching attacks on Biden’s “reckless” decision. The political attacks will mount if, as many expect, the current peace initiative fails and the Taliban steps up their offensive.

In Afghanistan, any US president is damned if he does and damned if he doesn’t. Biden has plainly decided that, in that case, “don’t” is the better option.

In the Obama administration, Biden was a consistent voice of scepticism over the utility of military force in foreign policy, sometimes in opposition to advocates of humanitarian intervention.

He bluntly told a television interviewer on the campaign trail that he would feel “zero responsibility” if the status of Afghan women and other human rights suffered as a consequence of a US withdrawal.

“Are you telling me that we should go into China, go to war with China because of what they’re doing to the Uyghurs?” he asked his CBS interviewer.

Safeguarding Afghan women and civil society has never been an official aim of the vestigial US military presence, but in the absence of a clearly defined goal, it became part of the de facto rationale.

“There are things that American officials have said over time to encourage that kind of thinking,” said Laurel Miller, who served as US special representative for Afghanistan and Pakistan, and now runs the Asia programme of the International Crisis Group.

“I’ll admit to – when I was in government – not feeling comfortable with some of those statements of enduring commitment, because I didn’t think it was believable.”

In making this decision, Biden has made clear he is setting aside Colin Powell’s famous “Pottery Barn rule”: if you break it, you own it. The quote comes from 2002 when the then secretary of state cited the fictional rule (which is not the policy of that furniture store) to warn George W Bush of the implications of invading Iraq. In Afghanistan, the US has part-owned the store for two decades now, and in reality, people and their livelihoods are still getting smashed.

by Julian Borger, The Guardian |  Read more:
Image: Kim Jae-Hwan/AFP/Getty Images
[ed. Finally. The problem being there was never a Plan B to start with. Just making stuff up as we went along (a textbook example of mission creep). See also: What Did the U.S. Get for $2 Trillion in Afghanistan? (NYT)]

The Daily Grind


Quite how long it takes a woman to grind for a family, apart from the time husking and shucking the maize, collecting the cooking water, and shaping and cooking the tortillas, depends on her skill and strength, the age and number of family members, the type of masa, and the quality of the metate. My estimate is that it takes about five hours a day to make enough masa for a family of five. This may seem incredible but it is in line with other estimates for contemporary Mexico and Guatemala collected by Michael Searcy, with Arnold Bauer’s estimate for Mexico, and with experimental estimates for Europe collected in David Peacock’s The Stone of Life (2013), 127. Since five hours is about as much as anyone can grind, the labor of one in five adults has to be devoted to making the staple bread.
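The arithmetic behind that "one in five" figure is easy to check, taking the passage's numbers as given (five grinding hours feed a family of five, and five hours is roughly one person's daily grinding capacity):

```python
HOURS_PER_FAMILY = 5.0    # grinding hours needed per day for a family of five
FAMILY_SIZE = 5
MAX_DAILY_GRINDING = 5.0  # about as much as one person can grind in a day

# Hours of grinding needed per person fed
hours_per_person = HOURS_PER_FAMILY / FAMILY_SIZE        # 1 hour each

# People one full-time grinder can feed per day
people_fed_per_grinder = MAX_DAILY_GRINDING / hours_per_person

print(people_fed_per_grinder)  # 5.0 — one full-time grinder per five people
```

One hour of grinding per person fed, against a five-hour daily maximum, is what pins the ratio at one grinder for every five people.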

The Daily Grind (Works in Progress)
Image: Magnus Ingvar Agustsson

How Trader Joe’s $2 "Two-Buck Chuck" Became a Best-Seller

Walk into almost any Trader Joe’s store and you’ll spot a behemoth display of Charles Shaw wine — or, as it’s more affectionately known, “Two Buck Chuck.”

Priced at a mere $1.99 to $3.79 per bottle, this magical ether is cheaper than most bottled water. It’s been knighted as the “darling of the discount wine world” by critics, and boasts a cult following among price-minded consumers.

For Trader Joe’s, the wine is also a gold mine.

The grocery chain has sold 1B+ bottles of Two Buck Chuck since debuting the beverage in 2002. Today, some locations sell as many as 6k bottles/day — or ~16% of the average store’s daily sales.

How is a supposedly decent wine sold at such a low price point? Where does it come from? And how did it rise to prominence?

This is the tale of one wine brand, two vintners, and the unlikely democratization of a historically snobby industry. (...)

The box wine baron

Fred Franzia did not share Shaw’s air d’élégance.

He was unrefined and heavyset, with a body shape the New Yorker likened to a “gourmet marshmallow.” Reclusive and gruff, he shied away from public appearances. He referred to winemakers as “bozos” and didn’t care for France.

Nonetheless, Franzia came from a long lineage of winemakers: His great-grandfather, Giuseppe, had immigrated to California’s Central Valley in 1893 and set up Franzia Brothers Winery (later sold to Coca-Cola); his uncle, Ernest Gallo, had built the largest wine exporter in California.

In 1973, Franzia launched his own wine company, Bronco Wine Co.

In a rickety wood-paneled trailer held together with duct tape, he set out to produce extremely cheap, high-quality “super-value” wines — wines that rejected the pretentiousness of Napa Valley.

Initially, Bronco operated as a wholesaler, buying bulk wine and selling it to larger wineries at a profit.

But soon, Franzia saw an opportunity to produce his own cheap wines — wines, as he later put it, that “yuppies would feel comfortable drinking.”

Through a legal loophole, he could say his wines were “Cellared and Bottled in Napa” if the brand was founded prior to 1986. So, he developed a strategy of buying out distressed wineries with distinguished-sounding names — Napa Ridge, Napa Creek, Domaine Napa — and using them to sell his stock of less-desirable Central Valley wines.

On a summer day in 1995, a few years after Charles F. Shaw Winery went bust, Franzia purchased the winery’s brand, label, and name for a mere $27k.

“We buy wineries from guys from Stanford who go bankrupt,” he later boasted. “Some real dumb-asses from there.”

Unbeknownst to the real Charles Shaw, Franzia was about to transform the once-fancy wine brand into an impossibly cheap everyman’s juice.

And in the process, he’d change the wine industry forever.

by Zachary Crockett, The Hustle |  Read more:
Image: Stephen Osman/Los Angeles Times via Getty Images

Tuesday, April 13, 2021

Charlie Musselwhite


[ed. See also: Ben Harper, Charlie Musselwhite - I'm In I'm Out And I'm Gone.]

What Is Kafkaesque? - The 'Philosophy' of Franz Kafka

Safety is Fatal

Humans need closeness and belonging but any society that closes its gates is doomed to atrophy. How do we stay open?

Many of us will recall Petri dishes from our first biology class – those shallow glass vessels containing a nutrient gel into which a microbe sample is injected. In this sea of nutrients, the cells grow and multiply, allowing the colony to flourish, its cells dividing again and again. But just as interesting is how these cells die. Cell death in a colony occurs in two ways, essentially. One is through an active process of programmed elimination; in this so-called ‘apoptotic’ death, cells die across the colony, ‘sacrificing’ themselves in an apparent attempt to keep the colony going. Though the mechanisms underlying apoptotic death are not well understood, it’s clear that some cells benefit from the local nutrient deposits of dying cells in their midst, while others seek nutrition at the colony’s edges. The other kind of colony cell death is the result of nutrient depletion – a death induced by the impact of decreased resources on the structure of the waning colony.

Both kinds of cell death have social parallels in the human world, but the second type is less often studied, because any colony’s focus is on sustainable development; and because a colony is disarmed in a crisis by suddenly having to focus on hoarding resources. At such times, the cells in a colony huddle together at the centre to preserve energy (they even develop protective spores to conserve heat). While individual cells at the centre slow down, become less mobile and eventually die – not from any outside threat, but from their own dynamic decline – life at the edges of such colonies remains, by contrast, dynamic. Are such peripheral cells seeking nourishment, or perhaps, in desperation, an alternative means to live?

But how far can we really push this metaphor: are human societies the same? As they age under confinement, do they become less resilient? Do they slow down as resources dwindle, and develop their own kinds of protective ‘spores’? And do these patterns of dying occur because we’ve built our social networks – like cells growing together with sufficient nutrients – on the naive notion that resources are guaranteed and infinite? Finally, do human colonies on the wane also become increasingly less capable of differentiation? We know that, when human societies feel threatened, they protect themselves: they zero in on short-term gains, even at the cost of their long-term futures. And they scale up their ‘inclusion criteria’. They value sameness over difference; stasis over change; and they privilege selfish advantage over civic sacrifice.

Viewed this way, the comparison seems compelling. In crisis, the colony introverts; collapsing inwards as inequalities escalate and there’s not enough to go around. In a crisis, as we’ve seen during the COVID-19 pandemic, people define ‘culture’ more aggressively, looking for alliances in the very places where they can invest their threatened social trust; for the centre is threatened and perhaps ‘cannot hold’.

Human cultures, like cell cultures, are not steady states. They can have split purposes as their expanding and contracting concepts of insiders and outsiders shift, depending on levels of trust, and on the relationship between available resources and how many people need them. Trust, in other words, is not only related to moral engagement, or the health of a moral economy. It’s also dependent on the dynamics of sharing, and the relationship of sharing practices to group size – this last being a subject that fascinates anthropologists.

In recent years, there’s been growing attention to what drives group size – and what the implications are for how we build alliances, how we see ourselves and others, and who ‘belongs’ and who doesn’t. Of course, with the advent of social media, our understanding of what a group is has fundamentally changed.

The British anthropologist Robin Dunbar popularised the question of group size in his book How Many Friends Does One Person Need? (2010). In that study, he took on the challenge of relating the question of group size to our understanding of social relationships. His interest was based on his early studies of group behaviour in animal primates, and his comparison of group sizes among tribal clans. Dunbar realised that, in groups of more than 150 people, clans tend to split. Averaging sizes of some 20 clan groups, he arrived at 153 members as their generalised limit.

However, as we all know, ‘sympathy groups’ (those built on meaningful relationships and emotional connections) are much smaller. Studies of grieving, for example, show that our number of deep relationships (as measured by extended grieving following the death of a sympathy group member) reach their upward limit at around 15 people, though others see that number as even smaller at 10, while others, still, focus on close support groups that average around five people.

For Dunbar, 150 is the optimal size of a personal network (even if Facebook thinks we have more like 500 ‘friends’), while management specialists think that this number represents the higher limits of cooperation. In tribal contexts, where agrarian or hunting skills might be distributed across a small population, the limiting number is taken to indicate the point after which hierarchy and specialisation emerge. Indeed, military units, small egalitarian companies and innovative think-tanks seem to top out somewhere between 150 and 200 people, depending on the strength of shared conventional understandings.

Though it’s tempting to think that 150 represents both the limits of what our brains can accommodate in assuring common purpose, and the place where complexity emerges, the truth is different; for the actual size of a group successfully working together is, it turns out, less important than our being aware of what those around us are doing. In other words, 150 might be an artefact of social agreement and trust, rather than a biologically determined structural management goal, as Dunbar and so many others think. We know this because it’s the limit after which hierarchy develops in already well-ordered contexts. But we also know this because of the way that group size shrinks radically in the absence of social trust. When people aren’t confident about what proximate others are mutually engaged in, the relevant question quickly turns from numbers of people in a functioning network to numbers of potential relationships in a group. So, while 153 people might constitute a maximum ideal clan size, based on brain capacity, 153 relationships exist in a much smaller group – in fact, 153 relationships exist exactly among only 18 people.
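The "153 relationships among 18 people" claim is just the handshake formula, n(n−1)/2, applied to a group of 18 — a quick sketch:

```python
def pairwise_relationships(n: int) -> int:
    """Number of distinct two-person relationships in a group of n people."""
    return n * (n - 1) // 2

print(pairwise_relationships(18))   # 153 — Dunbar's number, counted as relationships
print(pairwise_relationships(153))  # 11628 — pairs inside a full 153-person clan
```

The contrast is the point: tracking 153 people as individuals and tracking 153 pairwise relationships are tasks of very different sizes, which is why the same number can mark both a clan ceiling and an 18-person working group.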

Dunbar’s number should actually be 18, since, under stress, the quality of your relationships matters much more than the number of people in your network. The real question is not how many friends a person can have, but how many people with unknown ideas can be put together and manage themselves in creating a common purpose, bolstered by social rules or cultures of practice (such as the need to live or work together). Once considered this way, anyone can understand why certain small elite groups devoted to creative thinking are sized so similarly.

Take small North American colleges. Increasingly, they vie with big-name universities such as Harvard and Stanford not only because they’re considered safer environments by worried parents, but because their smaller size facilitates growing trust among strangers, making for better educational experiences. Their smaller size matters. Plus, it’s no accident that the best of these colleges on average have about 150 teaching staff (Dunbar’s number) and that (as any teacher will know) a seminar in which you expect everyone to talk tops out at around 18 people.

But what do we learn from these facts? Well, we can learn quite a bit. While charismatic speakers can wow a crowd, even the most gifted seminar leader will tell you that his or her ability to involve everyone starts to come undone as you approach 20 people. And if any of those people require special attention (or can’t tolerate ideological uncertainty) that number will quickly shrink.

In the end, therefore, what matters much more than group size is social integration and social trust. As for Facebook’s or Dunbar’s question of how many ‘friends’ we can manage, the real question ought to be: how healthy is the Petri dish? To determine this, we need to assess not how strong are the dish’s bastions (an indicator of what it fears) but its ability, as with the small North American college, to engage productively and creatively in extroverted risk. And that’s a question that some other cultures have embraced much better than even North American colleges.

by David Napier, Aeon |  Read more:
Image: Fiddlesticks Country Club, a gated community in Fort Meyers, Florida. Photo by Michael Siluk/UIG/Getty

Novel HIV Vaccine Approach Shows Promise in “Landmark” First-in-Human Trial

A novel vaccine approach for the prevention of HIV has shown promise in Phase I trials, reported IAVI and Scripps Research. According to the organisations, the vaccine successfully stimulated the production of the rare immune cells needed to generate antibodies against HIV in 97 percent of participants.

The vaccine is being developed to act as an immune primer, to trigger the activation of naïve B cells via a process called germline-targeting, as the first stage in a multi-step vaccine regimen to elicit the production of many different types of broadly neutralizing antibodies (bnAbs). Stimulating the production of bnAbs has been pursued as a holy grail in HIV research for decades. It is hoped that these specialised blood proteins could attach to HIV surface proteins called spikes, which allow the virus to enter human cells, and disable them via a difficult-to-access region that does not vary much from strain to strain.

“We and others postulated many years ago that in order to induce bnAbs, you must start the process by triggering the right B cells – cells that have special properties giving them potential to develop into bnAb-secreting cells,” explained Dr William Schief, a professor and immunologist at Scripps Research and executive director of vaccine design at IAVI’s Neutralizing Antibody Center, whose laboratory developed the vaccine. “In this trial, the targeted cells were only about one in a million of all naïve B cells. To get the right antibody response, we first need to prime the right B cells. The data from this trial affirms the ability of the vaccine immunogen to do this.” (...)

One of the lead investigators on the trial, Dr Julie McElrath, senior vice president and director of Fred Hutch’s Vaccine and Infectious Disease Division, said the trial was “a landmark study in the HIV vaccine field,” adding that they had demonstrated “success in the first step of a pathway to induce broad neutralising antibodies against HIV-1.”

HIV affects more than 38 million people globally and is among the most difficult viruses to target with a vaccine, in large part because of its unusually fast mutation rate which allows it to constantly evolve and evade the immune system.

Dr Schief commented: “This study demonstrates proof of principle for a new vaccine concept for HIV, a concept that could be applied to other pathogens as well. With our many collaborators on the study team, we showed that vaccines can be designed to stimulate rare immune cells with specific properties and this targeted stimulation can be very efficient in humans. We believe this approach will be key to making an HIV vaccine and possibly important for making vaccines against other pathogens.”

The organisations said this study sets the stage for additional clinical trials that will seek to refine and extend the approach, with the long-term goal of creating a safe and effective HIV vaccine. As a next step, the collaborators are partnering with the biotechnology company Moderna to develop and test an mRNA-based vaccine that harnesses the approach to produce the same beneficial immune cells. According to the team, using mRNA technology could significantly accelerate the pace of HIV vaccine development, as it did with vaccines for COVID-19.

by Hannah Balfour, European Pharmaceutical Review | Read more:
Image: uncredited
[ed. The holy grail.]

Awful but Lawful

The courtroom is a stage. The rules of who says what and when are carefully determined, sometimes after prolonged legal contention. The truth that emerges is viewed as legitimate because it is the product of process. The prosecution and the defense each present a version of the truth, and the jury is set the task of selecting one. Punishment is allotted, or the defendant acquitted, based on which version is selected.

One assumption behind the courtroom theater is that both versions of the truth, the one presented by the prosecution and the one suggested by the defense, are equal, in that either one could be selected by the jury as the winning version of what happened. What is left unanswered in this system of procedural justice is whether both sides are equally worthy if one version of these truths depends on its connection to racial prejudices that a jury of ordinary people may have.

Derek Chauvin’s trial for the killing of George Floyd is an example of this. The lawyers for former Minneapolis police officer Chauvin are basing their defense on racist notions of Black men as angry and uncontrollable, and Black communities as inherently threatening and menacing. Beliefs that would be considered overtly racist in other contexts are thus drawn into the courtroom without any scrutiny of their foundations. In the opening argument, the defense counsel specifically notes that “Mr. Chauvin stands five foot nine, 140 pounds. George Floyd is six three and 223 pounds.” Not only that, but in these first two weeks of the trial, the defense has resorted to another stereotype—that of the drug-addicted Black man. In the defense’s opening statement, Chauvin’s lawyer described the initial call to police. Floyd, it was reported, “was under the influence of something. . . He’s not acting right. He’s six to six-and-a-half feet tall.”

The racist seed of the uncontrollable black man, planted early, has been nourished ever since. Defense counsel Eric Nelson insisted that video preceding Floyd’s death shows “the police squad car rocking back and forth” to highlight just how strong and wild Floyd was. So significant was the danger posed by the large and out-of-control Floyd, the defense wants the jury (and everyone else watching) to believe, that it justified the use of “maximum restraint technique,” what used to be called the “hobble or the hog tie.” It is in the justified and appropriate process of hog tying George Floyd that the defendant Derek Chauvin used “[one] knee to pin Mr. Floyd’s left shoulder blade and back to the ground and his right knee to pin Mr. Floyd’s left arm to the ground.”

As these details are tossed about in the courtroom, the damage to the larger discourse about race is already done. The Black man has been framed as an animal, a “hog” that must be tied up so that it won’t thrash and flail at having been apprehended. In the choreographed environment of the courtroom, where the focus is upon discerning a truth in the specific case at hand, there is no room to draw connections to America’s larger racist history. (...)

Having stated that a Black man can be hog tied and held to the ground, the defense also set out to prove that the Black crowd at Cup Foods was belligerent, hostile, and menacing. In his opening statement, Nelson told the jury that “as the crowd grew in size, similarly so too did their anger.” In the defense narrative, there is no effort to individuate the members of the crowd, to note that the “crowd” included a seventeen-year-old and her nine-year-old cousin out to get a snack at Cup Foods, an off-duty EMT who begged police to let her help Floyd, an MMA fighter who had trained alongside Minneapolis police officers, and an old man who could not help but cry when he took the stand. The defense deployment of the “angry Black man” became literal in the cross-examination of Donald Wynn Williams II, the mixed martial artist. “It’s fair to say you got angrier and angrier?” Nelson needled Williams, until he replied, “I grew professional and professional. I stayed in my body. You can’t paint me out to be angry.” (...)

There is no doubt that the murder of George Floyd has provoked a racial reckoning in the United States. In the anti-racist work and conversations that have taken place since, efforts have been made to expose surreptitious and systemic racism. But if the Derek Chauvin trial is any illustration, this work has not yet reached the courtroom. Regardless of the outcome of the trial itself and whether there is any justice for George Floyd, the direct appeals to racist prejudice, featuring the uncontrollable and angry Black man, the rough and lawless world of the inner city, and the mostly Black crowd as menacing, have all been normalized by the defense as plausible explanations of what happened. Having these ideas form such an integral and overt centerpiece of the trial in a courtroom, where they are not critically examined, suggests to those watching that such a narrative is permissible and possibly true.

by Rafia Zakaria, The Baffler | Read more:
Image: Defense attorney Eric Nelson delivers his opening statement at the Derek Chauvin trial. | AJC

Monday, April 12, 2021

Florida GOP Introduces Ballotless Voting In Disenfranchised Communities


TALLAHASSEE, FL—In an effort to streamline the state’s electoral process, Florida Republicans introduced a new bill to the legislature Thursday that would establish ballotless voting in disenfranchised communities. “We’ve eliminated the complex and insecure process of casting a ballot so that voters from underserved communities don’t have to worry about going to the polls or mailing anything in,” said co-sponsor Rep. Chris Sprowls of the popular proposal, which had already garnered unanimous support among Republicans in the House and Senate. “Come voting day, voters will be able to walk right up to the doors of their polling place, then turn around. No lines, no worry. We’ve listened to your concerns, and are confident that ballotless voting will address them.” At press time, Sprowls added that the bill would also help fight voter fraud by eliminating the likelihood of votes being erroneously counted.

by The Onion | Read more:
Image: uncredited
[ed. See also: Georgia Lawmakers Warn Stricter Gun Regulation Could Cause Mass Shooters To Move To Other States; Man Opposes Taxing Rich Because He Knows One Day He Could Find $20 Bill On Ground; and, Report: Today Not One You Will Remember (Onion). And, more seriously: Republicans Are Making 4 Key Mistakes (The Atlantic): 

"Arizona Republicans propose to reduce the number of days for early voting. They want to purge voter rolls of people who missed the previous election. They want to cut off mail-in balloting five days before Election Day. And they want to require that affidavits of identity accompany any ballot that is mailed in.

Texas Republicans are pushing a bill to limit early voting, prohibit drive-through voting, limit the number of ballot drop-off locations, and restrict local officials’ ability to publicize voting by mail."]

Against Timarchus

When Aeschines, one of the ten Attic orators and member of the peace embassy dispatched to Philip of Macedon, was accused by Demosthenes and Timarchus of intriguing against Athens on behalf of the same, his defense before the Assembly was swift and straightforward: “Timarchus can’t accuse anyone of betraying Athens, because I heard he’s a fucking skank.”

It worked. Timarchus was stripped of citizenship and vanished from public life, while Aeschines went on to commission the Fourth Sacred War under Philip’s aegis, found a school of rhetoric, and eventually retire to the winemaking island of Samos, where he died well into his seventies. Being a dizzy little bitch who hates fun pays off sometimes.
 [Full text of Against Timarchus here.]

Gentlemen and themtlemen! You know I never have a bad word to say about anyone. It’s probably the second-best-known thing about me, my quiet and peaceful approach to conflict. If I were to guess what the first-best-known thing about me is, I honestly couldn’t even begin to guess, because I just don’t think about human relationships in that way, you know? But this isn’t about me, which is such a relief, because I’m so uncomfortable when things are about me, so I’m really glad to be able to say that this has nothing to do with me and everything to do with Athens. It’s the city I’m here to talk about, not me. Honestly, if I could talk about the city without being here at all, like if there was some way I could talk without talking, or being myself, or being in any way perceived by all of you, I would do that. But as I already said, this isn’t about me, so even though it makes me really uncomfortable to address you all publicly like this, I don’t even care, my discomfort at being the center of attention is just not as important as the dignity and safety of Athens, which always comes first, at least for me, and I hope for you guys, haha!

God!! Athens! Athens, you know? It’s just like — Athens! Okay? Like, what does that even mean, but also, it kind of means everything, right? It says it all. Athens!! I just think, for me personally, that Athens is so important, especially for all of us as Athenians, which I truly believe that we are, every one of us in our own very special way, that it would be such a shame if Athens ever came to harm because someone among us was representing themselves as a friend to Athens when in actual fact, like in honest-to-God real life, in a very tangible way, they were not a friend to Athens, and were actually making her look bad to other people, Macedonians for example.

And I’m just going to come out and say it, which is that this so-called “friend,” this person who has actually really done a lot of damage to Athens, is, I’m sorry to say, but it’s Timarchus. I don’t even care that he’s trying to hurt me, because I’m just whatever, but insofar as I am a representative of Athens, a legitimate member of her assemblies and jury pools, a member of the greater Athenian body, that’s the issue here, not me, personally, Aeschines. You know me, you know I absolutely do not hold grudges or even care what happens to this bag of meat that I call my “body.” But I do care, like really care, about my friends, and I honestly do consider Athens a friend. I really do. And when someone hurts her? Okay, then it’s like, let’s go. So let’s go.

I know this is probably not the first time some of you have heard about Timarchus. If I were to guess, I’d say you probably have been hearing a lot of really troubling shit about him over the years, because I’ve been hearing it too, even though I never passed it along or said anything about him myself. But like, you guys, you guys, we live in a democracy, right? Like we live in a society, yes, but more importantly a democratic society, so we have laws and rules and so on, and we do ask of our citizens a certain commitment to excellence that not just anybody can abide by. And I don’t think we should have to apologize for having high standards. Do you? Okay, good, I’m really glad to hear that. Honestly, I’m really relieved to hear that, because I was worried it was going to be just me, but I’m so glad that we can all agree that if someone violates those laws, or doesn’t live up to our admittedly very high standards (but that’s why Athens is so great, you guys, and just to throw this in for detail, I don’t want to get too far off topic but I do think it’s important, I also think this is why it’s actually completely fair to consider Macedonia like fully Greek, like absolutely there’s a shared commitment to Athenian values, which is why I sometimes call Macedonia Athens II, just for short), but our very high standards, then they should just like….they should just go! Away, and have to live somewhere else, and I don’t even care where, like be well, okay, be safe and healthy, good fortune go with you, I absolutely wish you the best wherever your journey takes you, but you just cannot stay here, because everyone here is already pulling their weight and frankly doing more than their fair share to begin with.

You are all probably aware that prostitution — sorry, “sex work,” I mean I want to be as nice about this as I can, and it’s perfectly fine to do sex work, that’s a totally legitimate option if that’s all you want out of life, I’ve known some really great prostitutes who I would absolutely invite to a dinner party if the vibe was right, but it’s not like being a judge, or a general, I think we can all admit that — is not a compatible side gig for an Athenian citizen, right? We actually have a full, actually-written-down law about that. If you’re, like….I don’t know, a really friendly Thracian of uncertain parentage and you want to be like a fun courtesan, you should go for it and really with my blessing, but it’s just not appropriate for a freeborn man of Athens tasked with safeguarding our citizenry. Right? You have to pick one. You can be a sex worker all you want, God bless, but then you have to stick with that, and you can’t try to become one of the nine archons, or apply to the priesthood, or hold office.

So don’t you think it’s kind of fucked up that Timarchus had the gall to address this assembly as a citizen of Athens even though he absolutely fucked for cash when he was in medical school? Like don’t you think the fact that he lied to us all about it is also a problem? We probably could have made an exception for him if he just asked. But he didn’t ask. It’s honestly not even the sex work, for me personally that causes the problem, but that he lied about it, because it like begs the question — ahaha sorry, that was just a little joke for some of you rhetoricians — it like raises the question of what else has he lied about? Also I know a lot of you were really uncomfortable last week when he took his cloak off during his address, like we were all just in the gymnasium or something, like it was no big deal, and we shouldn’t have to feel uncomfortable when we’re just trying to assemble.

To be clear, I’m not trying to shame Timarchus by bringing all of this old shit up, even though a lot of it isn’t even that old. Partly because I’m honestly not even sure he can feel shame? Like I just don’t think he registers emotions on that scale, at all. So it’s not even worth it. But also I don’t want to make a big deal out of this. I just want us all to agree to abide by the rules we already agreed on!

by Daniel Lavery, Shatner Chatner |  Read more:
Image: Aeschines, copy of Herculaneum original in the National Museum, Naples, early to mid 1800s, marble via
[ed. If you're not familiar with Daniel's (formerly Mallory's) charming work, see here and here. See also: Why We’re Freaking Out About Substack (NYT).]

Masters 2021 Champion: Hideki Matsuyama


Masters 2021: Hideki Matsuyama, quiet star, makes a loud statement for his nation and for himself (Golf Digest)

Saturday, April 10, 2021

The Universal Warrior


The oldest way of war was what Native North Americans called – evocatively – the ‘cutting off’ way of war (a phrase I am borrowing from W. Lee, “The Military Revolution of Native North America” in Empires and Indigines, ed. W. Lee (2011)), but which was common among non-state peoples everywhere in the world for the vast stretch of human history (and one may easily argue much of modern insurgency and terrorism is merely this same toolkit, updated with modern weapons). The goal of such warfare was not to subjugate a population but to drive them off, forcing them to vacate resource-rich land which could then be exploited by your group. To do this, you wanted to inflict maximum damage (casualties inflicted, animals rustled, goods stolen, people captured) at minimum risk, until the lopsided balance of pain you inflicted forced the enemy to simply move away from you to get out of your operational range.

The main tool of this form of warfare (detailed more extensively in A. Gat, War in Human Civilization (2006) and L. Keeley, War Before Civilization (1996)) was the raid. Rather than announcing your movements, a war party would attempt to advance into enemy territory in secret, hoping (in the best case) to catch an enemy village or camp unawares (typically by night) so that the population could be killed or captured (mostly killed; these are mostly non-specialized societies with limited ability to incorporate large numbers of subjugated captives) safely. Then you quickly get out of enemy territory before villages or camps allied to your target can retaliate. If you detected an incoming raid, you might rally up your allied villages or camps and ambush the ambusher in an equally lopsided engagement.

Only rarely in this did a battle result – typically when both the surprise of the raid and the surprise of the counter-raid ambush failed. At that point, with the chance for surprise utterly lost, both sides might line up and exchange missile fire (arrows, javelins) at fairly long range. Casualties in these battles were generally very low – instead the battle served both as a display of valor and a signal of resolve by both sides to continue the conflict. That isn’t to say these wars were bloodless – indeed the overall level of military mortality was much higher than in ‘pitched battle’ cultures, but the killing was done almost entirely in the ambush and the raid.

We may call this the first system of war. It is the oldest, but as noted above, never entirely goes away. We tend to call this style ‘asymmetric’ or ‘unconventional’ war, but it is the most conventional war – it was the first convention, after all. It is also sometimes denigrated as primitive, but should not be judged so quickly – first system armies have managed to frustrate far stronger opponents when terrain and politics were favorable.

What changed? Very briefly, agriculture, cities and the state. Agriculture created a stationary population that both wouldn’t move but which could also be dominated, subjugated and have their production extracted from them. Their wealth was clustered in towns which could be fortified with walls that would resist any quick raid, but control of that fortified town center (and its administrative apparatus of taxation) meant control of the countryside and its resources. Taking such a town meant a siege – delivering a large body of troops and keeping them there long enough to either breach the walls or starve out the town into surrender. This created a war where territorial control was defined by the taking of fixed points.

In such war, the goal was to deliver the siege. But delivery of the siege meant a large army which might now be confronted in the field (for it was unlikely to move by stealth, being that it has to be large enough to take the town). And so to prohibit the siege from being delivered, defenders might march out and meet the attackers in the field for that pitched battle. In certain periods, siegecraft or army size had so outpaced fortress design that everyone rather understood that after the outcome of the pitched battle, the siege would be a foregone conclusion – it is that unusual state of affairs which gives us the ‘decisive battle’ where a war might potentially be ended in a stroke (though they rarely were).

We may term this the second system of war. It is the system that most modern industrial and post-industrial cultures are focused on. Our cultural products are filled with such pitched battles, placed in every sort of era of our past or speculative future. It is how we imagine war. Except that it isn’t the sort of war we wage, is it?

Because in the early 1900s, the industrial revolution resulted in armies possessing both amounts of resources and levels of industrial firepower which precluded open pitched battles. All of those staples of our cultural fiction of battles, developed from the second system – surveying the enemy army drawn up in battle array, the tense wait, then the furious charge, coming to grips with the enemy in masses close up – none of that could survive modern machine guns and artillery.

What replaced it we may term the third system of war, though longer readers may know it by Biddle’s term, the Modern System (more here). Armies in this modern system still aim to control territory, as with second-system war, but they no longer square off in open fields. Rather, relying on cover and concealment to mitigate the overwhelming firepower of a modern battlefield covered with machine guns, artillery and airpower, they aim to disorient and overwhelm the decision-making capabilities of their enemy with lightning mechanized offensives.

What happens when two current-day modern systems meet? We don’t really know, though there is a lot of speculation. One of the things which made the conflict between Azerbaijan and Armenia so closely watched last year (in 2020, for those reading this later) was that it provided a chance to see two sides both with (sometimes incomplete) access to the full modern kit of war – not only tanks, jets and artillery, but cyber warfare, drones and so on. The results remain to be much discussed and analyzed, but it may well be that a fourth system of war is in the offing, defined by the way that drone-based airpower combined with electronic surveillance and cyber-warfare redefined the battle-space and allowed Azerbaijan in particular to project firepower deep into areas where Armenian forces considered themselves safe.

But I shouldn’t get too off track. The point of all of this is that these systems of war are not merely different, they are so radically different that armies created in one system often fundamentally fail to understand the others (thus the tendency for second and third system armies to treat first system war as some strange new innovation in war, when it is in fact the oldest system by far). As we’re going to see, the aims, experiences and outcomes of these systems are often very different. They demand and inculcate different values and condition societies differently as well.

via: A Collection of Unmitigated Pedantry
Image: Via Wikipedia, a Mesolithic painting of a battle from Morella la Vella, Spain (c. 10,000BP), showing what looks to be an ambush, a normal occurrence in first system war.

Monty Python

Amazon Union Vote Fails

Earlier today the National Labor Relations Board announced the results of the vote on whether workers at the Amazon warehouse in Bessemer, Ala., would join a union. The vote was 738 in favor to 1,798 against. It’s bad news, but it doesn’t mean workers in future Amazon campaigns won’t or can’t win. They can. The results were not surprising, however, for reasons that have more to do with the approach used in the campaign itself than any other factor.

The stories of horrific working conditions at Amazon are well-known. Long before the campaign at Bessemer, anyone paying even scant attention would be aware that workers toil at such a grueling pace that they resort to urinating in bottles so as not to get disciplined for taking too much time to use the facilities, which the company calls “time off task.” Christian Smalls was fired a year ago for speaking publicly about people not getting personal protective equipment in his Amazon facility, in bright-blue state New York. Jennifer Bates, the Amazon employee from the Bessemer warehouse, delivered testimony to Congress that would make your stomach turn. Workers at Amazon desperately need to unionize, in Alabama, Germany—and any other place where the high-tech, futuristic employer with medieval attitudes about employees sets up a job site of any kind. With conditions so bad, what explains the defeat in Bessemer?

Three factors weigh heavily in any unionization election: the outrageously vicious behavior of employers—some of it illegal, most fully legal—including harassing and intimidating workers, and telling bold lies (which, outside of countries with openly repressive governments, is unique to the United States); the strategies and tactics used in the campaign by the organizers; and the broader social-political context in which the union election is being held.

Blowout in Bessemer: A Postmortem on the Amazon Campaign (The Nation)

[ed. What it was all about:]

Amazon is the second-largest private employer in the US, with 800,000 employees, and it has fiercely resisted attempts at worker organizing. The only other unionization effort to make it to a vote was in 2014, with a small group of repair technicians in Delaware, and it failed after an aggressive anti-union campaign. More recently, the NLRB found that Amazon threatened and fired workers who protested the company’s handling of COVID-19. While the Bessemer effort would only organize a single warehouse, it would show that it can be done. Already, employees at other Amazon facilities have expressed interest in following in BHM1’s footsteps.

“There’s a basic principle of organizing work that success breeds success, and that organizing often happens in self-reinforcing cycles of victory,” said Benjamin Sachs, a professor at Harvard Law School. “Organizing requires workers taking a risk, and the workers are more likely to take a risk when they see that the risk is going to pay off.”

Such a chain reaction could do more than change the conditions that hundreds of thousands of Amazon employees work under. Because of its size and the sprawling geographic scope of its logistics network, the quality and pay of Amazon’s jobs have a powerful effect on the quality and pay of other jobs. Amazon itself has been touting this effect in its ads lobbying for a $15 minimum wage, and indeed, a recent study found that when Amazon raised its starting wage to $15 an hour in 2018, wages at nearby employers also rose.

But when Amazon jobs are compared to similar types of work, they come off much worse. Logistics jobs were historically a path to the middle class, and unionized warehouses typically pay double what Amazon does. When Amazon opens a warehouse, a Bloomberg analysis found, wages at other nearby warehouses often drop. Amazon’s methods for worker tracking and enforcing productivity — aspects of the job that prompted BHM1 to unionize — have also spread across the logistics industry and other sectors as companies attempt to compete with Amazon.

Sachs calls Amazon a bellwether employer, for its outsize role in shaping the labor market and defining the future of work, similar to the role the auto industry played in the early 20th century. “The unionization of that industry, which had a lot to do with labor law reform, was a defining moment for the labor market for decades,” he said.

Why the Amazon union vote is bigger than Amazon (The Verge)
Image: Patrick T. Fallon/AFP via Getty

Vlogging and Fishing in the Cascade Mountains


There it was again – an all-too-familiar splash in the shallow, rocky portion of the lake, maybe 200 feet along the shore from where I was standing. I had heard it twice already, and seen nothing but circular ripples on the glasslike surface of the water. But this time, I was watching. Just as I’d identified the torpedo-shaped, thrashing object launching from the surface of the water as a massive trout, a second one leapt into the air and snatched at an unfortunate insect.

I was backpacking and fishing deep in the Cascade mountains of Washington state, in search of alpine trout to catch and eat, and to film another adventure for my YouTube channel, NW Fishing Secrets. I’d started my fishing show as a hobby in April 2019, filming instructional videos on how to catch local fish, but as the audience grew rapidly, I realised viewers wanted more than that.

They wanted to feel what it was like to actually be in the wilds. NWFS had become more than a tutorial series – it was now a fishing adventure show, bringing the outdoors into people’s living rooms, allowing viewers who might not be able to visit these remote places to experience them as if they were there with me.

The evening before, I had driven 100 miles from my home near Seattle into the mountains in my self-converted, 1998 yellow campervan. The remote trailhead was another 12 miles up a gravel road crossed by several small creeks. I wanted to get as far away from urban life as I could, to be in a place where it was unlikely I’d see another soul. I wanted to be alone in the mountains. That night I filmed time-lapses of the bright starry sky while getting my camera gear ready for a four-day mountain backpacking and fishing adventure.

The next morning, after a good night’s sleep on the small bed in the back of my van, and a cup of freshly brewed coffee made in my little side-door mounted kitchen, I set off on the trail. I was travelling as light as possible – in my backpack were my pole-less tent, sleeping bag, butane stove, water purifier, first aid kit, fishing rod and reel, lures and various other bits of tackle. However, my video equipment – five cameras, batteries, tripods and a solar charger – must have brought the weight to around 60lbs. “See you in four days,” I mumbled to my van before disappearing into the forest.

by Leif Steffny, The Guardian | Read more:
Image: YouTube/NW Fishing Secrets
[ed. For my grandson. Gotta give the guy an 'A' for enthusiasm (catching an 8" fish, carving a spoon, driving across a little stream...). Also packing as light as possible (plus the 60 lbs of video gear), then pulling out an avocado, tortilla, bottle of chipotle sauce, etc. : ) Strangely relaxing. See also: here and here.]