Monday, September 30, 2019


Michele Mikesell

Al Varlez

Top 20 Acoustic Guitar Intros of All Time


The Backroom Deal That Could’ve Given Us Single-Payer

Back in March 2009, leaks from the White House made it clear that a single-payer health insurance system was “off the table” as an option for health care reform. With that, the President ruled out the simplest and most obvious reform of the disaster that is US healthcare. Instituting single-payer would have meant putting US health insurance companies out of business and extending the existing Medicare or Medicaid to the entire population. Instead, over the following weeks the outlines of the bloated monstrosity known as Obamacare emerged: an impossibly complicated Rube Goldberg contraption, badly designed, incompetently executed, and increasingly hated by its intended beneficiaries.

The decision to abandon the nationalization of perhaps the most unpopular companies in the US is correctly attributed to the fundamental conservatism of the Obama White House, and its unwillingness to take on the health insurers, pharmaceutical companies, or any interest group willing and able to spend millions lobbying, hiring former politicians, and donating to campaigns. Obama’s “wimpiness,” his need to always take the path of least resistance, became common tropes among the American left. Obamacare, liberals claim, is the best possible reform that could’ve been wrangled out of the health insurance industry.

But were the many backroom deals that make up Obamacare really an easier alternative to nationalization? A look at the financial details reveals the opposite. In strictly financial terms, nationalization would have been the easiest way forward, costing relatively little and delivering immediate savings while making access to health care truly universal. Politically, Obama could have counted on the support of an unlikely ally of progressive causes: health insurance shareholders, the theoretical owners of those very companies, who would have been relieved of their then-dubious investments with a huge payout.

As of the end of 2008, the private insurance market covered 60 percent of the US population. For-profit insurers accounted for a large and growing share. The top five insurers accounted for 60 percent of the market — all but one of them for-profit companies. Absent a Bolshevik revolution, implementing a single-payer system would have required proper compensation to the owners of these institutions for their loss of future income — shareholders in the case of the for-profit insurers and, at least nominally, the policyholders in the case of most non-profits.

How much compensation? Well, in mid-2009, the total market capitalization of four out of the five top health insurers (the fifth is a nonprofit) amounted to about $60 billion. By then, the stock market had already rebounded nicely from the lows of the crisis, and the uncertainty over Obamacare had largely dissipated, so these were not particularly depressed valuations. Extrapolating this valuation to the rest of the health insurers would have put a price tag of about $120 billion on the whole racket.

This means that buying out the entire health insurance industry at an enormously generous premium of, say, 100 percent, would have cost the Treasury $240 billion — about 2 percent of 2009 gross domestic product. And this figure is highly inflated: premiums for buying out well-established companies rarely exceed 50 percent and are usually closer to 20 percent. Also, I am valuing the dubious claims of non-profit policyholders on par with the more vigorously enforced property rights of for-profit shareholders.

Other than the big smiles on the faces of health insurer shareholders across the country, what would have been the US Treasury’s payoff for writing a $240 billion check? Once again, the numbers are simple, and startling. US private insurance, whether for-profit or otherwise, may well be the most wasteful bureaucracy in human history, making the old Gosplan office look like a scrappy startup by comparison. Estimates of pure administrative waste range anywhere from 0.75 percent to 2.6 percent of total US economic output.

Extrapolating again from the biggest four for-profit insurers, in 2008 the industry as a whole claimed to spend 18.5 percent of the premiums it collected on things other than payments to providers. (The other 81.5 percent, which goes to paying for actual care, is known as the medical loss ratio; keeping this ratio down is a health insurer CEO’s top priority.) Medicare, by contrast, spends just 2 percent. The difference amounts to $130 billion, to which we must add the compliance costs the private insurers impose on health care providers — $28 billion, according to Health Affairs. The costs incurred by consumers are difficult to measure, although very real to anyone who’s spent an afternoon on the phone with a health insurance rep.

So, to recap, nationalization of the health insurance industry in 2009 would have cost no more (and almost certainly a lot less) than $240 billion. The savings in waste resulting from replacing the health insurance racket with an extension of Medicare would have resulted in no less than $158 billion a year. That’s an annualized return on investment of 66 percent. The entire operation would have paid for itself in less than 18 months, and after that, an eternity of administrative efficiency for free. And, of course, happy shareholders.
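The back-of-envelope arithmetic is easy to verify. Here is a minimal sketch in Python that reproduces the figures above; every input is an estimate quoted in this article, not independent data, and the 100 percent premium is the deliberately generous assumption discussed earlier.

```python
# Back-of-envelope check of the nationalization numbers (article's own estimates).
industry_value = 120e9        # extrapolated mid-2009 value of all health insurers
buyout_premium = 1.00         # deliberately generous 100% takeover premium
buyout_cost = industry_value * (1 + buyout_premium)      # ~$240 billion

overhead_gap_savings = 130e9  # 18.5% private overhead vs. 2% for Medicare
provider_compliance = 28e9    # compliance costs on providers (Health Affairs)
annual_savings = overhead_gap_savings + provider_compliance  # ~$158 billion/year

roi = annual_savings / buyout_cost                   # ~0.66, i.e. 66% per year
payback_months = 12 * buyout_cost / annual_savings   # ~18 months

print(f"Buyout cost:    ${buyout_cost / 1e9:.0f}B")
print(f"Annual savings: ${annual_savings / 1e9:.0f}B")
print(f"ROI: {roi:.0%}; payback: {payback_months:.0f} months")
```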

by Enrique Diaz-Alvarez, Jacobin | Read more:
Image: uncredited

Pain Patients Get Relief from War on Opioids

Ever since U.S. health authorities began cracking down on opioid prescriptions about five years ago, one vulnerable group has suffered serious collateral damage: the approximately 18 million Americans who have been taking opioids to manage their chronic pain. Pain specialists report that desperate patients are showing up in their offices, after being told by their regular physician, pharmacy or insurer that they can no longer receive the drugs or must shift to lower doses, no matter how severe their condition.

Abrupt changes in dosage can destabilize patients who have relied for many years on opioids, and the consequences can be dire, says Stefan Kertesz, an expert on opioids and addiction at the University of Alabama at Birmingham School of Medicine. “I’ve seen deaths from suicide and medical deterioration after opioids are cut.”

Last week, after roughly three years of intensive lobbying and alarming reports from the chronic pain community, the Food and Drug Administration (FDA) and the Centers for Disease Control and Prevention (CDC) took separate actions to tell clinicians that it is dangerous to abruptly curtail opioids for patients who have taken them long-term for pain. The FDA did so by requiring changes to opioid labels specifically warning about the risks of sudden and involuntary dose tapering. The agency cited reports of "serious withdrawal symptoms, uncontrolled pain, psychological distress, and suicide" among patients who have been inappropriately cut off from the painkillers.

One day later, CDC director Robert Redfield issued a clarification of the center’s 2016 “Guideline for Prescribing Opioids for Chronic Pain,” which includes cautions about prescribing doses above specific thresholds. Redfield’s letter emphasized that these thresholds were not intended for patients already taking high doses for chronic pain but were meant to guide first-time opioid prescriptions. The letter follows another recent clarification sent by the CDC to oncology and hematology groups, emphasizing that cancer patients and sickle cell patients were largely exempt from the guideline. (...)

Tougher rules on opioid prescriptions from federal and state authorities, health insurance companies, and pharmacies were an understandable response to the nation’s “opioid crisis,” an epidemic of abuse and overdose that led to a 345 percent spike in U.S. deaths related to legal and illicit opioids between 2001 and 2016. Since 2016, most fatal overdoses have involved illegally produced fentanyl sold on the street, according to CDC data, but past research has shown that many victims got started with a prescription opioid such as oxycodone.

The CDC’s 2016 guideline was aimed at reining in irresponsible prescribing practices. (The agency’s own analysis showed that prescriptions for opioids had quadrupled between 1999 and 2010.) The guideline stressed that the first-line treatments for chronic pain are non-opioid medications and non-drug approaches such as physical therapy. When resorting to opioids, the guideline urged doctors to prescribe “the lowest effective dosage,” to carefully size up risks versus benefits when raising doses above 50 morphine milligram equivalents (MME) a day, and to “carefully justify a decision” to go to 90 MME or above.
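For reference, an MME total is computed by multiplying each drug’s daily dose by a drug-specific conversion factor and summing across prescriptions. Below is a minimal sketch using a handful of the CDC’s commonly published conversion factors; real clinical decisions should rely on the CDC’s full table, not this illustration.

```python
# Daily morphine milligram equivalents (MME): dose x factor, summed per drug.
# Factors are the CDC's commonly published values for a few oral opioids.
CONVERSION_FACTORS = {
    "morphine": 1.0,
    "hydrocodone": 1.0,
    "oxycodone": 1.5,
    "oxymorphone": 3.0,
    "hydromorphone": 4.0,
    "codeine": 0.15,
}

def daily_mme(prescriptions):
    """prescriptions: iterable of (drug, mg_per_dose, doses_per_day) tuples."""
    return sum(mg * per_day * CONVERSION_FACTORS[drug]
               for drug, mg, per_day in prescriptions)

# Example: 10 mg of oxycodone four times a day is 60 MME,
# above the guideline's 50 MME threshold for weighing risks and benefits.
print(daily_mme([("oxycodone", 10, 4)]))  # 60.0
```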

That advice on dosage was widely misinterpreted as a hard limit for all patients. Kertesz has collected multiple examples of letters from pharmacies, medical practices and insurers that incorrectly cite the guideline as a reason to cut off long-term opioid patients.

Frank Gawin, a retired psychiatrist in Hawaii, is one of many chronic pain sufferers ensnared by that kind of mistake. For 20 years he took high-dose opioids (about 400 MME daily) to manage extreme pain from complications of Lyme disease. Gawin, an expert on addiction himself, was well aware of the risks but notes that he stayed on the same dose throughout those 20 years. “It helped me profoundly and probably extended my career by 10 to 15 years,” he says. About five months ago, his doctor, a pain specialist he prefers not to name, informed Gawin and other patients that she would be tapering everyone below 80 MME because she was concerned about running afoul of medical authorities. Gawin has not yet reached that goal, but his symptoms have already returned with a vengeance. “As I am talking to you, I am in pain,” he said in a phone interview. “I’m having trouble concentrating. I’m depleted. I’m not myself.”

Last week’s federal actions could go a long way in informing physicians not to cut off patients like Gawin. Of particular value, say patient advocates and experts, is the emphasis on working together with patients on any plan to taper the drugs. “It’s finally about patient consent,” says Andrea Anderson, former executive director of the Alliance for the Treatment of Intractable Pain, an advocacy group. She notes that the FDA urged doctors to create an individualized plan for patients who do wish to taper and that the agency stated that “No standard opioid tapering schedule exists that is suitable for all patients.”

by Claudia Wallis, Scientific American | Read more:
Image: Getty
[ed. Thanks government, medical community, insurers, prescribers, politicians and media, you've made life miserable for tens of millions of people. See also: Pain Patients to Congress: CDC's Opioid Guideline Is Hurting Us (MedPage Today); Suicides Associated With Forced Tapering of Opiate Pain Treatments (JEC); and How Opioid Critics and Law Firms Profit From Litigation (Pain News Network).]

A New Theory of Obesity

Nutrition researcher Kevin Hall strives to project a Zen-like state of equanimity. In his often contentious field, he says he is more bemused than frustrated by the tendency of other scientists to “cling to pet theories despite overwhelming evidence that they are mistaken.” Some of these experts, he tells me with a sly smile, “have a fascinating ability to rationalize away studies that don’t support their views.”

Among those views is the idea that particular nutrients such as fats, carbs or sugars are to blame for our alarming obesity pandemic. (Globally the prevalence of obesity nearly tripled between 1975 and 2016, according to the World Health Organization. The rise accompanies related health threats that include heart disease and diabetes.) But Hall, who works at the National Institute of Diabetes and Digestive and Kidney Diseases, where he runs the Integrative Physiology section, has run experiments that point the finger at a different culprit. His studies suggest that a dramatic shift in how we make the food we eat—pulling ingredients apart and then reconstituting them into things like frosted snack cakes and ready-to-eat meals from the supermarket freezer—bears the brunt of the blame. This “ultraprocessed” food, he and a growing number of other scientists think, disrupts gut-brain signals that normally tell us that we have had enough, and this failed signaling leads to overeating.

Hall has done two small but rigorous studies that contradict the common wisdom that faults carbohydrates or fats by themselves. In both experiments, he kept participants in a hospital for several weeks, scrupulously controlling what they ate. His idea was to avoid the biases of typical diet studies that rely on people’s self-reports, which rarely match what they truly eat. The investigator, who has a physics doctorate, has that discipline’s penchant for precise measurements. His first study found that, contrary to many predictions, a diet that reduced carb consumption actually seemed to slow the rate of body fat loss. The second study, published this year, identified a new reason for weight gain. It found that people ate hundreds more calories of ultraprocessed than unprocessed foods when they were encouraged to eat as much or as little of each type as they desired. Participants chowing down on the ultraprocessed foods gained two pounds in just two weeks.

“Hall’s study is seminal—really as good a clinical trial as you can get,” says Barry M. Popkin, a professor of nutrition at the University of North Carolina at Chapel Hill, who focuses on diet and obesity. “His was the first to prove that ultraprocessed foods are not only highly seductive but that people tend to eat more of them.” The work has been well received, although it is possible that the carefully controlled experiment does not apply to the messy way people mix food types in the real world.

The man who designed the research says he is not on a messianic mission to improve America’s eating habits. Hall admits that his four-year-old son’s penchant for chicken nuggets and pizza remains unshakable and that his own diet could and probably should be improved. Still, he believes his study offers potent evidence that it is not any particular nutrient type but the way in which food is manipulated by manufacturers that plays the largest role in the world’s growing girth. He insists he has no dog in the diet-wars fight but is simply following the evidence. “Once you’ve stepped into one camp and surrounded yourself by the selective biases of that camp, it becomes difficult to step out,” he says. Because his laboratory and research are paid for by the national institute no matter what he finds, Hall notes, “I have the freedom to change my mind. Basically, I have the privilege to be persuaded by data.” (...)

Processed Calories

Hall likes to compare humans to automobiles, pointing out that both can operate on any number of energy sources. In the case of cars, it might be diesel, high-octane gasoline or electricity, depending on the make and model. Similarly, humans can and do thrive on any number of diets, depending on cultural norms and what is readily available. For example, a traditional high-fat/low-carb diet works well for the Inuit people of the Arctic, whereas a traditional low-fat/high-carb diet works well for the Japanese. But while humans have evolved to adapt to a wide variety of natural food environments, in recent decades the food supply has changed in ways to which our genes—and our brains—have had very little time to adapt. And it should come as no surprise that each of us reacts differently to that challenge.

At the end of the 19th century, most Americans lived in rural areas, and nearly half made their living on farms, where fresh or only lightly processed food was the norm. Today most Americans live in cities and buy rather than grow their food, increasingly in ready-to-eat form. An estimated 58 percent of the calories we consume and nearly 90 percent of all added sugars come from industrial food formulations made up mostly or entirely of ingredients—whether nutrients, fiber or chemical additives—that are not found in a similar form and combination in nature. These are the ultraprocessed foods, and they range from junk food such as chips, sugary breakfast cereals, candy, soda and mass-manufactured pastries to what might seem like benign or even healthful products such as commercial breads, processed meats, flavored yogurts and energy bars.

Ultraprocessed foods, which tend to be quite high in sugar, fat and salt, have contributed to an increase of more than 600 available calories per day for every American since 1970. Still, although the rise of these foods correlates with rising body weights, this correlation does not necessarily imply causation. There are plenty of delicious less processed foods—cheese, fatty meats, vegetable oil, cream—that could play an equal or even larger role. So Hall wanted to know whether it was something about ultraprocessing that led to weight gain. “Basically, we wondered whether people eat more calories when those calories come from ultraprocessed sources,” he says. (...)

A Gut-Brain Disconnect

Why are more of us tempted to overindulge in egg substitutes and turkey bacon than in real eggs and hash brown potatoes fried in real butter? Dana Small, a neuroscientist and professor of psychiatry at Yale University, believes she has found some clues. Small studies the impact of the modern food environment on brain circuitry. Nerve cells in the gut send signals to our brains via a large conduit called the vagus nerve, she says. Those signals include information about the amount of energy (calories) coming into the stomach and intestines. If that information is scrambled, the mixed signal can result in overeating. If “the brain does not get the proper metabolic signal from the gut,” Small says, “the brain doesn’t really know that the food is even there.”

Neuroimaging studies of the human brain, done by Small and others, indicate that sensory cues—smells and colors and texture—that accompany foods with high-calorie density activate the striatum, a part of the brain involved in decision-making. Those decisions include choices about food consumption.

And that is where ultraprocessed foods become a problem, Small says. The energy used by the body after consuming these foods does not match the perceived energy ingested. As a result, the brain gets confused in a manner that encourages overeating. For example, natural sweeteners—such as honey, maple syrup and table sugar—provide a certain number of calories, and the anticipation of sweet taste prompted by these foods signals the body to expect and prepare for that calorie load. But artificial sweeteners such as saccharin offer the anticipation and experience of sweet taste without the energy boost. The brain, which had anticipated the calories and now senses something is missing, encourages us to keep eating.

To further complicate matters, ultraprocessed foods often contain a combination of nutritive and nonnutritive sweeteners that, Small says, produces surprising metabolic effects that result in a particularly potent reinforcement effect. That is, eating them causes us to want more of these foods. “What is clear is that the energetic value of food and beverages that contain both nutritive and nonnutritive sweeteners is not being accurately communicated to the brain,” Small notes. “What is also clear is that Hall has found evidence that people eat more when they are given highly processed foods. My take on this is that when we eat ultraprocessed foods we are not getting the metabolic signal we would get from less processed foods and that the brain simply doesn’t register the total calorie load and therefore keeps demanding more.”

by Ellen Ruppel Shell, Scientific American |  Read more:
Image: Jamie Chung (photo); Amy Henry (prop styling); Source: “NOVA. The Star Shines Bright,” by Carlos A. Monteiro et al., in World Nutrition, Vol. 7, No. 1; January-March 2016

How the Puffy Vest Became a Symbol of Power

In a recent episode of the HBO series "Succession", the powerful Roy clan at the centre of the show attend a conference for billionaires at an exclusive mountain resort.

The audience learns everything they need to know about the characters from their puffer vests. Kendall Roy, played by Jeremy Strong, wears a Cucinelli puffer vest, and his brother Roman (Kieran Culkin) wears a Ralph Lauren one. Their brother-in-law, Tom (Matthew Macfadyen), sports a shiny Moncler number. When they enter a cocktail party, they are surrounded by wealthy folk decked out in puffer vests of their own.

Michelle Matland, the costume designer for "Succession", which also airs on Sky Atlantic in the UK, told BoF the vests were chosen precisely because they have become so closely associated with the one percent. From tech titans like Amazon founder Jeff Bezos to billionaire investor John Henry to Lachlan Murdoch, son of media titan Rupert Murdoch and one of the rumoured inspirations for "Succession," the bulky, down-filled puffer vest has become the fashion item of choice for the ultra-wealthy.

“The costume was stolen directly from the world of billionaires,” Matland said. “[The Roys] are self-aware, and know how to take advantage of situations, so of course they are going to be wearing puffy vests. It’s their veneer of strength.”

In addition to serving as a status symbol, puffer vests are also big business for luxury brands. Searches for the item were up 7 percent on Lyst last year. Men who have zero fashion sensibility will happily drop $1,000 or more on a Moncler or Cucinelli puffer vest, said Victoria Hitchcock, a stylist who works with Silicon Valley professionals.

“A lot of these guys don’t want to be too ambitious with their style choices, but will still wear luxury vests because they can stand out with it and still keep their simplicity sort of style,” Hitchcock said.

Brands like Moncler, Herno, Canada Goose and Cucinelli incorporate the puffer vests into their permanent collections. Balenciaga, Burberry and Prada are among the luxury brands that also sell puffers.

Non-luxury brands like Patagonia, Uniqlo and the North Face count the puffer vest among their best sellers too. (These brands are better known for the puffer vest’s popular cousin, the fleece vest, which has itself become so popular in New York’s business and tech worlds that it is sometimes referred to as the “Midtown Uniform.”)

The puffer vest is an offshoot of the puffer jacket, invented by Australian chemist George Finch, who made a coat from balloon fabric and feather down for an early attempt by British explorers to climb Mount Everest in 1922. Brands like Eddie Bauer and the North Face took his design to the masses, but the product was mainly reserved for outdoor enthusiasts and the working class, said fashion historian Laura McLaws Helms.

“It was popular in the labour movement, at construction sites because it was a utilitarian garment,” she said. “That the richest men in America are wearing puffer vests is a huge leap from its roots.”

Over the last five years, though, the puffer vest has been co-opted by the tech industry, initially via brands like Patagonia. The item also rode the nostalgia trend, as men who grew up watching Marty McFly from “Back to the Future” in his red puffer entered the workforce.

by Chavie Lieber, BoF |  Read more:
Image: Rachel Deeley for BoF

Saturday, September 28, 2019

Metric


Genie Espinosa

Ready, Fire, Aim: U.S. Interests in Afghanistan, Iraq, and Syria

I have been asked to join my fellow panelists in speaking about U.S. interests in Afghanistan, Iraq, and Syria. For some reason, our government has never been able to articulate these interests, but, judging by the fiscal priority Americans have assigned to these three countries in this century, they must be immense – almost transcendent. Since we invaded Afghanistan in 2001, we have spent more than $5 trillion and incurred liabilities for veterans’ disabilities and medical expenses of at least another trillion dollars, for a total of something over $6 trillion for military efforts alone.

This is money we didn’t spend on sustaining, still less improving, our own human and physical infrastructure or current and future well-being. We borrowed almost all of it. Estimates of the costs of servicing the resulting debt run to an additional $8 trillion over the next few decades. Future generations of Americans will suffer from our failure to invest in education, scientific research, and transportation. On top of that, we have put them in hock for at least $14 trillion in war debt. Who says foreign policy is irrelevant to ordinary Americans?

At the moment, it seems unlikely our descendants will feel they got their money’s worth. We have lost or are losing all our so-called “forever wars.” Nor are the people of West Asia and North Africa likely to remember our interventions favorably. Since we began them in 2001, well over one million individuals in West Asia have died violent deaths. Many times more than that have died as a result of sanctions, lost access to medical care, starvation, and other indirect effects of the battering of infrastructure, civil wars, and societal collapse our invasions have inflicted on Afghanistan, Iraq, Libya, and Syria and their neighbors.

The so-called “Global War on Terrorism” launched in Afghanistan in 2001 has metastasized. The U.S. armed forces are now combating “terrorism” (and making new enemies) in eighty countries. In Syria alone, where since 2011 we have bombed and fueled proxy wars against both the Syrian government and its extremist foes, nearly 600,000 have died. Eleven million have been driven from their homes, five million of them into refuge in other countries.

Future historians will struggle to explain how an originally limited post-9/11 punitive raid into Afghanistan morphed without debate into a failed effort to pacify and transform the country. Our intervention began on October 7, 2001. By December 17, when the battle of Tora Bora ended, we had accomplished our dual objectives of killing, capturing, or dispersing the al Qaeda architects of “9/11” and thrashing the Taliban to teach them that they could not afford to give safe haven to the enemies of the United States. We were well placed then to cut the deal we now belatedly seek to make, demanding that the governing authorities in Afghanistan deny their territory to terrorists with global reach as the price of our departure, and promising to return if they don’t.

Instead, carried away with our own brilliance in dislodging the Islamic Emirate from Kabul and the ninety percent of the rest of the country it then controlled, we nonchalantly moved the goal posts and committed ourselves to bringing Afghans the blessings of E PLURIBUS UNUM, liberty, and gender equality, whether they wanted these sacraments or not. Why? What interests of the United States – as opposed to ideological ambitions – justified this experiment in armed evangelism?

The success of policies is measurable only by the extent to which they achieve their objectives and serve a hierarchy of national interests. When, as in the case of the effort to pacify Afghanistan and reengineer Iraq, there is no coherent statement of war aims, one is left to evaluate policies in terms of their results. And one is also left to wonder what interests those policies were initially meant to support or advance.

In the end, our interests in Afghanistan seem to have come down to avoiding having to admit defeat, keeping faith with Afghans whose hopes we raised to unrealistic levels, and protecting those who have collaborated with us. In other words, we have acted in accordance with what behavioral economists call “the fallacy of sunk costs.” We have thrown good money after bad. We have doubled down on a losing game. We have reinforced failure.

To justify the continuation of costly but unsuccessful policies, our leaders have cited the definitive argument of all losers, the need to preserve “credibility.” This is the theory that steadfastness in counterproductive behavior is better for one’s reputation than acknowledging impasse and changing course. By hanging around in Afghanistan, we have indeed demonstrated that we value obduracy above strategy, wisdom, and tactical flexibility. It is hard to argue that this has enhanced our reputation internationally. (...)

By taking over Iraq, we successfully prevented Baghdad from transferring nonexistent weapons to terrorist groups that did not exist until our thoughtless vivisection of Iraqi society created them. We also destroyed Iraq as the balancer and check on Iran’s regional ambitions, an interest that had previously been a pillar of our policies in the Persian Gulf. This made continued offshore balancing impossible and compelled us for the first time to station U.S. forces in the region permanently. This, in turn, transformed the security relationship between the Gulf Arabs and Iran from regional rivalry into military confrontation, producing a series of proxy wars in which our Arab protégés have demanded and obtained our support.

Our intervention in Iraq ignited long-smoldering divisions between Shiite and Sunni Islam, fueling passions that have undermined religious tolerance and fostered terrorism both regionally and worldwide. The only gainers from our misadventures in Iraq were Iran and Israel, which saw their most formidable Arab rival flattened, and, of course, the U.S. defense and homeland security budgets, which fattened on the resulting threat of terrorist blowback. Ironically, the demise of Iraq as an effective adversary thrust Israel into enemy deprivation syndrome, leading to its (and later our) designation of Iran as the devil incarnate. Israel, joined by Saudi Arabia and the UAE, believes that the cure for its apprehensions about Iran is for the U.S. military to crush it on their behalf.

The other principal legacies of our lurch into strategy-free militarism, aside from debt and a bloated defense budget, are our now habitual pursuit of military solutions to non-military problems, our greatly diminished deference to foreign sovereignty and international law, domestic populism born of war weariness and disillusionment with Washington, declining willingness of allies to follow us, the incitement of violent anti-Americanism among the Muslim fourth of humanity, the entrenchment of Islamophobia in U.S. politics, and the paranoia and xenophobia these developments have catalyzed among Americans. (...)

To say, “we meant well” is true – as true of the members of our armed forces as it is of our diplomats and development specialists. But good intentions are not a persuasive excuse for the outcomes wars contrive. We have hoped that the many good things we have done to advance human and civil rights in Afghanistan and Iraq might survive our inevitable disengagement from both. They won’t. The years to come are less likely to gratify us than to force us to acknowledge that the harm we have done to our own country in this century vastly exceeds the good we have done abroad.

by Chas. W. Freeman |  Read more:
[ed. See also: 10 Ways that the Climate Crisis and Militarism are Intertwined (Counterpunch).]

For All Fankind

When Marvel Studios was founded in the summer of 1996, superheroes were close to irrelevant. Comic book sales were in decline, Marvel’s initially popular Saturday morning cartoons were waning, and the company’s attempts over the previous decades to break through in Hollywood had gone nowhere, with movies based on Daredevil, the Incredible Hulk, and Iron Man all having been optioned without any film being made. Backs against the wall, Marvel’s executives realized that their only chance of getting traction in La La Land was by doing the legwork themselves.

The company’s fortunes hardly turned around overnight. Marvel was forced to fire a third of its employees and declare bankruptcy a few months after launching its film studio, and the movie rights to Spider-Man—then the company’s most valuable piece of intellectual property—were sold off in the ensuing years in a frantic attempt to raise cash. It wasn’t until 2008 that Marvel Studios finally released an Iron Man movie—the choice of protagonist having less to do with that hero’s particular following than the ease with which the toy company that had taken control of Marvel during its bankruptcy could market action figure tie-ins. Against all expectations, Iron Man made half a billion dollars worldwide. Just over a year later, Disney purchased Marvel Studios for $4 billion. A decade after that, Avengers: Endgame would break the weekend box-office record—set by the previous Avengers installment—and net over $2 billion in less than two weeks.

New York magazine’s Vulture vertical was launched the year before Iron Man’s release, promising “a serious take on lowbrow culture.” A few months later, Chris Hardwick began a blog called “The Nerdist,” which quickly pivoted from its original raison d’être of “palatable tech” to dispatches on ephemera from the original Transformers movie and guest posts about DC’s Silver Age reboot. Today, each site serves as a lodestar for overlapping fandoms, with Vulture hosting Game of Thrones, Stranger Things, and The Bachelor content, while Nerdist continues to concentrate on legacy franchises like Star Wars and Marvel. As their staffs crank out daily updates, prognostications, and YouTube clips on these and many other television and movie series, their success has pressured older outlets to shift from a more traditional, criticism-centric format to a menu of recaps and listicles, as well as inspiring newer, general interest sites like The Ringer and Vox to integrate fan-pleasing deeply into their pop culture coverage.

As the fandom press has risen, culture has been reorganized around a cluster of franchises that would have been dismissed by the critics of previous generations as the province of children, nerds, or—most especially—nerdy children. Success in Hollywood now has as much to do with the number of people who see a particular film or TV show as with how easily its intellectual property can be franchised. Why settle for one Iron Man when you could have over a decade of Avengers movies? For both Hollywood and the digital newsrooms of Vulture, Nerdist, and their imitators, the logic is obvious: cater to a readymade fanbase, and the dollars will take care of themselves.

Fishing for Eyeballs

In a 2016 Variety guest column, Hollywood’s shift from chasing viewers to pursuing fans was convincingly attributed to “digital empowerment” by the cultural anthropologist-cum-industry consultant Susan Kresnicka. Including herself among the new legions of fans, she writes that combining a capability for “consuming, connecting and creating on our own terms” with “access to multitudes of others who share our passion for a show, movie, book, story, character, sport, band, artist, video game, brand, product, hobby, etc.” galvanizes mere interest into a commercial force that drives enthusiasts to “watch more, share more, buy more, evangelize more, participate more, help more.”

“Marketing strategies are increasingly crafted to drive not just breadth but depth of engagement,” Kresnicka notes. “And the conversation has in large part moved from how to ‘manage’ fans to how to ‘relate’ to fans.” A classic example of this shift is the slow-drip of news that precedes every new Star Wars or superhero film, a process that typically begins more than two years ahead of a theatrical release. First comes the announcement about the movie itself. Next, rumors swirl about who will direct and star. In front of a ballroom of cosplayers at San Diego Comic Con, a teaser will ramp up speculation even further. The proper trailer will arrive months later, dropped online with no advance warning to incite delirium on social media. All the while, an armada of YouTube speculators cultivate theories, half-baked or coolly rational, about how this latest installment will fit into a sometimes branching, sometimes ouroborosian plot arc that spans decades.

Studios have come to understand that by lengthening each film’s advance publicity cycle, fans are given more opportunities to demonstrate their fandom, amplifying the FOMO of casual viewers such that they, too, are driven to see what all the fuss is about. Each new crumb of information becomes a reason to post on Facebook, a kernel of brand awareness to drive the decision to buy an overpriced hoodie at the mall. Multiplying that effect is the fact that the lead times for these films are now so long that there is never not a new movie to talk about. Solo: A Star Wars Story didn’t live up to your expectations? Good news, the cast for Episode IX has just been announced! (...)

Such mining of the smallest news drops for content is everywhere in the fandom press. But what really sets these outlets apart from buttoned-up operations like the New York Times and CNN—each more than happy to crib a few clicks by throwing a link to the newest Star Wars teaser up on their website—is the length to which they’ll go to dissect the utterly banal. The release of the Star Wars: The Rise of Skywalker trailer merited not only a quick embedded video post from Vulture but also a thousand-word follow-up analyzing its title.

Titles, as it turns out, are irresistible to the fandom press. Last December, Netflix released a clip that did nothing beyond reveal the names of each episode in the third season of Stranger Things, which flashed briefly onscreen while spooky music played. The one-minute video merited a blog post on Vulture. And Nerdist. And Entertainment Weekly. And Variety. Once a fandom has been identified, every piece of content, no matter how inconsequential, becomes an excuse to go fishing for eyeballs.

by Kyle Paoletta, The Baffler | Read more:
Image: Zoë van Dijk

Astrid Echle, Blue Rabbits

The Pre-College Racket

Among the thousands of personal appeals on the crowdfunding site GoFundMe, you’ll find a 2017 campaign for a young woman named Kirstin, a then high school junior with wavy light brown hair, hazel eyes, and a smile that hints at suppressed excitement.

“Kirstin’s Invited to Stanford!” the page, created by Kirstin’s aunt, declares. “My 16-year-old niece has been offered a once-in-a-lifetime opportunity. After working hard her entire school career to achieve a goal, she has done it!”

Kirstin, it turns out, was not admitted as an undergraduate, but was raising funds for an “Intensive Law & Trial” summer program offered on the Stanford University campus. Tuition for the ten-day program runs to $4,095, not including airfare and pocket money. “Stanford, one of the most prestigious law schools in the country, is impressed enough with her to have invited her to this program in Palo Alto, California this summer,” the post continues. “Her extended family is trying hard to raise the deposit of $800.00 by week’s end so this opportunity does not slip through her fingers.”

Search “pre-college” on GoFundMe.com and you’ll find dozens of similar campaigns from hopeful students dazzled by the allure of two weeks on an elite campus. “Going to the Summer @ Brown PreCollege Program would give me a preview of what life would be like if I attend the school of my dreams,” reads a 2018 campaign by Benjina, from Newark, New Jersey. “This program will give me the experience of a lifetime,” writes Yakeleen, a high schooler from Tucson, Arizona, hoping to raise $2,200 to attend Harvard’s pre-college program. “Coming from a low income background while being a first generation student, this is a grand oppurtunity [sic] I intend on taking advantage of.”

These posts reflect the growing trend of summer “pre-college” programs at the nation’s most prestigious universities. Stanford, which launched its “pre-collegiate studies” program in 2012, hosts three-week summer sessions for high schoolers with course options on more than fifty different subjects, in addition to the mock trial program Kirstin hoped to attend. Similar programs abound at other elite institutions. In fact, of the top forty schools ranked in U.S. News & World Report, all but one—Dartmouth—offer some sort of summer program for high school students (and, in some cases, even middle schoolers). “More and more colleges and universities are offering short-term on-campus programs that offer a taste of what life would be like at their institution,” reports the International Association for College Admission Counseling.

These programs can offer precocious teens an enriching, hands-on preview of college life. But they also exploit both the allure of brand-name universities and families’ anxieties about an increasingly cutthroat college admissions process in which “summer experiences” matter. While even ambitious teens once spent their summers scooping ice cream or lazing by the pool, they now choose from a dizzying array of summer options, including trips to every corner of the planet and camps in every subject from robotics to equestrianism. “Admissions officers want to see that students are spending at least a few of their weeks productively during the summer,” said Andrew Belasco, CEO of the college advising firm College Transitions.

The popularity of summer pre-college programs suggests that many kids and parents see them as a good way to get a leg up on college admissions. And many universities, including Columbia and Johns Hopkins, explicitly encourage that belief. But admissions experts I spoke to were unanimous that, when it comes to getting into college, the benefits of most pre-college programs are negligible. The big winners, rather, are the schools themselves, who use pre-college programs to generate millions of dollars in revenue while relying on marketing practices that oversell the programs’ benefits, including elaborate admissions processes that imply a misleading degree of selectivity.

And while the target demographic is most likely the sort of upper-middle-class family that can afford expensive private university education, it’s clear that the universities are consciously drawing in families who struggle to afford the programs’ high costs. Some schools, including Stanford, distribute “fundraising guides” encouraging students to solicit contributions, including through crowdsourcing sites like GoFundMe. “With successful planning, creativity and resilience, students have worked with their community to achieve the goal of funding,” Stanford’s guide reads. “This is a great opportunity to gain leadership skills and connect to your community.”

For Kirstin’s family, creativity appears to have taken the form of debt. “We came up short but Kirstin saved and raised $650.00 on her own,” her aunt wrote in an update posted July 2017. “Brian and I put the balance of her tuition on credit because we are not letting this pass her by.” (...)

College admissions experts say that for many families, these experiences aren’t worth the often very hefty price tags. Harvard’s pre-college program costs $4,600 for a two-week session, while Brown charges $2,776 for one week and $6,976 for a four-week version.

One reason these programs don’t blow the socks off admissions officers is that they don’t reflect either the academic rigor or the selective admissions of the institutions that host them. Many pre-college programs are run by separate departments within a university (often the school of professional studies), or even by an outside company, and so have no connection to undergraduate education or admissions.

Among the private, for-profit companies that run pre-college programs is Envision, a subsidiary of the global educational travel company WorldStrides. In addition to programs at Johns Hopkins, UCLA, Yale, Rice, Georgia Tech, and other schools, Envision runs the Stanford-based mock trial program that was the subject of Kirstin’s GoFundMe campaign. Although that program hires Stanford Law School faculty to help teach some classes, the fine print on the Envision site notes, “This cultural excursion is not affiliated with Stanford Law School in any way.” It is, in other words, a side hustle for Stanford professors. Children “invited” to attend are invited by the company, which also runs the admissions process, not by Stanford Law. Likewise with Envision’s “Global Young Leaders Conference,” a ten-day excursion costing $3,095 that includes embassy visits, a tour of D.C., and “real world simulations” that seem very much like what one would do in a high school Model United Nations. Like the mock trial program, the application process is managed entirely by the company, although college credit is offered through George Mason University. (...)

While some programs require a minimum GPA, the standard tends to be forgiving. Johns Hopkins, for example, where the average high school GPA of incoming freshmen is 3.93, requires only a 3.0 minimum GPA for its summer “immersion” program ($2,575, one week, no college credit). “Most of our programs are not super selective,” said Liz Ringel, chief marketing officer for Summer Discovery, a company that runs pre-college programs on fourteen campuses, including the University of Pennsylvania, Johns Hopkins, and other top-tier institutions. “We want to make sure that students are in good academic standing, they haven’t been expelled, they don’t have any disciplinary action against them, and they are going to enjoy the experience on campus.”

Ultimately, schools may be less interested in a student’s academic brilliance than in their ability to pay. Among the college admissions consultants I interviewed, the consensus was that a primary purpose of these pre-college summer programs is making money. “Colleges are businesses, and one of the reasons they run summer programs is because they have all of these empty dorm rooms that ideally they could fill with people and make use of the resources that are already there,” said Bright Horizons’ Heaton. In 2015, a Brown University administrator told the campus newspaper that the school’s summer program had brought in $6 million that year, 70 percent of which was essentially profit. “The summer program,” the paper reported, “is one of several efforts administrators have made in recent years to diversify the University’s revenue streams and reduce its reliance on undergraduate tuition.”

by Anne Kim, Washington Monthly |  Read more:
Image: Chris Matthews

Friday, September 27, 2019

Japanese Breakfast

Song of My Self-Care

There was a time when the Internet seemed to promise the world to the world. When it appeared to be opening up a benign, infinite network of possibilities, in which everyone was enfranchised and newly accessible to one another as they were drawn, in one of Jia Tolentino’s many felicitous phrases, to the “puddles and blossoms of other people’s curiosity and expertise.” It would be a world in which hierarchies in whatever guise would be upended, a democratic forum to rival and exceed the philosophical marketplace of ancient Greece (no exclusion of anyone, not women, not slaves). At the very least, it was a place where, because you could be sure that someone out there was listening, you would find yourself able to articulate the thoughts that, for lack of an audience, had previously threatened to remain forever unspoken, stuck to the tip of your tongue.

This was the world that Tolentino, born in Canada to parents from the Philippines, saw burgeoning all around her as she grew up in Texas. In a way, she was primed for the illimitable expanse of the Internet by her Christian upbringing, which teaches its followers that everyone on earth is being watched by God. It gave her a flight of optimism, before this same system slowly but surely “metastasize[d] into a wreck”: “this feverish, electric, unlivable hell.” While the Internet was meant to allow you to reach out to any- and everyone without a hint of the cruel discriminations that blight our world, it turned into the opposite, a forum where individuals are less speaking to other people than preening and listening to themselves—turning themselves into desirable objects to be coveted by all. It became, that is, the perfect embodiment of consumer capitalism, where everything can be touted in the marketplace.

How, Tolentino asks, did the idea take hold that “ordinary personhood would seamlessly adjust itself around whatever within it would sell”? How did our basic humanity come to be “reframed as an exploitable viral asset”? We are in danger, as she quotes Werner Herzog saying of psychoanalysis, of losing “our dark corners and the unexplained,” of making ourselves “uninhabitable.” “It’s as if we’ve been placed on a lookout that oversees the entire world,” Tolentino writes, “and given a pair of binoculars that makes everything look like our own reflection.” Hence the title of this collection, Trick Mirror. The more our image appears to inflate our value, the more our vision shrinks to our own measure—and the more we succumb to the old, imperial delusion that allows us to believe we can command and control the furthest reaches of the universe as well as ourselves, regardless of the consequences (“reflections on self-delusion” is the subtitle to the book).

Tolentino is known to readers of The New Yorker, and before that to readers of the websites The Hairpin and Jezebel. In this collection of always trenchant and at times luminous essays, she establishes herself as the important critical voice she has been on her way to becoming for some time, although comparisons with Susan Sontag and Joan Didion seem to me unhelpful—as if, for a woman writer, theirs are the only hills to climb. For Tolentino, this book is the fulfillment of a long-held dream, to claim her place in a higher culture than the one that threatened to devour her as a young girl. (...)

Tolentino knows she is implicated in the world she lays out here with such merciless precision. In the end, she is the last person she wants to let off the hook. “I have felt so many times,” she writes in an essay on scamming as the new American deal, which takes in everything from bailing out the bankers after the crash of 2008 to the student debt disaster, Facebook, and the campaign lies of Donald Trump, “that the choice of this era is to be destroyed or to morally compromise ourselves in order to be functional—to be wrecked, or to be functional for reasons that contribute to the wreck.” Pointing the finger at herself is something of a refrain: “I live very close to this scam category, perhaps even inside it…. I am part of that world…even if I criticize its emptiness”; “Lately I’ve been wondering how everything got so intimately terrible, and why, exactly, we keep playing along.” With this hum of disapproval, Tolentino is describing not just the “ethical brokenness” that she believes is the quandary of critical thought in our time—her deepest implication as a citizen in the unjust structures she laments. She is also issuing a warning. She is instructing her readers, first and foremost herself, not to get too comfortable, or to enjoy her writing too much.

Tolentino is a woman of color in her early thirties, from a relatively privileged family that moved up and down in the middle class. Having been able to pay for their daughter’s private education in grade school, her parents hit serious financial trouble (their house was repossessed) about the time she joined the reality show on MTV. From a relatively young age, she therefore learned that she would have to earn for herself the privilege and stability to which she had almost become accustomed (she chose the University of Virginia over Yale, because Virginia awarded her a scholarship). Before being hired by The New Yorker, she was barely earning $35,000 a year, almost the exact sum she has devoted over the past decade to friends’ weddings (a rite that she clearly sees as another scam, although it is by no means clear who—those getting married or those like herself who have so far refused to do so—she disapproves of most).

She could hardly, then, be expected to be immune to the lure of capital. Nor to the ideal of perfection that, especially for women, is its increasingly coercive accompaniment. Hence, again, “mirror,” the surface in which women never stop searching for unattainable beauty because somewhere deep down they have been conned into thinking that, merely by dint of being women and despite whatever efforts they make, they are flawed beyond repair. “You make appointments with mirrors,” a friend once said to me. The last thing a woman expects or remotely wishes to see in the mirror is herself. “Optimization”—Tolentino’s term for this insane hunt that turns women into their own quarry—is a counsel of despair. “I like trying to look good,” she writes, “but it’s hard to say how much you can genuinely, independently like what amounts to a mandate.”

by Jacqueline Rose, NYRB | Read more:
Image: Joanna Neborsky

Thursday, September 26, 2019


Luis Pérez, citizen no. 25, The Letter/London Cafe, 2017

Rafael Ochoa, Autumn Quinces, 2014

Letter from Hong Kong

"I think you should still have the party later in the month,” the gallery owner was saying to a friend. We were at the opening of an art show. “It’s important to let people know that life still goes on.”

Life went on—at a gallery, a hotel bar by the water, hot pot at midnight—until out of nowhere, people who were just on one side of the room laughing to themselves rushed to the window where I was sitting to take a better look at the scene below. Riot police were chasing a protester down a barricaded thoroughfare in the middle of a busy shopping district. “Do they really need seven cops in full gear against someone walking around in a T-shirt and jeans?” I asked, now squeezed between my friend and a man who had installed himself at our booth with a big camera. A small part of me was ready to play devil’s advocate: what did I miss that justified the excessive violence?

“It’s just the way it is now,” said the man, while his friends teased him for interrupting our dinner.

If at the beginning of the crisis in Hong Kong, three months ago, it was hard for some to imagine how a supposedly democratic enclave within authoritarian China and the pride of imperial capitalists everywhere would soon become the undeclared police state it now is, it’s an even bigger challenge to imagine how the political crisis might be resolved without a dramatic redefinition of the relationship between the semi-autonomous territory and Beijing’s dictatorial central government. (...)

At every point of unprecedented escalation in this fight, there has been reason to expect it to be the last. But the unraveling of Hong Kong has fallen on a series of deaf ears: first those of its ineffectual local government, and then the central government in Beijing, who jumped to take out fire hoses on the banging of pots and pans. The unraveling has been lost, too, on some at the office, in the family group chat, on first dates. When you meet someone, the first thing they want to know is, what are you doing this weekend? What does your family think? How do you think this will end? Of course, under any circumstances, what everyone asks is, which side are you on?

Tear gas, water cannons, fire, vandalism, “mobs,” and blood make headlines, but as anyone can tell you about what it’s like to fight for things that have been taken away from them, the real test is in the tedious minutiae of organizing, strategizing, communicating, reading the news, reading the enemy’s news, analyzing, investigating, budgeting, fact-checking, panicking, and waiting. There is, most of all, the waiting. Waiting in line at the ticket machine, waiting for the daily police press conference, waiting for the bus in boycott of the subway, waiting for the “I’m home” text from friends who live in neighborhoods threatened by state-sanctioned thugs, waiting for time to pass during the eighth hour of a three-day airport sit-in, waiting for a news-free hour to pass, waiting for a single sensible response from the government.

What protesters in Hong Kong are up against is more than often absurd, but no longer shocking, even when the small victories won with orthodox protest tactics feel dated by the time the government counteracts them with brute force. Barely a month ago, few would have expected that a peaceful 1.7 million-person rally against police brutality might be the last of its kind legally permitted in Hong Kong. Police had rejected the organizers’ original application for an August 18 march but allowed for an assembly at Victoria Park, the route’s original starting point and a space that fits approximately 100,000. To bend the rule without breaking it—an assembly is not illegal if the crowd is waiting to enter a legal assembly—the protesters came up with a strategy called “orderly flow.” On paper, we were to fill the park to capacity until the crowd inevitably spilled out onto the streets, forcing the original protesters to vacate and make room for newcomers, and creating, in effect, a conveyor belt of protesters on the original marching route.

In practice, it was a slow procession of first making our way through an over-crowded subway station to the park; then sitting around, suffering the blitz of traffic directions over loudspeaker, following the call to slogan chants and cringing when the response waned; then standing around, waiting for the crowd to move, the downpour to stop, the march to begin. I thought of Renata Adler’s report from a 1966 Mississippi Black Power march:
Perhaps the reason for the disproportionate emphasis on divisive issues during the march was that civil-rights news—like news of any unified, protracted struggle against injustice—becomes boring. One march, except to the marchers, is very like another. Tents, hot days, worried nights, songs, rallies, heroes, villains, even tear gas and clubbings—the props are becoming stereotyped.
For the ordeal that Hong Kong’s protesters have put themselves through, there are no traditionally understood rewards to speak of: no recognition of individual heroism, no big checks, no promised land. Even moral superiority grows stale in political warfare. The paradox of this insurgence is that the protests have been driven by defensive necessity rather than choice. It is those who don’t come out for whatever reason—because they support unquestioned authority, because they don’t think they have a stake in politics, because they believe this is other people’s fight for other people’s futures, because they have more important interests to protect—who are really choosing.

Around me at the Victoria Park assembly were middle-aged couples, families with young children, fashionable girls leading the chants with their perky pubescent voices. As someone crowd-averse and by nature suspicious of mass emotion, I tend to recoil at these engines for rousing collective morale. If protesting against the tyranny of an unaccountable government is the noble thing to do, am I supposed to feel invigorated, exhilarated, or at least enthusiastic? What is the correct emotional expression of a moral duty? Nonetheless, my reaction to these call-and-responses has become Pavlovian: whenever I hear “Hong Konger,” my brain completes, “add oil.” “Liberate Hong Kong”—“Revolution of our times.”

by Jaime Chu, The Baffler | Read more:
Image: Studio Incendo

Wednesday, September 25, 2019

Artificial Intelligence Takes On Earthquake Prediction

In May of last year, after a 13-month slumber, the ground beneath Washington’s Puget Sound rumbled to life. The quake began more than 20 miles below the Olympic mountains and, over the course of a few weeks, drifted northwest, reaching Canada’s Vancouver Island. It then briefly reversed course, migrating back across the U.S. border before going silent again. All told, the monthlong earthquake likely released enough energy to register as a magnitude 6. By the time it was done, the southern tip of Vancouver Island had been thrust a centimeter or so closer to the Pacific Ocean.

Because the quake was so spread out in time and space, however, it’s likely that no one felt it. These kinds of phantom earthquakes, which occur deeper underground than conventional, fast earthquakes, are known as “slow slips.” They occur roughly once a year in the Pacific Northwest, along a stretch of fault where the Juan de Fuca plate is slowly wedging itself beneath the North American plate. More than a dozen slow slips have been detected by the region’s sprawling network of seismic stations since 2003. And for the past year and a half, these events have been the focus of a new effort at earthquake prediction by the geophysicist Paul Johnson.

Johnson’s team is among a handful of groups that are using machine learning to try to demystify earthquake physics and tease out the warning signs of impending quakes. Two years ago, using pattern-finding algorithms similar to those behind recent advances in image and speech recognition and other forms of artificial intelligence, he and his collaborators successfully predicted temblors in a model laboratory system — a feat that has since been duplicated by researchers in Europe.

Now, in a paper posted this week on the scientific preprint site arxiv.org, Johnson and his team report that they’ve tested their algorithm on slow slip quakes in the Pacific Northwest. The paper has yet to undergo peer review, but outside experts say the results are tantalizing. According to Johnson, they indicate that the algorithm can predict the start of a slow slip earthquake to “within a few days — and possibly better.”

“This is an exciting development,” said Maarten de Hoop, a seismologist at Rice University who was not involved with the work. “For the first time, I think there’s a moment where we’re really making progress” toward earthquake prediction.

Mostafa Mousavi, a geophysicist at Stanford University, called the new results “interesting and motivating.” He, de Hoop, and others in the field stress that machine learning has a long way to go before it can reliably predict catastrophic earthquakes — and that some hurdles may be difficult, if not impossible, to surmount. Still, in a field where scientists have struggled for decades and seen few glimmers of hope, machine learning may be their best shot. (...)

More than a decade ago, Johnson began studying “laboratory earthquakes,” made with sliding blocks separated by thin layers of granular material. Like tectonic plates, the blocks don’t slide smoothly but in fits and starts: They’ll typically stick together for seconds at a time, held in place by friction, until the shear stress grows large enough that they suddenly slip. That slip — the laboratory version of an earthquake — releases the stress, and then the stick-slip cycle begins anew.

When Johnson and his colleagues recorded the acoustic signal emitted during those stick-slip cycles, they noticed sharp peaks just before each slip. Those precursor events were the laboratory equivalent of the seismic waves produced by foreshocks before an earthquake. But just as seismologists have struggled to translate foreshocks into forecasts of when the main quake will occur, Johnson and his colleagues couldn’t figure out how to turn the precursor events into reliable predictions of laboratory quakes. “We were sort of at a dead end,” Johnson recalled. “I couldn’t see any way to proceed.”

At a meeting a few years ago in Los Alamos, Johnson explained his dilemma to a group of theoreticians. They suggested he reanalyze his data using machine learning — an approach that was well known by then for its prowess at recognizing patterns in audio data.

Together, the scientists hatched a plan. They would take the roughly five minutes of audio recorded during each experimental run — encompassing 20 or so stick-slip cycles — and chop it up into many tiny segments. For each segment, the researchers calculated more than 80 statistical features, including the mean signal, the variation about that mean, and information about whether the segment contained a precursor event. Because the researchers were analyzing the data in hindsight, they also knew how much time had elapsed between each sound segment and the subsequent failure of the laboratory fault.
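[ed. For the technically curious, here’s a rough sketch of what that segmenting-and-featurizing step might look like in Python. The raw signal is assumed to be a 1-D NumPy array with a known failure time; the window size and feature choices are illustrative stand-ins, not the Los Alamos team’s actual pipeline.]

import numpy as np
import pandas as pd

def extract_features(signal, fail_time, sample_rate, window_sec=0.01):
    """Chop the acoustic signal into short segments and summarize each one."""
    win = int(sample_rate * window_sec)
    rows = []
    for start in range(0, len(signal) - win, win):
        seg = signal[start:start + win]
        t = start / sample_rate
        rows.append({
            "mean": seg.mean(),                  # mean signal level
            "variance": seg.var(),               # fluctuation about the mean
            "max_amplitude": np.abs(seg).max(),  # crude proxy for precursor events
            "kurtosis": pd.Series(seg).kurtosis(),
            "time_to_failure": fail_time - t,    # label, known only in hindsight
        })
    return pd.DataFrame(rows)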

Armed with this training data, they used what’s known as a “random forest” machine learning algorithm to systematically look for combinations of features that were strongly associated with the amount of time left before failure. After seeing a couple of minutes’ worth of experimental data, the algorithm could begin to predict failure times based on the features of the acoustic emission alone.
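[ed. A minimal sketch of that training step, using scikit-learn’s RandomForestRegressor as a stand-in for the team’s random forest. The synthetic feature table below substitutes for real laboratory data, and the hyperparameters are placeholders; the point is only the shape of the workflow.]

import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

# Stand-in for a real feature table: a variance that creeps upward as
# failure approaches, plus uninformative noise features.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "mean": rng.normal(size=n),
    "variance": np.linspace(0.1, 2.0, n) + rng.normal(scale=0.05, size=n),
    "max_amplitude": rng.normal(size=n),
    "time_to_failure": np.linspace(10.0, 0.0, n),  # seconds until the slip
})

features = df.drop(columns="time_to_failure")
labels = df["time_to_failure"]

# Train on the first part of the run and predict on the rest, with no
# shuffling, to mimic forecasting forward in time.
split = int(n * 0.75)
model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(features[:split], labels[:split])

predicted_seconds_left = model.predict(features[split:])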

Johnson and his co-workers chose a random forest algorithm to predict the time before the next slip in part because — compared with neural networks and other popular machine learning algorithms — random forests are relatively easy to interpret. A random forest works essentially like a collection of decision trees in which each branch splits the data set according to some statistical feature. The trees thus preserve a record of which features the algorithm used to make its predictions — and of the relative importance of each feature in helping it arrive at those predictions.
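[ed. That interpretability is exposed directly by common implementations. Continuing the sketch above, the trained forest reports a per-feature importance score:]

importances = pd.Series(model.feature_importances_, index=features.columns)
print(importances.sort_values(ascending=False))  # in the toy data, "variance" wins by construction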

When the Los Alamos researchers probed those inner workings of their algorithm, what they learned surprised them. The statistical feature the algorithm leaned on most heavily for its predictions was unrelated to the precursor events just before a laboratory quake. Rather, it was the variance — a measure of how the signal fluctuates about the mean — and it was broadcast throughout the stick-slip cycle, not just in the moments immediately before failure. The variance would start off small and then gradually climb during the run-up to a quake, presumably as the grains between the blocks increasingly jostled one another under the mounting shear stress. Just by knowing this variance, the algorithm could make a decent guess at when a slip would occur; information about precursor events helped refine those guesses.

The finding had big potential implications. For decades, would-be earthquake prognosticators had keyed in on foreshocks and other isolated seismic events. The Los Alamos result suggested that everyone had been looking in the wrong place — that the key to prediction lay instead in the more subtle information broadcast during the relatively calm periods between the big seismic events.

To be sure, sliding blocks don’t begin to capture the chemical, thermal and morphological complexity of true geological faults. To show that machine learning could predict real earthquakes, Johnson needed to test it out on a real fault. What better place to do that, he figured, than in the Pacific Northwest?

by Ashley Smart, Quanta Magazine |  Read more:
Image: Race Jones, Outlive Creative

What If We Stopped Pretending?

“There is infinite hope,” Kafka tells us, “only not for us.” This is a fittingly mystical epigram from a writer whose characters strive for ostensibly reachable goals and, tragically or amusingly, never manage to get any closer to them. But it seems to me, in our rapidly darkening world, that the converse of Kafka’s quip is equally true: There is no hope, except for us.

I’m talking, of course, about climate change. The struggle to rein in global carbon emissions and keep the planet from melting down has the feel of Kafka’s fiction. The goal has been clear for thirty years, and despite earnest efforts we’ve made essentially no progress toward reaching it. Today, the scientific evidence verges on irrefutable. If you’re younger than sixty, you have a good chance of witnessing the radical destabilization of life on earth—massive crop failures, apocalyptic fires, imploding economies, epic flooding, hundreds of millions of refugees fleeing regions made uninhabitable by extreme heat or permanent drought. If you’re under thirty, you’re all but guaranteed to witness it.

If you care about the planet, and about the people and animals who live on it, there are two ways to think about this. You can keep on hoping that catastrophe is preventable, and feel ever more frustrated or enraged by the world’s inaction. Or you can accept that disaster is coming, and begin to rethink what it means to have hope.

Even at this late date, expressions of unrealistic hope continue to abound. Hardly a day seems to pass without my reading that it’s time to “roll up our sleeves” and “save the planet”; that the problem of climate change can be “solved” if we summon the collective will. Although this message was probably still true in 1988, when the science became fully clear, we’ve emitted as much atmospheric carbon in the past thirty years as we did in the previous two centuries of industrialization. The facts have changed, but somehow the message stays the same.

Psychologically, this denial makes sense. Despite the outrageous fact that I’ll soon be dead forever, I live in the present, not the future. Given a choice between an alarming abstraction (death) and the reassuring evidence of my senses (breakfast!), my mind prefers to focus on the latter. The planet, too, is still marvelously intact, still basically normal—seasons changing, another election year coming, new comedies on Netflix—and its impending collapse is even harder to wrap my mind around than death. Other kinds of apocalypse, whether religious or thermonuclear or asteroidal, at least have the binary neatness of dying: one moment the world is there, the next moment it’s gone forever. Climate apocalypse, by contrast, is messy. It will take the form of increasingly severe crises compounding chaotically until civilization begins to fray. Things will get very bad, but maybe not too soon, and maybe not for everyone. Maybe not for me.

Some of the denial, however, is more willful. The evil of the Republican Party’s position on climate science is well known, but denial is entrenched in progressive politics, too, or at least in its rhetoric. The Green New Deal, the blueprint for some of the most substantial proposals put forth on the issue, is still framed as our last chance to avert catastrophe and save the planet, by way of gargantuan renewable-energy projects. Many of the groups that support those proposals deploy the language of “stopping” climate change, or imply that there’s still time to prevent it. Unlike the political right, the left prides itself on listening to climate scientists, who do indeed allow that catastrophe is theoretically avertable. But not everyone seems to be listening carefully. The stress falls on the word theoretically.

Our atmosphere and oceans can absorb only so much heat before climate change, intensified by various feedback loops, spins completely out of control. The consensus among scientists and policy-makers is that we’ll pass this point of no return if the global mean temperature rises by more than two degrees Celsius (maybe a little more, but also maybe a little less). The I.P.C.C.—the Intergovernmental Panel on Climate Change—tells us that, to limit the rise to less than two degrees, we not only need to reverse the trend of the past three decades. We need to approach zero net emissions, globally, in the next three decades.

This is, to say the least, a tall order. It also assumes that you trust the I.P.C.C.’s calculations. New research, described last month in Scientific American, demonstrates that climate scientists, far from exaggerating the threat of climate change, have underestimated its pace and severity. To project the rise in the global mean temperature, scientists rely on complicated atmospheric modelling. They take a host of variables and run them through supercomputers to generate, say, ten thousand different simulations for the coming century, in order to make a “best” prediction of the rise in temperature. When a scientist predicts a rise of two degrees Celsius, she’s merely naming a number about which she’s very confident: the rise will be at least two degrees. The rise might, in fact, be far higher.
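[ed. A toy illustration, in Python, of the statistical point here (not real climate science): run many simulations with uncertain inputs, and the number you can state with confidence is a low percentile of the resulting spread, not the middle of it. All numbers below are invented for the sketch.]

import numpy as np

rng = np.random.default_rng(0)
n_runs = 10_000

# Pretend each run draws an uncertain climate sensitivity and emissions path.
sensitivity = rng.normal(loc=3.0, scale=0.8, size=n_runs)  # deg C, made up
emissions_factor = rng.uniform(0.8, 1.2, size=n_runs)      # made up
warming = sensitivity * emissions_factor                   # toy "simulated" rise

# The cautious, quotable number: the rise exceeds this in 95% of runs.
confident_floor = np.percentile(warming, 5)
print(f"at least {confident_floor:.1f} C in 95% of runs; "
      f"median run: {np.median(warming):.1f} C")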

As a non-scientist, I do my own kind of modelling. I run various future scenarios through my brain, apply the constraints of human psychology and political reality, take note of the relentless rise in global energy consumption (thus far, the carbon savings provided by renewable energy have been more than offset by consumer demand), and count the scenarios in which collective action averts catastrophe. The scenarios, which I draw from the prescriptions of policy-makers and activists, share certain necessary conditions.

The first condition is that every one of the world’s major polluting countries institute draconian conservation measures, shut down much of its energy and transportation infrastructure, and completely retool its economy. According to a recent paper in Nature, the carbon emissions from existing global infrastructure, if operated through its normal lifetime, will exceed our entire emissions “allowance”—the further gigatons of carbon that can be released without crossing the threshold of catastrophe. (This estimate does not include the thousands of new energy and transportation projects already planned or under construction.) To stay within that allowance, a top-down intervention needs to happen not only in every country but throughout every country. Making New York City a green utopia will not avail if Texans keep pumping oil and driving pickup trucks.

The actions taken by these countries must also be the right ones. Vast sums of government money must be spent without wasting it and without lining the wrong pockets. Here it’s useful to recall the Kafkaesque joke of the European Union’s biofuel mandate, which served to accelerate the deforestation of Indonesia for palm-oil plantations, and the American subsidy of ethanol fuel, which turned out to benefit no one but corn farmers.

Finally, overwhelming numbers of human beings, including millions of government-hating Americans, need to accept high taxes and severe curtailment of their familiar life styles without revolting. They must accept the reality of climate change and have faith in the extreme measures taken to combat it. They can’t dismiss news they dislike as fake. They have to set aside nationalism and class and racial resentments. They have to make sacrifices for distant threatened nations and distant future generations. They have to be permanently terrified by hotter summers and more frequent natural disasters, rather than just getting used to them. Every day, instead of thinking about breakfast, they have to think about death.

Call me a pessimist or call me a humanist, but I don’t see human nature fundamentally changing anytime soon. I can run ten thousand scenarios through my model, and in not one of them do I see the two-degree target being met.

by Jonathan Franzen, New Yorker |  Read more:
Image: Leonardo Santamaria
[ed. Mr. Franzen has taken a lot of crap for this article (defeatist!), but does anyone dispute his characterization of human nature and short-sighted self-interest? See also: Death on the Beach (The Baffler).]

Amazon’s Allbirds Clone Shows Its Relentless Steamrolling of Brands

In its pursuit of being “the Everything Store,” Amazon has been known to copy popular items and sell them itself for cheaper.

Allbirds now appears to be the latest target. The company bills its environmentally friendly footwear as “the world’s most comfortable shoes,” and the sneakers have been unofficially recognized as part of the Silicon Valley tech worker and entrepreneur uniform. The five-year-old direct-to-consumer shoe startup has been valued at $1.4 billion and doesn’t sell its goods on Amazon.

Now the Amazon-brand 206 Collective Men’s Galen Wool Blend Sneakers bear a striking resemblance to Allbirds’ popular Wool Runners, at a much lower price: while Allbirds sells its shoe for $95, the Amazon version is priced at $45. The shoe appears to be newly released, with the first customer review dating to Sept. 19. Amazon declined to comment.

Accumulating data on sales history and customer shopping patterns, the online retailer can swiftly turn out copies of existing products at much lower prices. For years, Amazon has aggressively cut out the middleman to make more profit. Since launching AmazonBasics in 2009, the company has reportedly grown to 135 house brands selling everything from batteries to everyday household goods, and it often makes products very similar to its best sellers. The Instant Pot, for example, has been a hit on Amazon, and for more than a year the company has sold an AmazonBasics clone of it.

The site has also adjusted its search system to more prominently feature listings that are more profitable for Amazon, according to a recent Wall Street Journal report. Amazon denied that report.

by Michelle Cheng, Quartz |  Read more:
Image: via
[ed. I love my Allbirds. See also: Allbirds calls out Amazon for its unsustainable knockoff of its sneakers (Quartz).]

The Avett Brothers

Tuesday, September 24, 2019