Wednesday, October 2, 2019

How Bill Clinton and American Financiers Armed China

It’s the 70th anniversary of the People’s Republic of China, which Xi Jinping is celebrating with aggressive rhetoric and a militaristic display of his ICBMs that can strike at the U.S. in 30 minutes. So today I’m going to write about how American aerospace monopolists, dumb Pentagon procurement choices, and the Bill Clinton administration helped create the Chinese missile threat we are now confronting.

In August of 1994, Bill Clinton’s Secretary of Commerce, Ron Brown, flew to China to try to seal two deals for American corporations. The first was to enable Chrysler to build minivans in China, and the second was to get the Chinese to buy 40 MD-90 “Trunkliner” aircraft from McDonnell Douglas.

The McDonnell Douglas deal was particularly important to the Clinton administration for a number of reasons. The company was dying; it was badly run by financiers who lacked an appreciation for quality production. More importantly, it had lost a key military contract for the F-22 in 1986, so the government felt an obligation to find customers to prop it up. There was also politics, with Bill Clinton trying to honor his unofficial 1992 campaign slogan, “it’s the economy, stupid.” Clinton would indeed hail the deal on the eve of the 1994 midterm election.

The Chinese agreed to buy the planes, but with one caveat. They wanted a side deal: McDonnell Douglas should sell a mysterious company called the China National Aero Technology Import and Export Corporation (CATIC) a set of specialist machine tools, stashed in a factory in Columbus, Ohio, that are used to shape and bend aircraft parts.

When Chinese representatives went to Columbus, the workers wouldn’t let them see the tools, realizing they would lose their jobs if the tools were sold to the Chinese. The Chinese then sent a letter to the corporation saying that the deal for the Trunkliners was at a stalemate, but if the machine tools were sold to CATIC, well, that would have a “big influence” on whether McDonnell Douglas could close the deal on the planes.

It wasn’t just the workers who caused problems. The government could have been a hurdle for McDonnell Douglas as well, because these weren’t just any old machine tools. “According to military experts,” reported the New York Times, “the machines would enable the Chinese military to improve significantly the performance abilities -- speed, range and maneuverability -- of their aircraft. And if diverted, they could do the same for missiles and bombers.” Selling the tools wasn’t just a commercial deal; the machining equipment was subject to export controls for sensitive national security technology.

It was an insane idea, selling the Chinese government this important machining capacity. The Pentagon protested vehemently, as did Republican Congresswoman Tillie Fowler, who was on the Armed Services Committee. Fowler said allowing the transfer reflected an “emphasis on short-term gain at the expense of national security and long-term economic gain.” And yet that’s what McDonnell Douglas sought, and what the Clinton administration pushed through. The Commerce Department cleared the deal, in return for a pledge (or behavioral remedy) that China would not use the tools to build missiles, but would dedicate them to a civilian aircraft machine tool center in Beijing.

McDonnell Douglas basically knew the behavioral remedies were fraudulent almost immediately; one of the most important pieces of equipment was shipped not to Beijing but directly to a Nanchang military plant. It wasn’t just McDonnell Douglas who understood the con; Clinton officials had the details of the deal, and let it go through anyway. Why? They used the same excuses we hear today - competitiveness and a fear of offending China. Here’s the NYT explaining what happened.
“American officials want to avoid sending any signals that would fuel China's belief that the United States is trying to ‘contain’ China's power, militarily or economically. And they know that if they deny a range of industrial technology to China, other competitors -- chiefly France and Germany -- are ready to leap in and fill the void.”
China never honored the overall deal. By 1999, China had acquired only one of the 20 promised Trunkliner airplanes. And three years later, the Federal government indicted McDonnell Douglas for “conspiracy, false statements and misrepresentations in connection with a 1994 export license to sell 13 pieces of machining equipment to China.” The government also went after the Chinese company.

Still, this was too little, too late. The episode was by any metric catastrophic; the Chinese government got missile-making machine tools in return for a promise it didn’t honor, which should have been a massive scandal, borderline treason. But ultimately it wasn’t a scandal, because Republicans, leading globalization thinkers, and Clinton Democrats decided that transferring missile technology to China didn’t matter.

Remember, during this entire period, Bill Clinton pressed aggressively to open up the U.S. industrial base to Chinese offshoring. And toward the end of the Clinton administration, McDonnell Douglas, as we all now know, merged with Boeing, and that merger ended up destroying the capacity of Boeing - by then the sole American maker of large civilian aircraft - to manufacture safe civilian planes.

How Bill Clinton Made the Worst Strategic Decisions in American History

Chinese power today is a result of a large number of incidents similar to this one, the wholesale transfer of know-how, technology, and physical stuff from American communities to Chinese ones. And the confused politics around China is a result of the failure of the many policymaking elites who participated in such rancid episodes, and are embarrassed about it. As we peer at an ascendant and dangerous China, it makes sense to look back at how Clinton thought about the world, and why he would engage in such a foolish strategy.

Broadly speaking, there were two catastrophic decisions Clinton made in 1993 that ended up eroding the long-term American defense posture. The first was to radically break from the post-World War II trading system, which was organized around free trade of goods and services among democratic nations, along with somewhat restricted financial capital flows. He did this by passing NAFTA, by bailing out Mexico and thus American banks, by creating the World Trade Organization, and by opening the United States and China to each other as deep commercial partners.

The Clinton framework gutted the ability of U.S. policymakers to protect industrial power, and empowered Wall Street and foreign officials to force the U.S. to export its industrial base abroad, in particular to China. The radicalism of the choice was in the intertwining of the U.S. industrial base with an autocratic strategic competitor. During the Cold War, we had never relied on the USSR for key inputs, and basically didn’t trade with them. Now, we would deeply integrate our technology and manufacturing with an enemy (and yes, the Chinese leaders saw and currently still see us as enemies).

The second choice was to reorganize the American defense industrial base, ripping out contracting rules and consolidating power into the hands of a small group of defense giants. In the early 1990s, as part of the ‘reinventing government’ initiative, the Clinton team sought to radically empower private contractors in the government procurement process. This new philosophy was most significant when it hit the military, a process led by William Perry.
In 1993, Defense Department official William Perry gathered the CEOs of top defense contractors and told them that they would have to merge into larger entities because of reduced Cold War spending. “Consolidate or evaporate,” he said at what became known as “The Last Supper” in military lore. Former secretary of the Navy John Lehman noted that “industry leaders took the warning to heart.” They reduced the number of prime contractors from 16 to six; subcontractor mergers quadrupled from 1990 to 1998. The Clinton team also loosened rules on sole source—i.e. monopoly—contracts, and slashed the Defense Logistics Agency, with the result that thousands of employees with deep knowledge of defense contracting left the public sector.
Perry was a former merger specialist who fetishized expensive technology in weapons systems. But what Perry was doing was part of an overall political deal. In the 1980s, the Reagan administration radically raised defense spending. Democrats went along with the spending boost, on condition that they get to write the contracting rules. So while the Reagan build-up was big and corrupt, it was not unusually corrupt. When Clinton came into office, his team asked defense contractors how to make them happy in an environment of stagnant or reduced defense spending. The answer was simple: raise their margins. The merger wave and sole-source contracting were the result.

The empowering of finance-friendly giant contractors bent the bureaucracies toward seeing only global capital flows, not the flow of stuff or the ability to produce. This was already how most Clinton administration officials saw the world. They just assumed, wrongly, that stuff moves around the world without friction, and that American corporations operate in a magic fairy tale where practical problems are solved by finance and this thing called ‘the free market.’ In their Goldman, McKinsey and Boston Consulting Group-ified haze of elitist disdain for actually making and doing real things, they didn’t notice or care that the Chinese Communist Party was centralizing production in China. They just assumed that Chinese production was ‘the free market’ at work, instead of a careful, state-sponsored effort by Chinese bureaucrats to build strategic military and economic power.

Part of this myopia was straightforward racism, an inability to imagine that a non-white country could topple Western power. Part of it was greed, as Chinese money poured into the coffers of Bush-era and Clinton-era officials, as well as private equity barons. This spigot of cash continued through the Bush and Obama administrations.

by Matt Stoller, BIG |  Read more:
Image: uncredited

Toshihiko Okuya, study #127
via:

Lerson
via:

Beachheads and Obstacles

Apple and Google may be the first companies people think of when you ask who won mobile, but Amazon and Facebook were not far behind.

Amazon spent the smartphone era not only building out Amazon.com, but also Amazon Web Services (AWS). AWS was just as much a critical platform for the smartphone revolution as were iOS and Android: many apps ran on the phone with data or compute on Amazon’s cloud; mobile also created a vacuum in the enterprise for SaaS companies eager to take advantage of Microsoft’s desire to prop up its own mobile platforms instead of supporting iOS and Android, and those SaaS companies were built on AWS.

Smartphones, meanwhile, saved Facebook from itself: instead of a futile attempt to be a platform within the browser, mobile made Facebook just an app, and it was the best possible thing that could have happened to the company. Facebook was freed to focus solely on content its users wanted and advertising to go along with it, generating billions of dollars and a deep moat in targeting advertising along the way.

What is not clear is if Amazon’s and Facebook’s management teams agree. After all, both launched smartphones of their own, and both failed spectacularly.

Facebook’s attempt was rather half-assed (to use the technical term). Instead of writing its own operating system, Facebook made Home, a launcher that sat on top of Android; instead of designing its own hardware, it let HTC build the phone, the HTC First. Both decisions ended up being good ones because they made failure less expensive.

Amazon, meanwhile, went all out to build the Fire Phone: a new operating system (based on Android, but incompatible with it) and new hardware, including a complicated camera system with four front-facing cameras, plus a sky-high price to match. It fared about as well as the HTC First, which is to say not well at all.

That, though, is what made last week’s events so interesting: it is these two failures that seemed to play a bigger role in what was announced than did the successes.

Amazon and Facebook’s Announcements

Start with Amazon: the company announced a full fifteen hardware products. In order: Echo Dot with Clock, a new Echo, Echo Studio (an Echo with a high-end speaker system), Echo Show 8 (a third size of Amazon’s screen-equipped Echo Show), Echo Glow (a lamp), new Eero routers, Echo Flex (a microphone-only Echo that hangs off an outlet), Ring Retrofit Alarm Kit (which lets you leverage a preinstalled alarm system), Ring Stick Up Cam (a smaller Ring camera), Ring Indoor Cam (an even smaller Ring camera), Amazon Smart Oven (an oven that integrates with Alexa), Fetch (a pet tracker), Echo Buds (wireless headphones with Alexa), Echo Frames (eyeglasses with Alexa), and Echo Loop (a ring with Alexa). Whew!

This is an approach that is the exact opposite of the Fire Phone: instead of pouring all of its resources into one high-priced device, Amazon is making just about every device it can think of, and seeing if they sell. Moreover, they are doing so at prices that significantly undercut the competition: the Echo Studio is $150 cheaper than a HomePod, the Echo Show 8 is $60 cheaper than the Google Nest Hub, and the new Eero is $150 cheaper than the product Eero sold as an independent company. Amazon is clearly pushing for ubiquity; a whale strategy this is not.

Facebook, meanwhile, effectively consolidated its Oculus product line from three devices to one: the mid-tier Oculus Quest, a standalone virtual reality (VR) unit, gained the ability to connect to a gaming PC in order to play high-end Oculus Rift games; Oculus Go apps, meanwhile, became able to run on the relatively higher-specced Oculus Quest. It is not clear why either the Go or the Rift should be a target for developers or customers going forward.

The broader goal, though, remains the same: Facebook is determined to own a platform; the lesson the company seems to have drawn from its smartphone experience is the importance of doing it all.

Beachheads and Obstacles

What Amazon and Facebook do have in common — and perhaps this is why both seem to look back at their very successful smartphone eras with regret — is that Apple and Google are their biggest obstacles to success, and it’s because of their smartphone platforms.

Amazon to its great credit — and perhaps because the company did not have a smartphone to rely on — found a beachhead in the home, the one place where your phone may not be with you. Now it is trying to not only saturate the home but also extend beyond it, both through on-body accessories and also an expanding number of deals with automakers.

Facebook, meanwhile, is searching for a beachhead of its own in virtual reality. That, the company believes, will give it the inside track to augmented reality, and by extension, usefulness in the real world.

Amazon’s challenge is Google: Android phones are already everywhere, and Google is catching up in the home more quickly and more effectively than Amazon is pushing outside of it. Google also has a much stronger position when it comes to the sort of Internet services that provide the rough grist of intelligence of virtual assistants: emails, calendars, and maps.

Facebook, meanwhile, is ultimately challenging Apple: augmented reality is going to start at the high end with an integrated solution, and Apple has considerably more experience building physical products for the real world, and a major lead in chip design and miniaturization, not to mention consumer trust. Moreover, while there is obviously technical overlap when it comes to creating virtual reality and augmented reality headsets, the product experience is fundamentally distinct.

by Ben Thompson, Stratechery |  Read more:
Image: uncredited

How to Set Your Google Data to Self-Destruct

Last year you may have been addicted to Beyoncé. But nowadays you’re more into Lizzo. You also once went through a phase of being obsessed with houseplants, but have lately gotten into collecting ballpoint pens.

People’s tastes and interests change. So why should our Google data histories be eternal?

For years, Google has kept a record of our internet searches by default. The company hoards that data so it can build detailed profiles on us, which helps it make personalized recommendations for content but also lets marketers better target us with ads. While there have been tools we can use to manually purge our Google search histories, few of us remember to do so.

So I’m recommending that we all try Google’s new privacy tools. In May, the company introduced an option that lets us automatically delete data related to our Google searches, requests made with its virtual assistant and our location history.

On Wednesday, Google followed up by expanding the auto-delete ability to YouTube. In the coming weeks, it will begin rolling out a new private mode for when you’re navigating to a destination with its Google Maps app, which could come in handy if you’re going somewhere you want to keep secret, like a therapist’s office.

“All of this work is in service of having a great user experience,” Eric Miraglia, Google’s data protection officer, said about the new privacy features. “Part of that experience is, how does the user feel about the control they have?”

How do we best use Google’s new privacy tools? The company gave me a demonstration of the newest controls this week, and I tested the tools that it released earlier this year. Here’s what to know about them.

by Brian X. Chen, NY Times |  Read more:
Image: Glenn Harvey

10 Hours of Cats Purring


[ed. There's a NY Times article out today about ASMR "boyfriends" that help you get to sleep: What Does Having a Boyfriend Have to Do With Sleep? Despite the overall creepiness of the story and the actual genre itself, the quote that got me was:
“My family is so supportive,” he said. “They thought it was cool I could get that many subscribers from whispering into a microphone.” And those subscribers mean views, which means money. But just how much? 
“For every 1,000 views, I make $3,” he said without a hint of braggadocio. The views on his role-play videos range from 155,000 to two million. “You can do the math.”
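Doing that math: at $3 per 1,000 views, those role-play videos would bring in roughly $465 at the low end and about $6,000 at the high end -- per video.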
Wow. A quick curious look down the ASMR rabbit hole revealed all types of videos, even 10 hours of cats purring. What a world (and distracting waste of time). It would be interesting to play one of these with a cat on your lap and see what happens. Another interesting fact: did you know that Bob Ross (of DIY painting fame) is considered by many to be the godfather of ASMR?]

My Astounding And Yet Not At All Unusual Day In Culture

9:00 a.m.: Wake from a dream in which François Villon and I are sharing a dream about Susan Sontag making out with Simone Weil. Hot. Yawn artfully. Make vain attempt to free my left arm, which is trapped beneath the slumbering bosom of “Marguerite,” whom I met last night at a party in Soho for a Hungarian rotogravure artist/DJ. No success.

9:02 a.m.: Ask my wife if she could perhaps assist. Merci!

9:25 a.m.: Post toilette, begin my morning perusal of the Berlin papers. Sigh over inadequate coverage of my friend Gerhard’s production of Michel de Ghelderode’s Red Magic, which he has staged entirely in ecru. Philistines, the Germans. I will have to write stern letters to several editors. Possibly using my ostrich quill.

10:15 a.m.: Sex, hastily, then beignets.

T.S. Eliot
10:30 a.m.: Prepare to enter my “writing mode.” Place one hand on a dictionary originally owned by T.S. Eliot (a fortune at auction, but worth it!). Place the other hand on a bathrobe belonging to Hart Crane. Place my feet in a laundry hamper thought to have been briefly in the possession of James Merrill’s dentist. Soak it in.

11:00 a.m.: Begin sketching thoughts about John Ashbery’s translation of Rimbaud into my Moleskine notebook. Ostrich quill? Oui. Oui indeed.

12:30 p.m.: Gaze poetically heavenward while sharing a light lunch of organic pearl onions and filet of local cassowary with James Franco and Harold Bloom at the Yale Club. Franco gets a little tipsy and punches a waiter while shouting something about “Twitter” (possibly “water” or “mother”; his enunciation was suffering). Waiter out cold. I cover waiter with my favorite made-to-measure ascot and flee.

2:35 p.m.: Sex, hastily, then petit-fours.

3:00 p.m.: Drinks in Alphabet City with Greta, a Norwegian tea sculptor and amateur horticulturist whose great-grandfather invented the meatball. We agree that the state of Danish cinema is dire. Adrien Brody is seated beside us, and I deliberately order a Stella while smirking.

4:00 p.m.: Sex, hastily, then meatballs.

4:20 p.m.: Realize I’m a bit drunk. Decide to call on my friend Laurence, a philosopher cum structural engineer whose father invented the ounce. We debate the merits of capitalism in light of Dior’s recent scandals and the existence of Canada. I collapse on a settee and accidentally write three erotic short stories that will be falsely attributed to Michel Houellebecq by Le Monde.

6:30 p.m.: Realize that I am still a bit drunk. Realize that realizing that one is drunk is… banal? Yet what is banality but the infinite white space of sobriety? Write this down in Moleskine notebook for possible publication in N+1.

6:39 p.m.: Send text to Lorin Stein, editor of The Paris Review: “heymrfancyshrts.”

6:40 p.m.: Immediately regret text.

6:41 p.m.: Send text to Lorin Stein: “sorry mrfancyshrts.”

6:42 p.m.: Throw phone away.

6:43 p.m.: Retrieve phone and send text to James Franco that reads, in its entirety, “what.”

6:46 p.m.: Pre-prandial drinks with Joyce Carol Oates and Meghan O’Rourke. Both wearing black.

8:00 p.m.: Dinner with Jonathan Franzen in his private arboretum. Franzen sporting blindfold again, has trouble with fork. Awkward scene involving prawns.

9:30 p.m.: Sex, hastily, then slightly bloody shrimp cocktail.

10:00 p.m.: Attend Wallace Shawn’s latest play, Yes, I Was in ‘The Princess Bride’ but my Dad Edited The New Yorker and My Plays are Huge in Europe, Also Remember ‘My Dinner with Andre,’ Which You Probably Haven’t Seen But Feel Vaguely that You Should Have, and Yes, You Should Have.

12:00 a.m.: Participate in standing ovation.

12:05 a.m.: Standing ovation still going on.

12:08 a.m.: Sex, hastily, then leg cramps.

1:00 a.m.: Post-play drinks with two matadors, Gore Vidal, a team of Belgian weightlifters, the last man to see John Berryman alive, and Peter Singer. Tense moment between matadors and Singer is rescued when Vidal challenges weightlifters to justify Flemish.

2:30 a.m.: Home at last. Fall into a dream in which Villon and I are having a dream about Susan Sontag having a dream about Edmund Wilson’s cat making out with Simone Weil. Wake in terror.

2:35 a.m.: Sex, hastily, then… repose.

by David Orr, The Awl |  Read more:
Image: T.S. Eliot, Wikimedia Commons
[ed. Repost]

Tuesday, October 1, 2019

Ingesting a Credit Card's Weight in Plastic Every Week

Globally, we are ingesting an average of 5 grams of plastic every week, the equivalent of a credit card, a new study suggests.

This plastic contamination comes from "microplastics" -- particles smaller than five millimeters -- which are making their way into our food, drinking water and even the air.

Around the world, people ingest an average of around 2,000 microplastic particles a week, according to the study by the University of Newcastle, in Australia.

These tiny particles can originate from a variety of sources, including artificial clothes fibers, microbeads found in some toothpastes, or bigger pieces of plastic which gradually break into smaller pieces when they're thrown away and exposed to the elements.

They make their way into our rivers and oceans, and can be eaten by fish and other marine animals, ending up as part of the food chain.

Microplastics have been found in many everyday foods and drinks, such as water, beer, shellfish and salt, co-lead researcher Kala Senathirajah told CNN.

"It is very clear that the issue of microplastics is a global one. Even if countries clean up their backyard, it doesn't mean they will be safe as those [microplastic] particles could be entering from other sources," she said.

The largest source of plastic ingestion is drinking water, according to the research, which reviews 52 existing studies to estimate plastic ingestion around the world.

The research was commissioned by the World Wildlife Fund (WWF) for its report "No Plastic in Nature: Assessing Plastic Ingestion from Nature to People."

It found that the average person consumes as many as 1,769 particles of plastic every week just by drinking water -- bottled or from the tap. But there could be large regional variations. It quotes a 2018 study that found twice as much plastic in water in the United States and India as in European or Indonesian tap water.

A separate study this month found that Americans eat, drink and breathe between 74,000 and 121,000 microplastic particles each year, and those who exclusively drink bottled water rather than tap water can add up to 90,000 plastic particles to their yearly total.

by Isabelle Gerretsen, CNN | Read more:
Image: Getty

El Camino: 95 Minutes With Aaron Paul

“I can only show you this ’cause you’ve seen the film,” Aaron Paul says, grinning proudly as he scrolls through hundreds — maybe thousands? — of photos featuring his 19-month-old daughter, Story. He settles on a short video of the two of them taken during a break in the filming of El Camino: A Breaking Bad Movie. The clip finds the actor in full makeup as Jesse Pinkman, the emotionally pulverized, physically lacerated meth-maker. As Paul gently describes to Story the harrowing (and very top-secret) El Camino scene he has just filmed, his daughter gazes at her father’s bruised and grubby face with affection. “She’s totally fine when she sees Jesse’s scars,” Paul says, putting down his phone. “She looks past all of that and right into my eyes.”

It’s less than a month before the release of El Camino, and the 40-year-old Paul is sitting on the back terrace of his Los Feliz home, dressed in a beige linen shirt and matching slacks. A red Radio Flyer wagon — piloted by a lone teddy bear — is parked nearby at the foot of an immense artificial waterfall that cascades down an entire hillside. The nearly 100-year-old estate has been home to several Hollywood dignitaries over the years — including Kareem Abdul-Jabbar, Jim Parsons, and Twilight-era Robert Pattinson — and is anchored by a lush, panoramic garden so large Paul is still sorting through every plant. “When we moved in,” he says, “they gave us two binders of information about running this place.”

Paul and his wife, the anti-bullying-nonprofit founder Lauren Parsekian, took over the estate earlier this year, not long after a family trip to New Mexico. That’s where Paul had spent several months covertly reprising the role of Jesse, last seen in Breaking Bad’s 2013 conclusion. In his final onscreen moments, Jesse plows through the gates of a desert compound in a stolen El Camino, sobbing and howling after having escaped not only his Aryan Brotherhood captors but also the emotional clutches of his mentor turned manipulator, Walter White, played by Bryan Cranston.

For those who’d come to root for Jesse, the send-off felt victorious. But it also left some questions in the balance. Ever since the finale, Paul notes, “people always ask, ‘What happened to Jesse? Is he okay?’ And I’d say, ‘You know as much as I do.’ ”

Written and directed by Breaking Bad creator Vince Gilligan, El Camino, which debuts on Netflix on October 11, begins right after the massacre in the show’s finale, which left Walter dead and tipped off police to the vast drug empire Jesse had helped build. The movie is deeply satisfying on its own, featuring all the twists and pivots of a gnarly, on-the-run thriller; but for Breaking Bad devotees, there’s the added emotional investment of having watched (and worried about) Jesse Pinkman for five seasons. El Camino also pairs Paul with several of his former Breaking Bad cronies, though to name them, or to reveal even the haziest plot points, would violate Netflix’s demands of secrecy. Suffice it to say that many of the show’s hallmarks — revelatory flashbacks, grisly humor, and abrupt violence — are still very much in effect in El Camino.

Still, it gives away nothing to note that the sole focus of El Camino is Jesse Pinkman, whose heartaches and fears were like psychological open wounds made visible through Paul’s fidgety physicality and sad, searching eyes. “I couldn’t be more opposite of that guy,” notes the actor, “other than the fact that I wear my heart on my sleeve. I don’t bury anything.” That on-the-surface rawness made for one of the more intensely symbiotic performances on television (while also earning Paul a trio of Emmys during Breaking Bad’s run). So much so that, in the years after the show ended, Paul himself wondered what had become of his troubled old friend. “He was real to me,” the actor says. “I loved Jesse. I cared for him. I wanted him to be okay.” (...)

It’s possible, of course, that the traits that make Paul such a transfixing TV presence are too nuanced for the big screen: For all of Jesse’s spaz-outs and “bitch”-snaps, Paul carries much of the character’s pain and (minimal) joy in his face — the kind of subtle gestures that work best within the intimacy of a prime-time drama. There’s also the fact that Jesse has never actually gone away, since Breaking Bad is effectively in a state of perpetual reruns on Netflix. Television actors — even those with multiple Emmys — have always struggled to navigate the gulf between TV and film. That’s all the more difficult when your best-known character is being rediscovered on a daily basis.

by Brian Raftery, Vulture | Read more:
Image: Jay L. Clendenin/Los Angeles Times via Contour RA by Getty Images

Monday, September 30, 2019


Michele Mikesell
via:

Al Varlez
via:

Top 20 Acoustic Guitar Intros of All Time


The Backroom Deal That Could’ve Given Us Single-Payer

Back in March 2009, leaks from the White House made it clear that a single-payer health insurance system was “off the table” as an option for health care reform. With that, the President had ruled out the simplest and most obvious reform of the disaster that is US healthcare. Instituting single-payer would have meant putting US health insurance companies out of business and extending the existing Medicare or Medicaid to the entire population. Instead, over the following weeks the outlines of the bloated monstrosity known as Obamacare emerged: an impossibly complicated Rube Goldberg contraption, badly designed, incompetently executed, and one that its intended beneficiaries increasingly seem to hate.

The decision to abandon the nationalization of perhaps the most unpopular companies in the US is correctly attributed to the fundamental conservatism of the Obama White House, and its unwillingness to take on the health insurers, pharmaceutical companies, or any interest group willing and able to spend millions lobbying, hiring former politicians, and donating to campaigns. Obama’s “wimpiness,” his need to always take the path of least resistance, became common tropes among the American left. Obamacare, liberals claim, is the best possible reform that could’ve been wrangled out of the health insurance industry.

But were the many backroom deals that make up Obamacare really an easier alternative to nationalization? A look at the financial details suggests the opposite. In strictly financial terms, nationalization would have been the easiest way forward, costing relatively little and delivering immediate savings while making access to health care truly universal. Politically, Obama could have counted on the support of an unlikely ally of progressive causes: health insurance shareholders, the theoretical owners of those very companies, who would have been relieved of their then-dubious investments with a huge payout.

As of the end of 2008, the private insurance market covered 60 percent of the US population. For-profit insurers accounted for a large and growing share. The top five insurers accounted for 60 percent of the market — all but one of them for-profit companies. Absent a Bolshevik revolution, implementing a single-payer system would have required proper compensation for the owners of these institutions for their loss of future income — shareholders in the case of the for-profit insurers and, allegedly, the supposed policyholders in the case of most non-profits.

How much compensation? Well, in mid-2009, the total market capitalization of four out of the five top health insurers (the fifth is a nonprofit) amounted to about $60 billion. By then, the stock market had already rebounded nicely from the lows of the crisis, and the uncertainty over Obamacare had largely dissipated, so these were not particularly depressed valuations. Extrapolating this valuation to the rest of the health insurers would have put a price tag of about $120 billion on the whole racket.

This means that buying out the entire health insurance industry at an enormously generous premium of, say, 100 percent, would have cost the Treasury $240 billion – about 2 percent of 2009 gross domestic product. And this figure is highly inflated — premiums for buying out well-established companies rarely exceed 50 percent and are usually closer to 20 percent. Also, I am valuing the dubious claims of non-profit policyholders on par with the more vigorously-enforced property rights of for-profit shareholders.

Other than the big smiles on the faces of health insurer shareholders across the country, what would have been the US Treasury’s payoff for writing a $240 billion check? Once again, the numbers are simple, and startling. US private insurance, whether for-profit or otherwise, may well be the most wasteful bureaucracy in human history, making the old Gosplan office look like a scrappy startup by comparison. Estimates of pure administrative waste range anywhere from 0.75 percent to 2.6 percent of total US economic output.

Extrapolating again from the biggest four for-profit insurers, in 2008 the industry as a whole claimed to spend 18.5 percent of the premiums it collected on things other than payments to providers. (The other 81.5 percent, which goes to paying for actual care, is known as the medical loss ratio. Keeping this ratio down is a health insurer CEO’s top priority.) Medicare, by contrast, spends just 2 percent. The difference amounts to $130 billion, to which we must add the compliance costs the private insurers impose on health care providers — $28 billion, according to Health Affairs. The costs incurred by consumers are difficult to measure, although very real to anyone who’s spent an afternoon on the phone with a health insurance rep.

So, to recap, nationalization of the health insurance industry in 2009 would have cost no more (and almost certainly a lot less) than $240 billion. The savings in waste resulting from replacing the health insurance racket with an extension of Medicare would have resulted in no less than $158 billion a year. That’s an annualized return on investment of 66 percent. The entire operation would have paid for itself in less than 18 months, and after that, an eternity of administrative efficiency for free. And, of course, happy shareholders.
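For readers who want to check the arithmetic, here is a minimal back-of-envelope sketch using only the figures quoted above; the dollar amounts are the article's own 2009 estimates, not independent data:

```python
# Back-of-envelope check of the buyout arithmetic above.
# All dollar figures are the article's own 2009 estimates.

market_cap_industry = 120e9     # extrapolated market cap of the whole industry
buyout_premium = 1.00           # a deliberately generous 100% premium
buyout_cost = market_cap_industry * (1 + buyout_premium)       # ~$240 billion

admin_waste = 130e9             # insurer overhead above Medicare's ~2% admin rate
provider_compliance = 28e9      # compliance costs imposed on providers
annual_savings = admin_waste + provider_compliance             # ~$158 billion/year

roi = annual_savings / buyout_cost                  # ~0.66, i.e. ~66% per year
payback_months = buyout_cost / annual_savings * 12  # ~18 months

print(f"Buyout cost:     ${buyout_cost / 1e9:.0f} billion")
print(f"Annual savings:  ${annual_savings / 1e9:.0f} billion")
print(f"Annualized ROI:  {roi:.0%}")
print(f"Payback period:  {payback_months:.1f} months")
```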

by Enrique Diaz-Alvarez, Jacobin | Read more:
Image: uncredited

Pain Patients Get Relief from War on Opioids

Ever since U.S. health authorities began cracking down on opioid prescriptions about five years ago, one vulnerable group has suffered serious collateral damage: the approximately 18 million Americans who have been taking opioids to manage their chronic pain. Pain specialists report that desperate patients are showing up in their offices, after being told by their regular physician, pharmacy or insurer that they can no longer receive the drugs or must shift to lower doses, no matter how severe their condition.

Abrupt changes in dosage can destabilize patients who have relied for many years on opioids, and the consequences can be dire, says Stefan Kertesz, an expert on opioids and addiction at the University of Alabama at Birmingham School of Medicine. “I’ve seen deaths from suicide and medical deterioration after opioids are cut.”

Last week, after roughly three years of intensive lobbying and alarming reports from the chronic pain community, the Food and Drug Administration (FDA) and the Centers for Disease Control and Prevention (CDC) took separate actions to tell clinicians that it is dangerous to abruptly curtail opioids for patients who have taken them long-term for pain. The FDA did so by requiring changes to opioid labels specifically warning about the risks of sudden and involuntary dose tapering. The agency cited reports of "serious withdrawal symptoms, uncontrolled pain, psychological distress, and suicide" among patients who have been inappropriately cut off from the painkillers.

One day later, CDC director Robert Redfield issued a clarification of the center’s 2016 “Guideline for Prescribing Opioids for Chronic Pain,” which includes cautions about prescribing doses above specific thresholds. Redfield’s letter emphasized that these thresholds were not intended for patients already taking high doses for chronic pain but were meant to guide first-time opioid prescriptions. The letter follows another recent clarification sent by the CDC to oncology and hematology groups, emphasizing that cancer patients and sickle cell patients were largely exempt from the guideline. (...)

Tougher rules on opioid prescriptions from federal and state authorities, health insurance companies and pharmacies, were an understandable response to the nation’s “opioid crisis,” an epidemic of abuse and overdose that led to a 345 percent spike in U.S. deaths related to legal and illicit opioids between 2001 and 2016. Since 2016, most fatal overdoses have involved illegally produced fentanyl sold on the street, according to CDC data, but past research has shown that many victims got started with a prescription opioid such as oxycodone.

The CDC’s 2016 guideline was aimed at reining in irresponsible prescribing practices. (The agency’s own analysis showed that prescriptions for opioids had quadrupled between 1999 and 2010.) The guideline stressed that the first-line treatments for chronic pain are non-opioid medications and non-drug approaches such as physical therapy. When resorting to opioids, the guideline urged doctors to prescribe “the lowest effective dosage,” to carefully size up risks versus benefits when raising doses above 50 morphine milligram equivalents (MME) a day, and to “carefully justify a decision” to go to 90 MME or above.
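To make those thresholds concrete, here is a minimal illustrative sketch of how a daily regimen is converted to morphine milligram equivalents, using the commonly published conversion factors for a few oral opioids; the example prescription is hypothetical, not from the article:

```python
# Illustrative only: convert a daily opioid regimen to morphine milligram
# equivalents (MME), the unit behind the CDC guideline's 50 and 90 MME thresholds.
# Conversion factors below are the commonly published values for oral forms.

MME_FACTORS = {
    "morphine": 1.0,
    "oxycodone": 1.5,
    "hydrocodone": 1.0,
    "codeine": 0.15,
}

def daily_mme(prescriptions):
    """prescriptions: list of (drug, mg_per_dose, doses_per_day) tuples."""
    return sum(mg * n * MME_FACTORS[drug] for drug, mg, n in prescriptions)

# Hypothetical example: 10 mg of oxycodone taken four times a day.
regimen = [("oxycodone", 10, 4)]
total = daily_mme(regimen)      # 10 * 4 * 1.5 = 60 MME/day
print(f"{total:.0f} MME/day")   # above the guideline's 50 MME caution threshold
```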

That advice on dosage was widely misinterpreted as a hard limit for all patients. Kertesz has collected multiple examples of letters from pharmacies, medical practices and insurers that incorrectly cite the guideline as a reason to cut off long-term opioid patients.

Frank Gawin, a retired psychiatrist in Hawaii, is one of many chronic pain sufferers ensnared by that kind of mistake. For 20 years he took high-dose opioids (about 400 MME daily) to manage extreme pain from complications of Lyme Disease. Gawin, an expert on addiction himself, was well aware of the risks but notes that he stayed on the same dose throughout those 20 years. “It helped me profoundly and probably extended my career by 10 to 15 years,” he says. About five months ago, his doctor, a pain specialist he prefers not to name, informed Gawin and other patients that she would be tapering everyone below 80 MMEs because she was concerned about running afoul of medical authorities. Gawin has not yet reached that goal, but his symptoms have already returned with a vengeance. “As I am talking to you, I am in pain,” he said in a phone interview. “I’m having trouble concentrating. I’m depleted. I’m not myself.”

Last week’s federal actions could go a long way in informing physicians not to cut off patients like Gawin. Of particular value, say patient advocates and experts, is the emphasis on working together with patients on any plan to taper the drugs. “It’s finally about patient consent,” says Andrea Anderson, former executive director of the Alliance for the Treatment of Intractable Pain, an advocacy group. She notes that the FDA urged doctors to create an individualized plan for patients who do wish to taper and that the agency stated that “No standard opioid tapering schedule exists that is suitable for all patients.”

by Claudia Wallis, Scientific American | Read more:
Image: Getty
[ed. Thanks government, medical community, insurers, prescribers, politicians and media, you've made life miserable for tens of millions of people. See also: Pain Patients to Congress: CDC's Opioid Guideline Is Hurting Us (MedPage Today); Suicides Associated With Forced Tapering of Opiate Pain Treatments (JEC); and How Opioid Critics and Law Firms Profit From Litigation (Pain News Network).]

A New Theory of Obesity

Nutrition researcher Kevin Hall strives to project a Zen-like state of equanimity. In his often contentious field, he says he is more bemused than frustrated by the tendency of other scientists to “cling to pet theories despite overwhelming evidence that they are mistaken.” Some of these experts, he tells me with a sly smile, “have a fascinating ability to rationalize away studies that don’t support their views.”

Among those views is the idea that particular nutrients such as fats, carbs or sugars are to blame for our alarming obesity pandemic. (Globally the prevalence of obesity nearly tripled between 1975 and 2016, according to the World Health Organization. The rise accompanies related health threats that include heart disease and diabetes.) But Hall, who works at the National Institute of Diabetes and Digestive and Kidney Diseases, where he runs the Integrative Physiology section, has run experiments that point fingers at a different culprit. His studies suggest that a dramatic shift in how we make the food we eat—pulling ingredients apart and then reconstituting them into things like frosted snack cakes and ready-to-eat meals from the supermarket freezer—bears the brunt of the blame. This “ultraprocessed” food, he and a growing number of other scientists think, disrupts gut-brain signals that normally tell us that we have had enough, and this failed signaling leads to overeating.

Hall has done two small but rigorous studies that contradict common wisdom that faults carbohydrates or fats by themselves. In both experiments, he kept participants in a hospital for several weeks, scrupulously controlling what they ate. His idea was to avoid the biases of typical diet studies that rely on people’s self-reports, which rarely match what they truly eat. The investigator, who has a physics doctorate, has that discipline’s penchant for precise measurements. His first study found that, contrary to many predictions, a diet that reduced carb consumption actually seemed to slow the rate of body fat loss. The second study, published this year, identified a new reason for weight gain. It found that people ate hundreds more calories of ultraprocessed than unprocessed foods when they were encouraged to eat as much or as little of each type as they desired. Participants chowing down on the ultraprocessed foods gained two pounds in just two weeks.

“Hall’s study is seminal—really as good a clinical trial as you can get,” says Barry M. Popkin, a professor of nutrition at the University of North Carolina at Chapel Hill, who focuses on diet and obesity. “His was the first to prove that ultraprocessed foods are not only highly seductive but that people tend to eat more of them.” The work has been well received, although it is possible that the carefully controlled experiment does not apply to the messy way people mix food types in the real world.

The man who designed the research says he is not on a messianic mission to improve America’s eating habits. Hall admits that his four-year-old son’s penchant for chicken nuggets and pizza remains unshakable and that his own diet could and probably should be improved. Still, he believes his study offers potent evidence that it is not any particular nutrient type but the way in which food is manipulated by manufacturers that plays the largest role in the world’s growing girth. He insists he has no dog in any diet-wars fight but is simply following the evidence. “Once you’ve stepped into one camp and surrounded yourself by the selective biases of that camp, it becomes difficult to step out,” he says. Because his laboratory and research are paid for by the national institute, whatever he finds, Hall notes, “I have the freedom to change my mind. Basically, I have the privilege to be persuaded by data.” (...)

Processed Calories

Hall likes to compare humans to automobiles, pointing out that both can operate on any number of energy sources. In the case of cars, it might be diesel, high-octane gasoline or electricity, depending on the make and model. Similarly, humans can and do thrive on any number of diets, depending on cultural norms and what is readily available. For example, a traditional high-fat/low-carb diet works well for the Inuit people of the Arctic, whereas a traditional low-fat/high-carb diet works well for the Japanese. But while humans have evolved to adapt to a wide variety of natural food environments, in recent decades the food supply has changed in ways to which our genes—and our brains—have had very little time to adapt. And it should come as no surprise that each of us reacts differently to that challenge.

At the end of the 19th century, most Americans lived in rural areas, and nearly half made their living on farms, where fresh or only lightly processed food was the norm. Today most Americans live in cities and buy rather than grow their food, increasingly in ready-to-eat form. An estimated 58 percent of the calories we consume and nearly 90 percent of all added sugars come from industrial food formulations made up mostly or entirely of ingredients—whether nutrients, fiber or chemical additives—that are not found in a similar form and combination in nature. These are the ultraprocessed foods, and they range from junk food such as chips, sugary breakfast cereals, candy, soda and mass-manufactured pastries to what might seem like benign or even healthful products such as commercial breads, processed meats, flavored yogurts and energy bars.

Ultraprocessed foods, which tend to be quite high in sugar, fat and salt, have contributed to an increase of more than 600 available calories per day for every American since 1970. Still, although the rise of these foods correlates with rising body weights, this correlation does not necessarily imply causation. There are plenty of delicious less processed foods—cheese, fatty meats, vegetable oil, cream—that could play an equal or even larger role. So Hall wanted to know whether it was something about ultraprocessing that led to weight gain. “Basically, we wondered whether people eat more calories when those calories come from ultraprocessed sources,” he says. (...)

A Gut-Brain Disconnect

Why are more of us tempted to overindulge in egg substitutes and turkey bacon than in real eggs and hash brown potatoes fried in real butter? Dana Small, a neuroscientist and professor of psychiatry at Yale University, believes she has found some clues. Small studies the impact of the modern food environment on brain circuitry. Nerve cells in the gut send signals to our brains via a large conduit called the vagus nerve, she says. Those signals include information about the amount of energy (calories) coming into the stomach and intestines. If information is scrambled, the mixed signal can result in overeating. If “the brain does not get the proper metabolic signal from the gut,” Small says, “the brain doesn’t really know that the food is even there.”

Neuroimaging studies of the human brain, done by Small and others, indicate that sensory cues—smells and colors and texture—that accompany foods with high-calorie density activate the striatum, a part of the brain involved in decision-making. Those decisions include choices about food consumption.

And that is where ultraprocessed foods become a problem, Small says. The energy used by the body after consuming these foods does not match the perceived energy ingested. As a result, the brain gets confused in a manner that encourages overeating. For example, natural sweeteners—such as honey, maple syrup and table sugar—provide a certain number of calories, and the anticipation of sweet taste prompted by these foods signals the body to expect and prepare for that calorie load. But artificial sweeteners such as saccharin offer the anticipation and experience of sweet taste without the energy boost. The brain, which had anticipated the calories and now senses something is missing, encourages us to keep eating.

To further complicate matters, ultraprocessed foods often contain a combination of nutritive and nonnutritive sweeteners that, Small says, produces surprising metabolic effects that result in a particularly potent reinforcement effect. That is, eating them causes us to want more of these foods. “What is clear is that the energetic value of food and beverages that contain both nutritive and nonnutritive sweeteners is not being accurately communicated to the brain,” Small notes. “What is also clear is that Hall has found evidence that people eat more when they are given highly processed foods. My take on this is that when we eat ultraprocessed foods we are not getting the metabolic signal we would get from less processed foods and that the brain simply doesn’t register the total calorie load and therefore keeps demanding more.”

by Ellen Ruppel Shell, Scientific American |  Read more:
Image: Jamie Chung (photo); Amy Henry (prop styling); Source: “NOVA. The Star Shines Bright,” by Carlos A. Monteiro et al., in World Nutrition, Vol. 7, No. 1; January-March 2016

How the Puffy Vest Became a Symbol of Power

In a recent episode of the HBO series "Succession", the powerful Roy clan at the centre of the show attend a conference for billionaires at an exclusive mountain resort.

The audience learns everything they need to know about the characters from their puffer vests. Kendall Roy, played by Jeremy Strong, wears a Cucinelli puffer vest, and his brother Roman (Kieran Culkin) wears a Ralph Lauren one. Their brother-in-law, Tom (Matthew Macfadyen), sports a shiny Moncler number. When they enter a cocktail party, they are surrounded by wealthy folk decked out in puffer vests of their own.

Michelle Matland, the costume designer for "Succession", which also airs on Sky Atlantic in the UK, told BoF the vests were chosen precisely because they have become so closely associated with the one percent. From tech titans like Amazon founder Jeff Bezos to billionaire investor John Henry to Lachlan Murdoch, son of media titan Rupert Murdoch and one of the rumoured inspirations for "Succession," the bulky, down-filled puffer vest has become the fashion item of choice for the ultra-wealthy.

“The costume was stolen directly from the world of billionaires,” Matland said. “[The Roys] are self-aware, and know how to take advantage of situations, so of course they are going to be wearing puffy vests. It’s their veneer of strength.”

In addition to serving as a status symbol, puffer vests are also big business for luxury brands. Searches for the item were up 7 percent on Lyst last year. Men who have zero fashion sensibility will happily drop $1,000 or more on a Moncler or Cucinelli puffer vest, said Victoria Hitchcock, a stylist who works with Silicon Valley professionals.

“A lot of these guys don’t want to be too ambitious with their style choices, but will still wear luxury vests because they can stand out with it and still keep their simplicity sort of style,” Hitchcock said.

Brands like Moncler, Herno, Canada Goose and Cucinelli incorporate the puffer vests into their permanent collections. Balenciaga, Burberry and Prada are among the luxury brands that also sell puffers.

Non-luxury brands like Patagonia, Uniqlo and the North Face count the puffer vest among their best sellers too. (These brands are better known for the puffer vest’s popular cousin, the fleece vest, which has itself become so popular in New York’s business and tech worlds that it is sometimes referred to as the “Midtown Uniform.”)

The puffer vest is an offshoot of the puffer jacket, invented by Australian chemist George Finch, who made a coat from balloon fabric and feather down for an early attempt by British explorers to climb Mount Everest in 1922. Brands like Eddie Bauer and the North Face took his design to the masses, but the product was mainly reserved for outdoor enthusiasts and the working class, said fashion historian Laura McLaws Helms.

“It was popular in the labour movement, at construction sites because it was a utilitarian garment,” she said. “That the richest men in America are wearing puffer vests is a huge leap from its roots.”

Over the last five years, though, the puffer vest has been co-opted by the tech industry, initially via brands like Patagonia. The item also rode the nostalgia trend, as men who grew up watching Marty McFly from “Back to the Future” in his red puffer entered the workforce.

by Chavie Lieber, BoF |  Read more:
Image: Rachel Deeley for BoF

Saturday, September 28, 2019

Metric


Genie Espinosa
via:

Ready, Fire, Aim: U.S. Interests in Afghanistan, Iraq, and Syria

I have been asked to join my fellow panelists in speaking about U.S. interests in Afghanistan, Iraq, and Syria. For some reason, our government has never been able to articulate these interests, but, judging by the fiscal priority Americans have assigned to these three countries in this century, they must be immense – almost transcendent. Since we invaded Afghanistan in 2001, we have spent more than $5 trillion and incurred liabilities for veterans’ disabilities and medical expenses of at least another trillion dollars, for a total of something over $6 trillion for military efforts alone.

This is money we didn’t spend on sustaining, still less improving, our own human and physical infrastructure or current and future well-being. We borrowed almost all of it. Estimates of the costs of servicing the resulting debt run to an additional $8 trillion over the next few decades. Future generations of Americans will suffer from our failure to invest in education, scientific research, and transportation. On top of that, we have put them in hock for at least $14 trillion in war debt. Who says foreign policy is irrelevant to ordinary Americans?

At the moment, it seems unlikely our descendants will feel they got their money’s worth. We have lost or are losing all our so-called “forever wars.” Nor are the people of West Asia and North Africa likely to remember our interventions favorably. Since we began them in 2001, well over one million individuals in West Asia have died violent deaths. Many times more than that have died as a result of sanctions, lost access to medical care, starvation, and other indirect effects of the battering of infrastructure, civil wars, and societal collapse our invasions have inflicted on Afghanistan, Iraq, Libya, and Syria and their neighbors.

The so-called “Global War on Terrorism” launched in Afghanistan in 2001 has metastasized. The U.S. armed forces are now combating “terrorism” (and making new enemies) in eighty countries. In Syria alone, where since 2011 we have bombed and fueled proxy wars against both the Syrian government and its extremist foes, nearly 600,000 have died. Eleven million have been driven from their homes, five million of them into refuge in other countries.

Future historians will struggle to explain how an originally limited post-9/11 punitive raid into Afghanistan morphed without debate into a failed effort to pacify and transform the country. Our intervention began on October 7, 2001. By December 17, when the battle of Tora Bora ended, we had accomplished our dual objectives of killing, capturing, or dispersing the al Qaeda architects of “9/11” and thrashing the Taliban to teach them that they could not afford to give safe haven to the enemies of the United States. We were well placed then to cut the deal we now belatedly seek to make, demanding that the governing authorities in Afghanistan deny their territory to terrorists with global reach as the price of our departure, and promising to return if they don’t.

Instead, carried away with our own brilliance in dislodging the Islamic Emirate from Kabul and the ninety percent of the rest of the country it then controlled, we nonchalantly moved the goal posts and committed ourselves to bringing Afghans the blessings of E PLURIBUS UNUM, liberty, and gender equality, whether they wanted these sacraments or not. Why? What interests of the United States – as opposed to ideological ambitions – justified this experiment in armed evangelism?

The success of policies is measurable only by the extent to which they achieve their objectives and serve a hierarchy of national interests. When, as in the case of the effort to pacify Afghanistan and reengineer Iraq, there is no coherent statement of war aims, one is left to evaluate policies in terms of their results. And one is also left to wonder what interests those policies were initially meant to support or advance.

In the end, our interests in Afghanistan seem to have come down to avoiding having to admit defeat, keeping faith with Afghans whose hopes we raised to unrealistic levels, and protecting those who have collaborated with us. In other words, we have acted in accordance with what behavioral economists call “the fallacy of sunk costs.” We have thrown good money after bad. We have doubled down on a losing game. We have reinforced failure.

To justify the continuation of costly but unsuccessful policies, our leaders have cited the definitive argument of all losers, the need to preserve “credibility.” This is the theory that steadfastness in counterproductive behavior is better for one’s reputation than acknowledging impasse and changing course. By hanging around in Afghanistan, we have indeed demonstrated that we value obduracy above strategy, wisdom, and tactical flexibility. It is hard to argue that this has enhanced our reputation internationally. (...)

By taking over Iraq, we successfully prevented Baghdad from transferring nonexistent weapons to terrorist groups that did not exist until our thoughtless vivisection of Iraqi society created them. We also destroyed Iraq as the balancer and check on Iran’s regional ambitions, an interest that had previously been a pillar of our policies in the Persian Gulf. This made continued offshore balancing impossible and compelled us for the first time to station U.S. forces in the region permanently. This, in turn, transformed the security relationship between the Gulf Arabs and Iran from regional rivalry into military confrontation, producing a series of proxy wars in which our Arab protégés have demanded and obtained our support.

Our intervention in Iraq ignited long-smoldering divisions between Shiite and Sunni Islam, fueling passions that have undermined religious tolerance and fostered terrorism both regionally and worldwide. The only gainers from our misadventures in Iraq were Iran and Israel, which saw their most formidable Arab rival flattened, and, of course, the U.S. defense and homeland security budgets, which fattened on the resulting threat of terrorist blowback. Ironically, the demise of Iraq as an effective adversary thrust Israel into enemy deprivation syndrome, leading to its (and later our) designation of Iran as the devil incarnate. Israel, joined by Saudi Arabia and the UAE, believes that the cure for its apprehensions about Iran is for the U.S. military to crush it on their behalf.

The other principal legacies of our lurch into strategy-free militarism, aside from debt and a bloated defense budget, are our now habitual pursuit of military solutions to non-military problems, our greatly diminished deference to foreign sovereignty and international law, domestic populism born of war weariness and disillusionment with Washington, declining willingness of allies to follow us, the incitement of violent anti-Americanism among the Muslim fourth of humanity, the entrenchment of Islamophobia in U.S. politics, and the paranoia and xenophobia these developments have catalyzed among Americans. (...)

To say, “we meant well” is true – as true of the members of our armed forces as it is of our diplomats and development specialists. But good intentions are not a persuasive excuse for the outcomes wars contrive. We have hoped that the many good things we have done to advance human and civil rights in Afghanistan and Iraq might survive our inevitable disengagement from both. They won’t. The years to come are less likely to gratify us than to force us to acknowledge that the harm we have done to our own country in this century vastly exceeds the good we have done abroad.

by Chas. W. Freeman |  Read more:
[ed. See also: 10 Ways that the Climate Crisis and Militarism are Intertwined (Counterpunch).]

For All Fankind

When Marvel Studios was founded in the summer of 1996, superheroes were close to irrelevant. Comic book sales were in decline, Marvel’s initially popular Saturday morning cartoons were waning, and the company’s attempts over the previous decades to break through in Hollywood had gone nowhere, with movies based on Daredevil, the Incredible Hulk, and Iron Man all having been optioned without any film being made. Backs against the wall, Marvel’s executives realized that their only chance of getting traction in La La Land was by doing the legwork themselves.

The company’s fortunes hardly turned around overnight. Marvel was forced to fire a third of its employees and declare bankruptcy a few months after launching its film studio, and the movie rights to Spider-Man—then the company’s most valuable piece of intellectual property—were sold off in the ensuing years in a frantic attempt to raise cash. It wasn’t until 2008 that Marvel Studios finally released an Iron Man movie—the choice of protagonist having less to do with that hero’s particular following than the ease with which the toy company that had taken control of Marvel during its bankruptcy could market action figure tie-ins. Against all expectations, Iron Man made half a billion dollars worldwide. Just over a year later, Disney purchased Marvel Studios for $4 billion. A decade after that, Avengers: Endgame would break the weekend box-office record—set by the previous Avengers installment—and net over $2 billion in less than two weeks.

New York magazine’s Vulture vertical was launched the year before Iron Man’s release, promising “a serious take on lowbrow culture.” A few months later, Chris Hardwick began a blog called “The Nerdist,” which quickly pivoted from its original raison d’être of “palatable tech” to dispatches on ephemera from the original Transformers movie and guest posts about DC’s Silver Age reboot. Today, each site serves as a lodestar for overlapping fandoms, with Vulture hosting Game of Thrones, Stranger Things, and The Bachelor content, while Nerdist continues to concentrate on legacy franchises like Star Wars and Marvel. As their staffs crank out daily updates, prognostications, and YouTube clips on these and many other television and movie series, their success has pressured older outlets to shift from a more traditional, criticism-centric format to a menu of recaps and listicles, as well as inspiring newer, general-interest sites like The Ringer and Vox to integrate fan-pleasing content deeply into their pop culture coverage.

As the fandom press has risen, culture has been reorganized around a cluster of franchises that would have been dismissed by the critics of previous generations as the province of children, nerds, or—most especially—nerdy children. Success in Hollywood now has as much to do with the number of people who see a particular film or TV show as with how easily its intellectual property can be franchised. Why settle for one Iron Man when you could have over a decade of Avengers movies? For both Hollywood and the digital newsrooms of Vulture, Nerdist, and their imitators, the logic is obvious: cater to a readymade fanbase, and the dollars will take care of themselves.

Fishing for Eyeballs

In a 2016 Variety guest column, Hollywood’s shift from chasing viewers to pursuing fans was convincingly attributed to “digital empowerment” by the cultural anthropologist-cum-industry consultant Susan Kresnicka. Including herself among the new legions of fans, she writes that combining a capability for “consuming, connecting and creating on our own terms” with “access to multitudes of others who share our passion for a show, movie, book, story, character, sport, band, artist, video game, brand, product, hobby, etc.” galvanizes mere interest into a commercial force that drives enthusiasts to “watch more, share more, buy more, evangelize more, participate more, help more.”

“Marketing strategies are increasingly crafted to drive not just breadth but depth of engagement,” Kresnicka notes. “And the conversation has in large part moved from how to ‘manage’ fans to how to ‘relate’ to fans.” A classic example of this shift is the slow-drip of news that precedes every new Star Wars or superhero film, a process that typically begins more than two years ahead of a theatrical release. First comes the announcement about the movie itself. Next, rumors swirl about who will direct and star. In front of a ballroom of cosplayers at San Diego Comic Con, a teaser will ramp up speculation even further. The proper trailer will arrive months later, dropped online with no advance warning to incite delirium on social media. All the while, an armada of YouTube speculators cultivate theories, half-baked or coolly rational, about how this latest installment will fit into a sometimes branching, sometimes ouroborosian plot arc that spans decades.

Studios have come to understand that by lengthening each film’s advance publicity cycle, fans are given more opportunities to demonstrate their fandom, amplifying the FOMO of casual viewers such that they, too, are driven to see what all the fuss is about. Each new crumb of information becomes a reason to post on Facebook, a kernel of brand awareness to drive the decision to buy an overpriced hoodie at the mall. Multiplying that effect is the fact that the lead times for these films are now so long that there is never not a new movie to talk about. Solo: A Star Wars Story didn’t live up to your expectations? Good news, the cast for Episode IX has just been announced! (...)

Such mining of the smallest news drops for content is everywhere in the fandom press. But what really sets these outlets apart from buttoned-up operations like the New York Times and CNN—each more than happy to crib a few clicks by throwing a link to the newest Star Wars teaser up on their website—is the lengths to which they’ll go to dissect the utterly banal. The release of the Star Wars: The Rise of Skywalker trailer merited not only a quick embedded video post from Vulture but also a thousand-word follow-up analyzing its title.

Titles, as it turns out, are irresistible to the fandom press. Last December, Netflix released a clip that did nothing beyond reveal the name of each episode in the third season of Stranger Things, which flashed briefly onscreen while spooky music played. The one-minute video merited a blog post on Vulture. And Nerdist. And Entertainment Weekly. And Variety. Once a fandom has been identified, every piece of content, no matter how inconsequential, becomes an excuse to go fishing for eyeballs.

by Kyle Paoletta, The Baffler | Read more:
Image: Zoë van Dijk