Friday, October 18, 2019

Flacks and Figures

I'm getting paid $1,000 for this article. Last year, I made roughly $50,000 between a 7:30 a.m. to 3:30 p.m. freelance gig writing celebrity news and publishing some one-off articles. I grew up middle class, though my divorced father eventually worked his way well into the upper-middle class. Financially speaking, I’m fine, though I live alone in Toronto, and I likely won’t be able to afford a house unless my parents die or my dad provides the cash for a down payment. You probably don’t need to know these details, but it may color what I say next: it is my opinion that wealthy journalists should disclose their wealth when matters of finance, taxation, or any public policy they report on will affect their bottom line.

Back in January, Anderson Cooper, scion of the Vanderbilt family, conducted a one-on-one 60 Minutes interview with the newly sworn-in congressional representative from New York’s 14th District, Alexandria Ocasio-Cortez. The splashy interview generated its biggest moment when Cooper suggested that Ocasio-Cortez’s policy agenda of Medicare for All and the Green New Deal was “radical,” asking her, “Do you call yourself a radical?” “Yeah. You know, if that’s what radical means, call me a radical,” she responded, defiantly.

Less viral but more telling was the exchange leading up to that moment, with Cooper pressing Ocasio-Cortez about the revenue needed to pay for her programs. “This would require, though, raising taxes,” he said, as though the very notion were absurd. When Ocasio-Cortez agreed that “people are going to have to start paying their fair share in taxes,” Cooper pressed her again, almost annoyed: “Do you have a specific on the tax rate?” This gave the first-year congresswoman space to explain top marginal tax rates because Cooper and the 60 Minutes producers evidently had no interest in doing so themselves. Which gets to what was so clarifying about the back-and-forth: not Cooper’s questions about how a politician intended to pay for her agenda, but his disbelief verging on indignation at the prospect of a tax increase for the wealthiest Americans. It’s an idea with broad popular support, though perhaps not among the Vanderbilts.

Imagine, for a moment, if, at the top of the segment, Cooper had told his audience—reminded them—that he is a multimillionaire. That he is the primetime anchor at one of the country’s biggest cable news outlets. Though CNN and CBS don’t disclose the value of their contracts with on-air talent, pegging Cooper’s earnings in the tens of millions isn’t a stretch. Take a look at Megyn Kelly’s $30 million exit package from NBC News—after being fired for being racist, no less!—and you’ll get a good sense of the exorbitant salaries networks pay their top anchors. So, imagine it. Cooper, before launching into a loaded line of questioning about Ocasio-Cortez’s tax policy, openly states to the audience, “In the interest of full disclosure: I, Anderson Cooper, heir to a vast fortune, currently make more money per year than you plebs at home could dream of, and I would be directly affected by Ocasio-Cortez’s proposed 70 percent marginal tax on incomes over $10 million.” Would he then have had the gall to highlight the tax increase? And would any reasonable viewer have bought into his bullshit?

Avoiding conflicts of interest is basic ethical practice for journalists. Check any news organization or journalism school’s handbook on ethics, and you’ll find the concept is central to maintaining credibility in journalism. “Any personal or professional interests that conflict with [our allegiance to the public], whether in appearance or in reality, risk compromising our credibility,” explains NPR’s Ethics Handbook. “We are vigilant in disclosing to both our supervisors and the public any circumstances where our loyalties may be divided—extending to the interests of spouses and other family members—and when necessary, we recuse ourselves from related coverage.”

Watching for potential conflicts, understanding them, acknowledging and disclosing them, publicly where necessary, are among the core jobs of any journalist with a shred of self-respect. Consumers of journalism, meanwhile, are already accustomed to such disclosures, which often come in the form of “so-and-so company is owned by our parent company.” When spouses or family members are involved, a recusal is usually in order, but it’s not unheard of for a journalist or news anchor to state that one of the subjects in a story is a friend. This is all a matter of simple honesty, though it’s not always adhered to in the strictest terms. Still, the prejudicial effects of a journalist’s net worth never enter into the equation at all.

Search through various publications’ codes of ethics, from the Washington Post to the New York Times, and the directly named conflicts of interest tend to fall into categories of familial relation, partisan work, direct financial entanglements, work outside the organization, and the acceptance of gifts, travel, or direct payment. Listed nowhere is the matter of salary or wealth. Given a few moments’ thought, it’s staggering to consider all of the effort that went into the New York Times’ eleven-thousand-word “Ethical Journalism” handbook without its writers ever considering, at least on the page, their salaries or inherited wealth as potential conflicts. Then again, the paper that employs Bari Weiss to garner hate-clicks may not be the ideal place to search for structural critiques of capitalism.

by Corey Atad, The Baffler |  Read more:
Image: Zoë van Dijk

Pentagon Budget Could Pay for Medicare for All While Creating Progressive Foreign Policy Americans Want

The Institute for Policy Studies on Thursday shared the results of extensive research into how the $750 billion U.S. military budget could be significantly slashed, freeing up annual funding to cover the cost of Medicare for All—calling into question the notion that the program needs to create any tax burden whatsoever for working families.

Lindsay Koshgarian, director of the National Priorities Project at the Institute for Policy Studies (IPS), took aim in a New York Times op-ed at a "chorus of scolds" from both sides of the aisle who say that raising middle class taxes is the only way to pay for Medicare for All. The pervasive claim was a primary focus of Tuesday night's debate, while Medicare for All proponents Sens. Bernie Sanders (I-Vt.) and Elizabeth Warren (D-Mass.) attempted to focus on the dire need for a universal healthcare program.

At the Democratic presidential primary debate on CNN Tuesday night, Sen. Elizabeth Warren (D-Mass.) was criticized by some opponents for saying that "costs will go down for hardworking, middle-class families" under Medicare for All, without using the word "taxes." Sen. Bernie Sanders (I-Vt.), on the other hand, clearly stated that taxes may go up for some middle class families but pointed out that the increase would be more than offset by the fact that they'll no longer have to pay monthly premiums, deductibles, and other medical costs.

"All these ambitious policies of course will come with a hefty price tag," wrote Koshgarian. "Proposals to fund Medicare for All have focused on raising taxes. But what if we could imagine another way entirely?"

"Over 18 years, the United States has spent $4.9 trillion on wars, with only more intractable violence in the Middle East and beyond to show for it," she added. "That's nearly the $300 billion per year over the current system that is estimated to cover Medicare for All (though estimates vary)."

"While we can't un-spend that $4.9 trillion," Koshgarian continued, "imagine if we could make different choices for the next 20 years."

Koshgarian outlined a multitude of areas in which the U.S. government could shift more than $300 billion per year, currently used for military spending, to pay for a government-run healthcare program. Closing just half of U.S. military bases, for example, would immediately free up $90 billion.

"What are we doing with that base in Aruba, anyway?" Koshgarian asked.

by Julia Conley, Common Dreams |  Read more:
Image: David B. Gleason/Flickr/cc

My Adventures in Psychedelia

It all began with a book review. Last year, I read an article by David Aaronovitch in The Times of London about Michael Pollan’s How to Change Your Mind. The book concerns a resurgence of interest in psychedelic drugs, which were widely banned after Timothy Leary’s antics with LSD, starting in the late 1960s, in which he encouraged American youth to “turn on, tune in, and drop out.” In recent years, though, scientists have started to test therapeutic uses of psychedelics for an extraordinary range of ailments, including depression, addiction, and end-of-life angst.

Aaronovitch mentioned in passing that he had been intrigued enough to book a “psychedelic retreat” in the Netherlands run by the British Psychedelic Society, though, in the event, his wife put her foot down and he canceled. To try psychedelics was something I’d secretly hankered after doing ever since I was a teenager, but I was always too cautious and risk-averse. As I got older, the moment seemed to pass. Today I am a middle-aged journalist working in London, the finance editor of The Economist, a wife, mother, and, to all appearances, a person totally devoid of countercultural tendencies.

And yet… on impulse, I arranged to go. Only after I booked did I tell my husband. He was bemused, but said it was fine by him, as long as I didn’t decide while I was under the influence that I didn’t love him anymore. My eighteen-year-old son thought the whole thing was hilarious (it turns out that your mother tripping is a good way to make drugs seem less cool).
***
One day, after closing that week’s finance and economics section of The Economist, I boarded a Eurostar train to Amsterdam. The next day, I met my fellow travelers—ten of them in all, from various parts of Europe and the United States—in a headshop in Amsterdam. Per the instructions we’d received, we each bought two one-ounce bags of “High Hawaiian” truffles—squishy, light brown fungi in a vacuum pack—at a discounted price of 40 euros, and headed off for four days in a converted barn in the countryside.

I had a foreboding that, besides whatever psychedelic experience I might have, there would also be a lot of chanting and holding strangers’ hands. I’m an atheist and devout skeptic: I don’t believe in chi or acupuncture, and have no time for crystals and chimes. But, mindful that it’s arrogant to remain aloof in such circumstances, I decided I would throw myself into whatever was asked of me.

And so, I not only did yoga and meditation, but also engaged in lengthy periods of shaking my whole body with my eyes closed and “vocal toning”—letting a sound, any sound, escape on every out-breath. I looked into the eyes of someone I had just met and asked, again and again, as instructed: “What does freedom mean to you?” I joined “sharing circles.” All this was intended to prepare us for the trip. The facilitators talked of the importance of your “set” (or state of mind) and of feeling safe and comfortable in your “setting” (where you are and who you’re with).

One of my fellow trippers had taken part in a psilocybin trial at King’s College London. He and three others received at random either a placebo or a low, normal, or high dose of the drug in pill form. It was obvious, he said, that he was the only one given the placebo. To make bad trips less likely, the researchers had advised the participants not to resist anything that happened: “If you see a dragon, go toward it.” The misery of sitting, stone sober, in a room with three people who were evidently having a fascinating time was why he had come on this retreat. “They all had dragons,” he told me. “I wanted a dragon, too.”

People who have taken psychedelics commonly rank the experience as among the most profound of their lives. For my part, I wasn’t searching for myself, or God, or transcendence; nor, with a happy, fulfilling life, was I looking for relief from depression or grief. But I was struck by something Pollan discusses in his book: studies in which therapists used trips to treat addiction.

I’ve never smoked and have no dramatic vices, but the habits of drinking coffee through the morning and a glass of wine or two most evenings had crept up on me in recent years. Neither seemed serious but both had come to feel like necessities—part of a larger pattern of a rushed, undeliberative life with too much done out of compulsion, rather than desire or pleasure. It is the middle-aged rather than the young who could most benefit from an “experience of the numinous,” said Carl Jung, quoted by Pollan.

by Helen Joyce, NYRB |  Read more:
Image: United Archives/Carl Simon/Bridgeman Images

Thursday, October 17, 2019

Bill Kirchen


[ed. Telecaster master and rockabilly legend. See: this clip where he talks about unique guitar modifications and techniques; and this one, demonstrating how he gets a "pop" out of guitar riffs.]

Alexander Kanoldt, Still Life With Guitar (Still Life VI), 1926

Eileen Williams, Whale Watching Alaska

Crash Course

How Boeing's Managerial Revolution Created the 737 MAX Disaster.

Nearly two decades before Boeing’s MCAS system crashed two of the plane-maker’s brand-new 737 MAX jets, Stan Sorscher knew his company’s increasingly toxic mode of operating would create a disaster of some kind. A long and proud “safety culture” was rapidly being replaced, he argued, with “a culture of financial bullshit, a culture of groupthink.”


Sorscher, a physicist who’d worked at Boeing for more than two decades and had led negotiations there for the engineers’ union, had become obsessed with management culture. He said he didn’t previously imagine Boeing’s brave new managerial caste creating a problem as dumb and glaringly obvious as MCAS (or the Maneuvering Characteristics Augmentation System, as a handful of software wizards had dubbed it). Mostly he worried about shriveling market share driving sales and head count into the ground, the things that keep post-industrial American labor leaders up at night. On some level, though, he saw it all coming; in a 2002 report that no one read, he even demonstrated how the costs of a grounded plane would dwarf the short-term savings achieved from the latest outsourcing binge.

Sorscher had spent the early aughts campaigning to preserve the company’s estimable engineering legacy. He had mountains of evidence to support his position, mostly acquired via Boeing’s 1997 acquisition of McDonnell Douglas, a dysfunctional firm with a dilapidated aircraft plant in Long Beach and a CEO who liked to use what he called the “Hollywood model” for dealing with engineers: Hire them for a few months when project deadlines are nigh, fire them when you need to make numbers. In 2000, Boeing’s engineers staged a 40-day strike over the McDonnell deal’s fallout; while they won major material concessions from management, they lost the culture war. They also inherited a notoriously dysfunctional product line from the corner-cutting market gurus at McDonnell.


And while Boeing’s engineers toiled to get McDonnell’s lemon planes into the sky, their own hopes of designing a new plane to compete with Airbus, Boeing’s only global market rival, were shriveling. Under the sway of all the naysayers who had called out the folly of the McDonnell deal, the board had adopted a hard-line “never again” posture toward ambitious new planes. Boeing’s leaders began crying “crocodile tears,” Sorscher claimed, about the development costs of 1995’s 777, even though some industry insiders estimate that it became the most profitable plane of all time. The premise behind this complaining was silly, Sorscher contended in PowerPoint presentations and a Harvard Business School-style case study on the topic. A return to the “problem-solving” culture and managerial structure of yore, he explained over and over again to anyone who would listen, was the only sensible way to generate shareholder value. But when he brought that message on the road, he rarely elicited much more than an eye roll. “I’m not buying it,” was a common response. Occasionally, though, someone in the audience was outright mean, like the Wall Street analyst who cut him off mid-sentence:


“Look, I get it. What you’re telling me is that your business is different. That you’re special. Well, listen: Everybody thinks his business is different, because everybody is the same. Nobody. Is. Different.”


And indeed, that would appear to be the real moral of this story: Airplane manufacturing is no different from mortgage lending or insulin distribution or make-believe blood analyzing software—another cash cow for the one percent, bound inexorably for the slaughterhouse. In the now infamous debacle of the Boeing 737 MAX, the company produced a plane outfitted with a half-assed bit of software programmed to override all pilot input and nosedive when a little vane on the side of the fuselage told it the nose was pitching up. The vane was also not terribly reliable, possibly due to assembly line lapses reported by a whistle-blower, and when the plane processed the bad data it received, it promptly dove into the sea.


It is understood, now more than ever, that capitalism does half-assed things like that, especially in concert with computer software and oblivious regulators: AIG famously told investors it was hard for management to contemplate “a scenario within any kind of realm of reason that would see us losing one dollar in any of those transactions” that would, a few months later, lose the firm well over $100 billion—but hey, the risk management algorithms had been wrong. A couple of years later, a single JP Morgan trader lost $6 billion because someone had programmed one of the cells in the bank’s risk management spreadsheet to divide two numbers by their sum instead of their average. Boeing was not, of course, a hedge fund: It was way better, a stock that had more than doubled since the Trump inauguration, outperforming the Dow in the 22 months before Lion Air 610 plunged into the Java Sea.
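
To see how small that spreadsheet error was, here is a minimal sketch with invented numbers (the real model was a far more elaborate value-at-risk calculation, so this is illustrative only): dividing a change by the sum of two rates rather than their average roughly halves the result, making risk look about half as large as it is.

```python
# Illustrative only: a one-cell error of the kind described above, with made-up numbers.
old_rate, new_rate = 0.042, 0.050

correct = (new_rate - old_rate) / ((old_rate + new_rate) / 2)  # divide by the average
buggy = (new_rate - old_rate) / (old_rate + new_rate)          # divide by the sum

print(f"correct: {correct:.3f}")  # ~0.174
print(f"buggy:   {buggy:.3f}")    # ~0.087 -- the risk estimate is roughly halved
```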


And so there was something unsettlingly familiar when the world first learned of MCAS in November, about two weeks after the system’s unthinkable stupidity drove the two-month-old plane and all 189 people on it to a horrific death. It smacked of the sort of screwup a 23-year-old intern might have made—and indeed, much of the software on the MAX had been engineered by recent grads of Indian software-coding academies making as little as $9 an hour, part of Boeing management’s endless war on the unions that once represented more than half its employees. Down in South Carolina, a nonunion Boeing assembly line that opened in 2011 had for years churned out scores of whistle-blower complaints and wrongful termination lawsuits packed with scenes wherein quality-control documents were regularly forged, employees who enforced standards were sabotaged, and planes were routinely delivered to airlines with loose screws, scratched windows, and random debris everywhere. The MCAS crash was just the latest installment in a broader pattern so thoroughly ingrained in the business news cycle that the muckraking finance blog Naked Capitalism titled its first post about MCAS “Boeing, Crapification and the Lion Air Crash.”


But not everyone viewed the crash with such a jaundiced eye—it was, after all, the world’s first self-hijacking plane. Pilots were particularly stunned, because MCAS had been a big secret, largely kept from Boeing’s own test pilots, mentioned only once in the glossary of the plane’s 1,600-page manual, left entirely out of the 56-minute iPad refresher course that some 737-certified pilots took for MAX certification, and—in a last-minute edit—removed from the November 7 emergency airworthiness directive the Federal Aviation Administration had issued two weeks after the Lion Air crash, ostensibly to “remind” pilots of the protocol for responding to a “runaway stabilizer.” Most pilots first heard about MCAS from their unions, which had in turn gotten wind of the software from a supplementary bulletin Boeing sent airlines to accompany the airworthiness directive. Outraged, they took to message boards, and a few called veteran aerospace reporters like The Seattle Times’ Dominic Gates, The Wall Street Journal’s Andy Pasztor, and Sean Broderick at Aviation Week—who in turn interviewed engineers who seemed equally shocked. Other pilots, like Ethiopian Airlines instructor Bernd Kai von Hoesslin, vented to their own corporate management, pleading for more resources to train people on the scary new planes—just weeks before von Hoesslin’s carrier would suffer its own MAX-engineered mass tragedy.
 (...)

Simulator training for Southwest’s 9,000 pilots would have been a pain, but hardly ruinous; aviation industry analyst Kit Darby said it would cost about $2,000 a head. It was also unlikely: The FAA had three levels of “differences” training that wouldn’t have necessarily required simulators. But the No Sim Edict would haunt the program; it basically required any change significant enough for designers to worry about to be concealed, suppressed, or relegated to a footnote that would then be redacted from the final version of the MAX. And that was a predicament, because for every other airline buying the MAX, the selling point was a major difference from the last generation of 737: unprecedented fuel efficiency in line with the new Airbus A320neo.


The MAX and the Neo derived their fuel efficiency from the same source: massive “LEAP” engines manufactured by CFM, a 50-50 joint venture of GE and the French conglomerate Safran. The engines’ fans were 20 inches, or just over 40 percent, larger in diameter than the original 737’s Pratt & Whitneys, and the engines themselves weighed in at approximately 6,120 pounds, about twice the weight of the original engines. The planes were also considerably longer, heavier, and wider of wingspan. What they couldn’t be, without redesigning the landing gear and really jeopardizing the grandfathered FAA certification, was taller, and that was a problem. The engines were too big to tuck into their original spot underneath the wings, so engineers mounted them slightly forward, just in front of the wings.


This alteration created a shift in the plane’s center of gravity pronounced enough that it raised a red flag when the MAX was still just a model plane about the size of an eagle, running tests in a wind tunnel. The model kept botching certain extreme maneuvers, because the plane’s new aerodynamic profile was dragging its tail down and causing its nose to pitch up. So the engineers devised a software fix called MCAS, which pushed the nose down in response to an obscure set of circumstances in conjunction with the “speed trim system,” which Boeing had devised in the 1980s to smooth takeoffs. Once the 737 MAX materialized as a real-life plane about four years later, however, test pilots discovered new realms in which the plane was more stall-prone than its predecessors. So Boeing modified MCAS to turn down the nose of the plane whenever an angle-of-attack (AOA) sensor detected a stall, regardless of the speed. That involved giving the system more power and removing a safeguard, but not, in any formal or genuine way, running its modifications by the FAA, which might have had reservations with two critical traits of the revamped system: Firstly, that there are two AOA sensors on a 737, but only one, fatefully, was programmed to trigger MCAS. The former Boeing engineer Ludtke and an anonymous whistle-blower interviewed by 60 Minutes Australia both have a simple explanation for this: Any program coded to take data from both sensors would have had to account for the possibility the sensors might disagree with each other and devise a contingency for reconciling the mixed signals. Whatever that contingency, it would have involved some kind of cockpit alert, which would in turn have required additional training—probably not level-D training, but no one wanted to risk that. So the system was programmed to turn the nose down at the feedback of a single (and somewhat flimsy) sensor. And, for still unknown and truly mysterious reasons, it was programmed to nosedive again five seconds later, and again five seconds after that, over and over ad literal nauseam.
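
A minimal sketch of the design choice described above (hypothetical thresholds and function names; this is not Boeing's code): a trigger fed by a single angle-of-attack vane fires on one bad reading, while a cross-checked version has to define what happens when the two vanes disagree, which is precisely the contingency that would have required a cockpit alert and extra training.

```python
# Illustrative pseudologic only -- hypothetical thresholds, not Boeing's implementation.
STALL_AOA_DEG = 14.0       # assumed stall threshold
DISAGREE_LIMIT_DEG = 5.5   # assumed allowable sensor disagreement

def mcas_single_sensor(aoa_left: float) -> bool:
    """One vane, no cross-check: a single failed sensor can command nose-down trim."""
    return aoa_left > STALL_AOA_DEG

def mcas_cross_checked(aoa_left: float, aoa_right: float) -> bool:
    """Using both vanes forces a plan for disagreement -- an alert, and thus training."""
    if abs(aoa_left - aoa_right) > DISAGREE_LIMIT_DEG:
        raise RuntimeError("AOA DISAGREE: alert the crew, inhibit automatic trim")
    return (aoa_left + aoa_right) / 2 > STALL_AOA_DEG

# A failed left vane reading 22 degrees while the right reads 2:
print(mcas_single_sensor(22.0))   # True -> repeated nose-down commands
try:
    mcas_cross_checked(22.0, 2.0)
except RuntimeError as err:
    print(err)                    # the contingency the program was built to avoid
```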


And then, just for good measure, a Boeing technical pilot emailed the FAA and casually asked that the reference to the software be deleted from the pilot manual.


So no more than a handful of people in the world knew MCAS even existed before it became infamous. Here, a generation after Boeing’s initial lurch into financialization, was the entirely predictable outcome of the byzantine process by which investment capital becomes completely abstracted from basic protocols of production and oversight: a flight-correction system that was essentially jerry-built to crash a plane. “If you’re looking for an example of late stage capitalism or whatever you want to call it,” said longtime aerospace consultant Richard Aboulafia, “it’s a pretty good one.”


by Maureen Tkacik, The New Republic |  Read more:
Image: Getty

Diplomacy for Third Graders


via: here (The Guardian) and here (New Yorker).
[ed. See also: Erdoğan Threw Trump's Insane Letter Right in the Trash (Vanity Fair); and The Madman Has No Clothes (TNR).]

Make Physics Real Again

Why have so many physicists shrugged off the paradoxes of quantum mechanics?

No other scientific theory can match the depth, range, and accuracy of quantum mechanics. It sheds light on deep theoretical questions — such as why matter doesn’t collapse — and abounds with practical applications — transistors, lasers, MRI scans. It has been validated by empirical tests with astonishing precision, comparable to predicting the distance between Los Angeles and New York to within the width of a human hair.

And no other theory is so weird: Light, electrons, and other fundamental constituents of the world sometimes behave as waves, spread out over space, and other times as particles, each localized to a certain place. These models are incompatible, and which one the world seems to reveal will be determined by what question is asked of it. The uncertainty principle says that trying to measure one property of an object more precisely will make measurements of other properties less precise. And the dominant interpretation of quantum mechanics says that those properties don’t even exist until they’re observed — the observation is what brings them about.
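
For readers who want the textbook statement (standard notation, added here for reference rather than drawn from the article), the uncertainty principle bounds how precisely position and momentum can be known at once:

$$\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}$$

Sharpen the position measurement and the spread in momentum must grow, and vice versa.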

“I think I can safely say,” wrote Richard Feynman, one of the subject’s masters, “that nobody understands quantum mechanics.” He went on to add, “Do not keep saying to yourself, if you can possibly avoid it, ‘But how can it be like that?’ because you will get ‘down the drain,’ into a blind alley from which nobody has yet escaped.” Understandably, most working scientists would rather apply their highly successful tools than probe the perplexing question of what those tools mean.

The prevailing answer to that question has been the so-called Copenhagen interpretation, developed in the circle led by Niels Bohr, one of the founders of quantum mechanics. About this orthodoxy N. David Mermin, some intellectual generations removed from Bohr, famously complained, “If I were forced to sum up in one sentence what the Copenhagen interpretation says to me, it would be ‘Shut up and calculate!’” It works. Stop kvetching. Why fix what ain’t broke? Mermin later regretted sounding snotty, but re-emphasized that the question of meaning is important and remains open. The physicist Roderich Tumulka, as quoted in a 2016 interview, is more pugnacious: “Ptolemy’s theory” — of an earth-centered universe — “made perfect sense. It just happened not to be right. But Copenhagen quantum mechanics is incoherent, and thus is not even a reasonable theory to begin with.” This, you will not be surprised to learn, has been disputed.

In What Is Real? the physicist and science writer Adam Becker offers a history of what his subtitle calls “the unfinished quest for the meaning of quantum physics.” Although it is certainly unfinished, it is, as quests go, a few knights short of a Round Table. After the generation of pioneers, foundational work in quantum mechanics became stigmatized as a fringe pursuit, a career killer. So Becker’s well-written book is part science, part sociology (a study of the extrascientific forces that helped solidify the orthodoxy), and part drama (a story of the ideas and often vivid personalities of some dissenters and the shabby treatment they have often received).

The publisher’s blurb breathlessly promises “the untold story of the heretical thinkers who dared to question the nature of our quantum universe” and a “gripping story of this battle of ideas and the courageous scientists who dared to stand up for truth.” But What Is Real? doesn’t live down to that lurid black-and-white logline. It does make a heartfelt and persuasive case that serious problems with the foundations of quantum mechanics have been persistently, even disgracefully, swept under the carpet. (...)

At the end of the nineteenth century, fundamental physics modeled the constituents of the world as particles (discrete lumps of stuff localized in space) and fields (gravity and electromagnetism, continuous and spread throughout space). Particles traveled through the fields, interacting with them and with each other. Light was a wave rippling through the electromagnetic field.

Quantum mechanics arose when certain puzzling phenomena seemed explicable only by supposing that light, firmly established by Maxwell’s theory of electromagnetism as a wave, was acting as if composed of particles. French physicist Louis de Broglie then postulated that all the things believed to be particles could at times behave like waves.

Consider the famous “double-slit” experiment. The experimental apparatus consists of a device that sends electrons, one at a time, toward a barrier with a slit in it and, at some distance behind the barrier, a screen that glows wherever an electron strikes it. The journey of each electron can be usefully thought of in two parts. In the first, the electron either hits the barrier and stops, or it passes through the slit. In the second, if the electron does pass through the slit, it continues on to the screen. The flashes seen on the screen line up with the gun and slit, just as we’d expect from a particle fired like a bullet from the electron gun.

But if we now cut another slit in the barrier, it turns out that its mere existence somehow affects the second part of an electron’s journey. The screen lights up in unexpected places, not always lined up with either of the slits — as if, on reaching one slit, an electron checks whether it had the option of going through the other one and, if so, acquires permission to go anywhere it likes. Well, not quite anywhere: Although we can’t predict where any particular shot will strike the screen, we can statistically predict the overall results of many shots. Their accumulation produces a pattern that looks like the pattern formed by two waves meeting on the surface of a pond. Waves interfere with one another: When two crests or two troughs meet, they reinforce by making a taller crest or deeper trough; when a crest meets a trough, they cancel and leave the surface undisturbed. In the pattern that accumulates on the screen, bright places correspond to reinforcement, dim places to cancellation.

We rethink. Perhaps, taking the pattern as a clue, an electron is really like a wave, a ripple in some field. When the electron wave reaches the barrier, part of it passes through one slit, part through the other, and the pattern we see results from their interference.
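
In standard textbook notation (an addition for clarity, not the article's own), the wave picture says the probability of a flash at position x on the screen comes from adding the two amplitudes before squaring:

$$P(x) \;=\; \bigl|\psi_1(x) + \psi_2(x)\bigr|^{2} \;=\; |\psi_1(x)|^{2} + |\psi_2(x)|^{2} + 2\,\operatorname{Re}\!\bigl[\psi_1^{*}(x)\,\psi_2(x)\bigr]$$

The final cross term is the interference: positive where the screen is bright, negative where it is dim. It is also the term that vanishes once the slit monitors described below are in place.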

There’s an obvious problem: Maybe a stream of electrons can act like a wave (as a stream of water molecules makes up a water wave), but our apparatus sends electrons one at a time. The electron-as-wave model thus requires that firing a single electron causes something to pass through both slits. To check that, we place beside each slit a monitor that will signal when it sees something pass. What we find on firing the gun is that one monitor or the other may signal, but never both; a single electron doesn’t go through both slits. Even worse, when the monitors are in place, no interference pattern forms on the screen. This attempt to observe directly how the pattern arose eliminates what we’re trying to explain. We have to rethink again.

At which point Copenhagen says: Stop! This is puzzling enough without creating unnecessary difficulties. All we actually observe is where an electron strikes the screen — or, if the monitors have been installed, which slit it passes through. If we insist on a theory that accounts for the electron’s journey — the purely hypothetical track of locations it passes through on the way to where it’s actually seen — that theory will be forced to account for where it is when we’re not looking. Pascual Jordan, an important member of Bohr’s circle, cut the Gordian knot: An electron does not have a position until it is observed; the observation is what compels it to assume one. Quantum mechanics makes statistical predictions about where it is more or less likely to be observed.

That move eliminates some awkward questions but sounds uncomfortably like an old joke: The patient lifts his arm and says, “Doc, it hurts when I do this.” The doctor responds, “So don’t do that.” But Jordan’s assertion was not gratuitous. The best available theory did not make it possible to refer to the current location of an unobserved electron, yet that did not prevent it from explaining experimental data or making accurate and testable predictions. Further, there seemed to be no obvious way to incorporate such references, and it was widely believed that it would be impossible to do so (about which more later). It seemed natural, if not quite logically obligatory, to take the leap of asserting that there is no such thing as the location of an electron that is not being observed. For many, this hardened into dogma — that quantum mechanics was a complete and final theory, and attempts to incorporate allegedly missing information were dangerously wrongheaded.

But what is an observation, and what gives it such magical power that it can force a particle to have a location? Is there something special about an observation that distinguishes it from any other physical interaction? Does an observation require an observer? (If so, what was the universe doing before we showed up to observe it?) This constellation of puzzles has come to be called “the measurement problem.”

Bohr postulated a distinction between the quantum world and the world of everyday objects. A “classical” object is an object of everyday experience. It has, for example, a definite position and momentum, whether observed or not. A “quantum” object, such as an electron, has a different status; it’s an abstraction. Some properties, such as electrical charge, belong to the electron abstraction intrinsically, but others can be said to exist only when they are measured or observed. An observation is an event that occurs when the two worlds interact: A quantum-mechanical measurement takes place at the boundary, when a (very small) quantum object interacts with a (much larger) classical object such as a measuring device in a lab.

Experiments have steadily pushed the boundary outward, having demonstrated the double-slit experiment not only with photons and electrons, but also with atoms and even with large molecules consisting of hundreds of atoms, thus millions of times more massive than electrons. Why shouldn’t the same laws of physics apply even to large, classical objects?

Enter Schrödinger’s cat...

by David Guaspari, The New Atlantis | Read more:
Image: Shutterstock

Tuesday, October 15, 2019

Tom Petty & The Heartbreakers

Scientists’ Declaration of Support for Non-Violent Direct Action Against Government Inaction Over the Climate and Ecological Emergency

THIS DECLARATION SETS OUT THE CURRENT SCIENTIFIC CONSENSUS CONCERNING THE CLIMATE AND ECOLOGICAL EMERGENCY AND HIGHLIGHTS THE NECESSITY FOR URGENT ACTION TO PREVENT FURTHER AND IRREVERSIBLE DAMAGE TO THE HABITABILITY OF OUR PLANET.

As scientists, we have dedicated our lives to the study and understanding of the world and our place in it. We declare that scientific evidence shows beyond any reasonable doubt that human-caused changes to the Earth’s land, sea and air are severely threatening the habitability of our planet. We further declare that overwhelming evidence shows that if global greenhouse gas emissions are not brought rapidly down to net zero and biodiversity loss is not halted, we risk catastrophic and irreversible damage to our planetary life-support systems, causing incalculable human suffering and many deaths.

We note that despite the scientific community first sounding the alarm on human-caused global warming more than four decades ago, no action taken by governments thus far has been sufficient to halt the steep rise in greenhouse gas emissions, nor address the ever-worsening loss of biodiversity. Therefore, we call for immediate and decisive action by governments worldwide to rapidly reduce global greenhouse gas emissions to net zero, to prevent further biodiversity loss, and to repair, to the fullest extent possible, the damage that has already been done. We further call upon governments to provide particular support to those who will be most affected by climate change and by the required transition to a sustainable economy.

As scientists, we have an obligation that extends beyond merely describing and understanding the natural world to taking an active part in helping to protect it. We note that the scientific community has already tried all conventional methods to draw attention to the crisis. We believe that the continued governmental inaction over the climate and ecological crisis now justifies peaceful and nonviolent protest and direct action, even if this goes beyond the bounds of the current law.

We therefore support those who are rising up peacefully against governments around the world that are failing to act proportionately to the scale of the crisis.

We believe it is our moral duty to act now, and we urge other scientists to join us in helping to protect humanity’s only home.

To show your support, please add your name to the list below and share with your colleagues. If you’d like to join us at the International Rebellion in London from October 7th (full list of global October Rebellions here), or to find out more, please join our Scientists for Extinction Rebellion Facebook group or email scientistsforxr@protonmail.com.

Signatories:

Signatures are invited from individuals holding a Master's Degree, or holding or studying for a Doctorate, in a field directly related to the sciences. Or those working in a scientific field. Please make explicitly clear if your research field is directly relevant to the climate and/or ecological emergencies. Please note: the views of individuals signing this document do not necessarily represent those of the university or organisation they work for.

[ed. List of signatories]

via: Google Docs
[ed. See also: Land Without Bread (The Baffler).]

Driverless Cars Are Stuck in a Jam

Few ideas have enthused technologists as much as the self-driving car. Advances in machine learning, a subfield of artificial intelligence (AI), would enable cars to teach themselves to drive by drawing on reams of data from the real world. The more they drove, the more data they would collect, and the better they would become. Robotaxis summoned with the flick of an app would make car ownership obsolete. Best of all, reflexes operating at the speed of electronics would drastically improve safety. Car- and tech-industry bosses talked of a world of “zero crashes”.

And the technology was just around the corner. In 2015 Elon Musk, Tesla’s boss, predicted his cars would be capable of “complete autonomy” by 2017. Mr Musk is famous for missing his own deadlines. But he is not alone. General Motors said in 2018 that it would launch a fleet of cars without steering wheels or pedals in 2019; in June it changed its mind. Waymo, the Alphabet subsidiary widely seen as the industry leader, committed itself to launching a driverless-taxi service in Phoenix, where it has been testing its cars, at the end of 2018. The plan has been a damp squib. Only part of the city is covered; only approved users can take part. Phoenix’s wide, sun-soaked streets are some of the easiest to drive on anywhere in the world; even so, Waymo’s cars have human safety drivers behind the wheel, just in case.

Jim Hackett, the boss of Ford, acknowledges that the industry “overestimated the arrival of autonomous vehicles”. Chris Urmson, a linchpin in Alphabet’s self-driving efforts (he left in 2016), used to hope his young son would never need a driving licence. Mr Urmson now talks of self-driving cars appearing gradually over the next 30 to 50 years. Firms are increasingly switching to a more incremental approach, building on technologies such as lane-keeping or automatic parking. A string of fatalities involving self-driving cars have scotched the idea that a zero-crash world is anywhere close. Markets are starting to catch on. In September Morgan Stanley, a bank, cut its valuation of Waymo by 40%, to $105bn, citing delays in its technology.

The future, in other words, is stuck in traffic. Partly that reflects the tech industry’s predilection for grandiose promises. But self-driving cars were also meant to be a flagship for the power of AI. Their struggles offer valuable lessons in the limits of the world’s trendiest technology.

Hit the brakes

One is that, for all the advances in machine learning, machines are still not very good at learning. Most humans need a few dozen hours to master driving. Waymo’s cars have had over 10m miles of practice, and still fall short. And once humans have learned to drive, even on the easy streets of Phoenix, they can, with a little effort, apply that knowledge anywhere, rapidly learning to adapt their skills to rush-hour Bangkok or a gravel track in rural Greece. Computers are less flexible. AI researchers have expended much brow-sweat searching for techniques to help them match the quick-fire learning displayed by humans. So far, they have not succeeded.

Another lesson is that machine-learning systems are brittle. Learning solely from existing data means they struggle with situations that they have never seen before. Humans can use general knowledge and on-the-fly reasoning to react to things that are new to them—a light aircraft landing on a busy road, for instance, as happened in Washington state in August (thanks to humans’ cognitive flexibility, no one was hurt). Autonomous-car researchers call these unusual situations “edge cases”. Driving is full of them, though most are less dramatic. Mishandled edge cases seem to have been a factor in at least some of the deaths caused by autonomous cars to date. The problem is so hard that some firms, particularly in China, think it may be easier to re-engineer entire cities to support limited self-driving than to build fully autonomous cars.

by The Economist |  Read more:
Image: uncredited

The Millennial Urban Lifestyle Is About to Get More Expensive

Several weeks ago, I met up with a friend in New York who suggested we grab a bite at a Scottish bar in the West Village. He had booked the table through something called Seated, a restaurant app that pays users who make reservations on the platform. We ordered two cocktails each, along with some food. And in exchange for the hard labor of drinking whiskey, the app awarded us $30 in credits redeemable at a variety of retailers.

I am never offended by freebies. But this arrangement seemed almost obscenely generous. To throw cash at people every time they walk into a restaurant does not sound like a business. It sounds like a plot to lose money as fast as possible—or to provide New Yorkers, who are constantly dining out, with a kind of minimum basic income.

“How does this thing make any sense?” I asked my friend.

"I don't know if it makes sense, and I don't know how long it's going to last," he said, pausing to scroll through redemption options. "So, do you want your half in Amazon credits or Starbucks?"

I don't know if it makes sense, and I don't know how long it's going to last. Is there a better epitaph for this age of consumer technology?

Starting about a decade ago, a fleet of well-known start-ups promised to change the way we work, work out, eat, shop, cook, commute, and sleep. These lifestyle-adjustment companies were so influential that wannabe entrepreneurs saw them as a template, flooding Silicon Valley with “Uber for X” pitches.

But as their promises soared, their profits didn't. It's easy to spend all day riding unicorns whose most magical property is their ability to combine high valuations with persistently negative earnings—something I've pointed out before. If you wake up on a Casper mattress, work out with a Peloton before breakfast, Uber to your desk at a WeWork, order DoorDash for lunch, take a Lyft home, and get dinner through Postmates, you've interacted with seven companies that will collectively lose nearly $14 billion this year. If you use Lime scooters to bop around the city, download Wag to walk your dog, and sign up for Blue Apron to make a meal, that's three more brands that have never recorded a dime in earnings, or have seen their valuations fall by more than 50 percent.

These companies don’t give away cold hard cash as blatantly as Seated. But they’re not so different from the restaurant app. To maximize customer growth they have strategically—or at least “strategically”—throttled their prices, in effect providing a massive consumer subsidy. You might call it the Millennial Lifestyle Sponsorship, in which consumer tech companies, along with their venture-capital backers, help fund the daily habits of their disproportionately young and urban user base. With each Uber ride, WeWork membership, and hand-delivered dinner, the typical consumer has been getting a sweetheart deal.

For consumers—if not for many beleaguered contract workers—the MLS is a magnificent deal, a capital-to-labor transfer of wealth in pursuit of long-term profit; the sort of thing that might simultaneously please Bernie Sanders and the ghost of Milton Friedman.

But this was never going to last forever. WeWork’s disastrous IPO attempt has triggered reverberations across the industry. The theme of consumer tech has shifted from magic to margins. Venture capitalists and start-up founders alike have re-embraced an old mantra: Profits matter.

And higher profits can only mean one thing: Urban lifestyles are about to get more expensive.

by Derek Thompson, The Atlantic |  Read more:
Image: Carlos Jasso/Reuters 

How the SoftBank Scheme Rips Open the Startup Bubble

The biggest force behind the startup bubble in the United States has been SoftBank Group, the Japanese publicly traded conglomerate. It has been the biggest force in driving up valuations of money-losing cash-burn machines to absurd levels. It has been the biggest force in flooding Silicon Valley, San Francisco, and many other startup hot spots with a tsunami of money from around the world — money that it borrowed, and money that other large investors committed to SoftBank’s investment funds to ride on its coattails. But the scheme has run into trouble, and a lot is at stake.

The thing is, SoftBank Group has nearly $100 billion in debt on a consolidated basis as a result of its aggressive acquisition binge in Japan, the US, and elsewhere. This includes permanently broke Sprint Nextel, which is now trying to merge with T-Mobile. It includes British chip designer ARM that it acquired in 2016 for over $32 billion, its largest acquisition ever. It includes Fortress Investment Group that it acquired in 2017 for $3.3 billion. In August 2017, it acquired a 21% stake in India's largest e-commerce company Flipkart for $2.5 billion that it sold to Walmart less than a year later for what was said to be a 60% profit. And on and on.

In May 2017, SoftBank partnered with Saudi Arabia's Public Investment Fund to create the Vision Fund, which has obtained $97 billion in funding – well, not actual funding, some actual funding and a lot of promised funding, which made it the largest private venture capital fund ever.

Saudi Public Investment Fund promised to contribute $45 billion over the next few years. SoftBank promised to contribute $28 billion. Abu Dhabi’s Mubadala Investment promised to contribute $15 billion. Apple, Qualcomm, Foxconn, Sharp, and others also promised to contribute smaller amounts.

Over the past two years, the Vision Fund has invested in over 80 companies, including WeWork, Uber, and Slack.

But the Vision Fund needs cash on a constant basis because some of its investors receive interest payments of 7% annually on their investments in the fund. Yeah, that’s unusual, but hey, there is a lot of unusual stuff going on with SoftBank. (...)

SoftBank uses a leverage ratio that is based on the inflated “valuations” of its many investments that are not publicly traded, such as WeWork, into which SoftBank and the Vision Fund have plowed $10 billion. WeWork’s “valuation” is still $47 billion, though in reality, the company is now fighting for sheer survival, and no one has any idea what the company might be worth. Its entire business model has turned out to be just a magnificent cash-burn machine.

But SoftBank and the Vision Fund have already booked the gains from WeWork’s ascent to that $47 billion valuation.

How did they get to these gains?

In 2016, investors poured more money into WeWork by buying shares at a price that gave WeWork a valuation of $17 billion. These deals are negotiated behind closed doors and purposefully leaked to the financial press for effect.

In March 2017, SoftBank invested $300 million. In July 2017, WeWork raised another $760 million, now at a valuation of $20 billion. In July 2018, WeWork obtained $3 billion in funding from SoftBank. In January 2019, SoftBank invested another $2 billion in WeWork, now at a valuation that had been pumped up to $47 billion.

With this $2 billion investment at a valuation of $47 billion, SoftBank pushed all its prior investments up to the same share price, and thus booked a huge gain, more than doubling the value of its prior investments.
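
A stylized example of that mark-up arithmetic, with invented round numbers (it ignores dilution and the real cap table; only the mark-to-latest-round mechanism mirrors the article):

```python
# Invented numbers for illustration of marking prior stakes to the latest round's price.
prior_invested = 8.0     # $bn invested across earlier rounds
prior_valuation = 20.0   # $bn valuation at which those stakes were bought
new_round = 2.0          # $bn invested in the latest round
new_valuation = 47.0     # $bn valuation implied by the latest round's share price

prior_stake = prior_invested / prior_valuation   # rough ownership fraction: 0.40
marked_up_value = prior_stake * new_valuation    # ~18.8 at the new per-share price

paper_gain = marked_up_value - prior_invested    # ~10.8 booked without selling anything
print(f"paper gain on prior stakes: ${paper_gain:.1f}bn")
```

Nothing has been sold; the gain exists only because the newest, most expensive shares reset the price used to value all the older ones.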

Now, I wasn’t in the room when this deal was hashed out. But I can imagine what it sounded like, with SoftBank saying:

We want to more than double the value of our prior investments, and we want to pay the maximum possible per share now, in order to book this huge gain on our prior investments, which will make us look like geniuses, and will allow us to start Vision Fund 2, and it will get the Saudis, which also picked up a huge gain, to increase their confidence in us and invest tens of billions of dollars in our Vision Fund 2.

In these investment rounds, the intent is not to buy low in order to sell high. The intent is to buy high and higher at each successive round. This makes everyone look good on paper. And they can all book gains. And these higher valuations beget hype, and hype begets the money via an IPO to bail out those investors.

By this method, SoftBank has driven up the “value” of its investments, which drives down its loan-to-value ratio. But S&P and Moody’s caught on to it, and now the market too – as demonstrated by the scuttled WeWork IPO – is catching up with SoftBank.

by Wolf Richter, Wolf Street |  Read more:
Image: Issei Kato/Reuters via

Printing Electronics Directly on Delicate Surfaces


Printing Electronics Directly on Delicate Surfaces—Like the Back of Your Hand (IEEE Spectrum). The gentle, low-temperature technique prints electric tattoos on skin and transistors on paper.
Image: Aaron Franklin/Duke University
[ed. See also: Flexible Wearable Reverses Baldness With Gentle Electric Pulses (IEEE Spectrum).]

Harold Bloom, Critic Who Championed Western Canon, Dies at 89

Harold Bloom, the prodigious literary critic who championed and defended the Western canon in an outpouring of influential books that appeared not only on college syllabuses but also — unusual for an academic — on best-seller lists, died on Monday at a hospital in New Haven. He was 89.

His death was confirmed by his wife, Jeanne Bloom, who said he taught his last class at Yale University on Thursday.

Professor Bloom was frequently called the most notorious literary critic in America. From a vaunted perch at Yale, he flew in the face of almost every trend in the literary criticism of his day. Chiefly he argued for the literary superiority of the Western giants like Shakespeare, Chaucer and Kafka — all of them white and male, his own critics pointed out — over writers favored by what he called “the School of Resentment,” by which he meant multiculturalists, feminists, Marxists, neoconservatives and others whom he saw as betraying literature’s essential purpose.

“He is, by any reckoning, one of the most stimulating literary presences of the last half-century — and the most protean,” Sam Tanenhaus wrote in 2011 in The New York Times Book Review, of which he was the editor at the time, “a singular breed of scholar-teacher-critic-prose-poet-pamphleteer.”

At the heart of Professor Bloom’s writing was a passionate love of literature and a relish for its heroic figures.

“Shakespeare is God,” he declared, and Shakespeare’s characters, he said, are as real as people and have shaped Western perceptions of what it is to be human — a view he propounded in the acclaimed “Shakespeare: The Invention of the Human” (1998). (...)

Gorging on Words

Professor Bloom called himself “a monster” of reading; he said he could read, and absorb, a 400-page book in an hour. His friend Richard Bernstein, a professor of philosophy at the New School, told a reporter that watching Professor Bloom read was “scary.”

Armed with a photographic memory, Professor Bloom could recite acres of poetry by heart — by his account, the whole of Shakespeare, Milton’s “Paradise Lost,” all of William Blake, the Hebraic Bible and Edmund Spenser’s monumental “The Faerie Queene.” He relished epigraphs, gnomic remarks and unusual words: kenosis (emptying), tessera (completing), askesis (diminishing) and clinamen (swerving). (...)

Like Dr. Johnson’s, his output was vast: more than 40 books of his own authorship and hundreds of volumes he edited. And he remained prolific to the end, publishing two books in 2017, two in 2018 and two this year: “Macbeth: A Dagger of the Mind” and “Possessed by Memory: The Inward Light of Criticism.” His final book is to be released on an unspecified date by Yale University Press, his wife said.

Perhaps Professor Bloom’s most influential work was one that discussed literary influence itself. The book, “The Anxiety of Influence,” published in 1973 and eventually in some 45 languages, borrows from Freudian theory in envisioning literary creation as an epochal, and Oedipal, struggle in which the young artist rebels against preceding traditions, seeking that burst of originality that distinguishes greatness. (...)

Professor Bloom crossed swords with other critical perspectives in “The Western Canon.” The eminent critic Frank Kermode, identifying those whom Professor Bloom saw as his antagonists, wrote in The London Review of Books, “He has in mind all who profess to regard the canon as an instrument of cultural, hence political, hegemony — as a subtle fraud devised by dead white males to reinforce ethnic and sexist oppression.”

Professor Bloom insisted that a literary work is not a social document — is not to be read for its political or historical content — but is to be enjoyed above all for the aesthetic pleasure it brings. “Bloom isn’t asking us to worship the great books,” the writer Adam Begley wrote in The New York Times Magazine in 1994. “He asks instead that we prize the astonishing mystery of creative genius.”

Professor Bloom himself said that “the canonical quality comes out of strangeness, comes out of the idiosyncratic, comes out of originality.” Mr. Begley noted further, “The canon, Bloom believes, answers an unavoidable question: What, in the little time we have, shall we read?”

“You must choose,” Professor Bloom himself wrote in “The Western Canon.” “Either there were aesthetic values or there are only the overdeterminations of race, class and gender.”

by Dinitia Smith, NY Times | Read more:
Image: Jim Wilson/The New York Times

Five Reasons the Diet Soda Myth Won’t Die

There’s a decent chance you’ll be reading about diet soda studies until the day you die. (The odds are exceedingly good it won’t be the soda that kills you.)

The latest batch of news reports came last month, based on another study linking diet soda to an increased risk of early death.

As usual, the study (and some of the articles) lacked some important context and caused more worry than was warranted. There are specific reasons that this cycle is unlikely to end.

1. If it’s artificial, it must be bad.

People suspect, and not always incorrectly, that putting things created in a lab into their bodies cannot be good. People worry about genetically modified organisms, and monosodium glutamate and, yes, artificial sweeteners because they sound scary.

But everything is a chemical, including dihydrogen monoxide (that’s another way of saying water). These are just words we use to describe ingredients. Some ingredients occur naturally, and some are coaxed into existence. That doesn’t inherently make one better than another. In fact, I’ve argued that research supports consuming artificial sweeteners over added sugars. (The latest study concludes the opposite.)

2. Soda is an easy target

In a health-conscious era, soda has become almost stigmatized in some circles (and sales have fallen as a result).

It’s true that no one “needs” soda. There are a million varieties, and almost none taste like anything in nature. Some, like Dr Pepper, defy description.

But there are many things we eat and drink that we don’t “need.” We don’t need ice cream or pie, but for a lot of people, life would be less enjoyable without those things.

None of this should be taken as a license to drink cases of soda a week. A lack of evidence of danger at normal amounts doesn't mean that consuming any one thing in huge amounts is a good idea. Moderation still matters.

3. Scientists need to publish to keep their jobs

I’m a professor on the research tenure track, and I’m here to tell you that the coin of the realm is grants and papers. You need funding to survive, and you need to publish to get funding.

As a junior faculty member, or even as a doctoral student or postdoctoral fellow, you need to publish research. Often, the easiest step is to take a large data set and publish an analysis from it showing a correlation between some factor and some outcome.

This kind of research is rampant. That’s how we hear year after year that everyone is dehydrated and we need to drink more water. It’s how we hear that coffee is affecting health in this way or that. It’s how we wind up with a lot of nutritional studies that find associations in one way or another.

As long as the culture of science demands output as the measure of success, these studies will appear. And given that the news media also needs to publish to survive — if you didn’t know, people love to read about food and health — we’ll continue to read stories about how diet soda will kill us.

by Aaron E. Carroll, NY Times | Read more:
Image: Wilfredo Lee

Sunday, October 13, 2019


Daniel O'Shane, “Aib ene zogo ni pat (Aib & the sacred waterhole)”
via:

Tom Gauld
via: