Thursday, October 17, 2019

Crash Course

How Boeing's Managerial Revolution Created the 737 MAX Disaster.

Nearly two decades before Boeing’s MCAS system crashed two of the plane-maker’s brand-new 737 MAX jets, Stan Sorscher knew his company’s increasingly toxic mode of operating would create a disaster of some kind. A long and proud “safety culture” was rapidly being replaced, he argued, with “a culture of financial bullshit, a culture of groupthink.”


Sorscher, a physicist who’d worked at Boeing for more than two decades and had led negotiations there for the engineers’ union, had become obsessed with management culture. He said he hadn’t previously imagined Boeing’s brave new managerial caste creating a problem as dumb and glaringly obvious as MCAS (or the Maneuvering Characteristics Augmentation System, as a handful of software wizards had dubbed it). Mostly he worried about shriveling market share driving sales and head count into the ground, the things that keep post-industrial American labor leaders up at night. On some level, though, he saw it all coming; in a 2002 report that no one read, he even demonstrated how the costs of a grounded plane would dwarf the short-term savings achieved from the latest outsourcing binge.*

Sorscher had spent the early aughts campaigning to preserve the company’s estimable engineering legacy. He had mountains of evidence to support his position, mostly acquired via Boeing’s 1997 acquisition of McDonnell Douglas, a dysfunctional firm with a dilapidated aircraft plant in Long Beach and a CEO who liked to use what he called the “Hollywood model” for dealing with engineers: Hire them for a few months when project deadlines are nigh, fire them when you need to make numbers. In 2000, Boeing’s engineers staged a 40-day strike over the McDonnell deal’s fallout; while they won major material concessions from management, they lost the culture war. They also inherited a notoriously dysfunctional product line from the corner-cutting market gurus at McDonnell.


And while Boeing’s engineers toiled to get McDonnell’s lemon planes into the sky, their own hopes of designing a new plane to compete with Airbus, Boeing’s only global market rival, were shriveling. Under the sway of all the naysayers who had called out the folly of the McDonnell deal, the board had adopted a hard-line “never again” posture toward ambitious new planes. Boeing’s leaders began crying “crocodile tears,” Sorscher claimed, about the development costs of 1995’s 777, even though some industry insiders estimate that it became the most profitable plane of all time. The premise behind this complaining was silly, Sorscher contended in PowerPoint presentations and a Harvard Business School-style case study on the topic. A return to the “problem-solving” culture and managerial structure of yore, he explained over and over again to anyone who would listen, was the only sensible way to generate shareholder value. But when he brought that message on the road, he rarely elicited much more than an eye roll. “I’m not buying it,” was a common response. Occasionally, though, someone in the audience was outright mean, like the Wall Street analyst who cut him off mid-sentence:


“Look, I get it. What you’re telling me is that your business is different. That you’re special. Well, listen: Everybody thinks his business is different, because everybody is the same. Nobody. Is. Different.”


And indeed, that would appear to be the real moral of this story: Airplane manufacturing is no different from mortgage lending or insulin distribution or make-believe blood analyzing software—another cash cow for the one percent, bound inexorably for the slaughterhouse. In the now infamous debacle of the Boeing 737 MAX, the company produced a plane outfitted with a half-assed bit of software programmed to override all pilot input and nosedive when a little vane on the side of the fuselage told it the nose was pitching up. The vane was also not terribly reliable, possibly due to assembly line lapses reported by a whistle-blower, and when the plane processed the bad data it received, it promptly dove into the sea.


It is understood, now more than ever, that capitalism does half-assed things like that, especially in concert with computer software and oblivious regulators: AIG famously told investors it was hard for management to contemplate “a scenario within any kind of realm of reason that would see us losing one dollar in any of those transactions” that would, a few months later, lose the firm well over $100 billion—but hey, the risk management algorithms had been wrong. A couple of years later, a single JP Morgan trader lost $6 billion because someone had programmed one of the cells in the bank’s risk management spreadsheet to divide two numbers by their sum instead of their average. Boeing was not, of course, a hedge fund: It was way better, a stock that had more than doubled since the Trump inauguration, outperforming the Dow in the 22 months before Lion Air 610 plunged into the Java Sea.
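A toy calculation makes the scale of that spreadsheet mistake concrete. The rates below are invented; only the divide-by-the-sum-instead-of-the-average error comes from published accounts of the incident.

```python
# Hypothetical interest rates on two dates; the point is the divisor, not the values.
old_rate, new_rate = 0.042, 0.048
change = new_rate - old_rate

correct = change / ((old_rate + new_rate) / 2)  # divide by the average, as intended
buggy = change / (old_rate + new_rate)          # divide by the sum, as the spreadsheet did

print(f"correct: {correct:.4f}  buggy: {buggy:.4f}  ratio: {buggy / correct:.2f}")
# ratio is 0.50: the buggy figure is exactly half the correct one, so the
# volatility estimate (and the value-at-risk built on top of it) is understated.
```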


And so there was something unsettlingly familiar when the world first learned of MCAS in November, about two weeks after the system’s unthinkable stupidity drove the two-month-old plane and all 189 people on it to a horrific death. It smacked of the sort of screwup a 23-year-old intern might have made—and indeed, much of the software on the MAX had been engineered by recent grads of Indian software-coding academies making as little as $9 an hour, part of Boeing management’s endless war on the unions that once represented more than half its employees. Down in South Carolina, a nonunion Boeing assembly line that opened in 2011 had for years churned out scores of whistle-blower complaints and wrongful termination lawsuits packed with scenes wherein quality-control documents were regularly forged, employees who enforced standards were sabotaged, and planes were routinely delivered to airlines with loose screws, scratched windows, and random debris everywhere. The MCAS crash was just the latest installment in a broader pattern so thoroughly ingrained in the business news cycle that the muckraking finance blog Naked Capitalism titled its first post about MCAS “Boeing, Crapification and the Lion Air Crash.”


But not everyone viewed the crash with such a jaundiced eye—it was, after all, the world’s first self-hijacking plane. Pilots were particularly stunned, because MCAS had been a big secret, largely kept from Boeing’s own test pilots, mentioned only once in the glossary of the plane’s 1,600-page manual, left entirely out of the 56-minute iPad refresher course that some 737-certified pilots took for MAX certification, and—in a last-minute edit—removed from the November 7 emergency airworthiness directive the Federal Aviation Administration had issued two weeks after the Lion Air crash, ostensibly to “remind” pilots of the protocol for responding to a “runaway stabilizer.” Most pilots first heard about MCAS from their unions, which had in turn gotten wind of the software from a supplementary bulletin Boeing sent airlines to accompany the airworthiness directive. Outraged, they took to message boards, and a few called veteran aerospace reporters like The Seattle Times’ Dominic Gates, The Wall Street Journal’s Andy Pasztor, and Sean Broderick at Aviation Week—who in turn interviewed engineers who seemed equally shocked. Other pilots, like Ethiopian Airlines instructor Bernd Kai von Hoesslin, vented to their own corporate management, pleading for more resources to train people on the scary new planes—just weeks before von Hoesslin’s carrier would suffer its own MAX-engineered mass tragedy.
 (...)

Simulator training for Southwest’s 9,000 pilots would have been a pain, but hardly ruinous; aviation industry analyst Kit Darby said it would cost about $2,000 a head, or roughly $18 million in all. It was also unlikely: The FAA had three levels of “differences” training that wouldn’t have necessarily required simulators. But the No Sim Edict would haunt the program; it basically required any change significant enough for designers to worry about to be concealed, suppressed, or relegated to a footnote that would then be redacted from the final version of the manual. And that was a predicament, because for every other airline buying the MAX, the selling point was a major difference from the last generation of 737: unprecedented fuel efficiency in line with the new Airbus A320neo.


The MAX and the Neo derived their fuel efficiency from the same source: massive “LEAP” engines manufactured by CFM, a 50-50 joint venture of GE and the French conglomerate Safran. The engines’ fans were 20 inches—or just over 40 percent—larger in diameter than the original 737 Pratt & Whitneys, and the engines themselves weighed in at approximately 6,120 pounds, about twice the weight of the original engines. The planes were also considerably longer, heavier, and wider of wingspan. What they couldn’t be, without redesigning the landing gear and really jeopardizing the grandfathered FAA certification, was taller, and that was a problem. The engines were too big to tuck into their original spot underneath the wings, so engineers mounted them slightly forward, just in front of the wings.


This alteration created a shift in the plane’s center of gravity pronounced enough that it raised a red flag when the MAX was still just a model plane about the size of an eagle, running tests in a wind tunnel. The model kept botching certain extreme maneuvers, because the plane’s new aerodynamic profile was dragging its tail down and causing its nose to pitch up. So the engineers devised a software fix called MCAS, which pushed the nose down in response to an obscure set of circumstances in conjunction with the “speed trim system,” which Boeing had devised in the 1980s to smooth takeoffs.

Once the 737 MAX materialized as a real-life plane about four years later, however, test pilots discovered new realms in which the plane was more stall-prone than its predecessors. So Boeing modified MCAS to turn down the nose of the plane whenever an angle-of-attack (AOA) sensor detected a stall, regardless of the speed. That involved giving the system more power and removing a safeguard, but not, in any formal or genuine way, running its modifications by the FAA, which might have had reservations about two critical traits of the revamped system. First, there are two AOA sensors on a 737, but only one, fatefully, was programmed to trigger MCAS. The former Boeing engineer Ludtke and an anonymous whistle-blower interviewed by 60 Minutes Australia both have a simple explanation for this: Any program coded to take data from both sensors would have had to account for the possibility that the sensors might disagree with each other and devise a contingency for reconciling the mixed signals. Whatever that contingency, it would have involved some kind of cockpit alert, which would in turn have required additional training—probably not level-D training, but no one wanted to risk that. So the system was programmed to turn the nose down at the feedback of a single (and somewhat flimsy) sensor. And second, for still unknown and truly mysterious reasons, it was programmed to nosedive again five seconds later, and again five seconds after that, over and over ad literal nauseam.
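To make the single-sensor design concrete, here is a deliberately crude sketch of the failure mode described above. This is not Boeing's code; the trigger threshold, trim increment, and cycle count are invented for illustration.

```python
AOA_TRIGGER_DEG = 15.0     # hypothetical angle-of-attack threshold
TRIM_INCREMENT_DEG = 2.5   # hypothetical nose-down trim applied per activation

def mcas_cycle(aoa_sensor_one_deg: float) -> float:
    """Return the nose-down trim commanded on one activation cycle.

    Only one of the two AOA sensors is consulted, so a single bad vane is
    enough to fire the system, and nothing here limits how much trim piles up.
    """
    return TRIM_INCREMENT_DEG if aoa_sensor_one_deg > AOA_TRIGGER_DEG else 0.0

# A vane stuck at a nonsense reading re-triggers the system every cycle
# (roughly every five seconds, per the account above):
total_trim = 0.0
for cycle in range(6):
    total_trim += mcas_cycle(21.0)  # stuck sensor keeps reporting 21 degrees
    print(f"cycle {cycle}: cumulative nose-down trim = {total_trim:.1f} degrees")
```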


And then, just for good measure, a Boeing technical pilot emailed the FAA and casually asked that the reference to the software be deleted from the pilot manual.


So no more than a handful of people in the world knew MCAS even existed before it became infamous. Here, a generation after Boeing’s initial lurch into financialization, was the entirely predictable outcome of the byzantine process by which investment capital becomes completely abstracted from basic protocols of production and oversight: a flight-correction system that was essentially jerry-built to crash a plane. “If you’re looking for an example of late stage capitalism or whatever you want to call it,” said longtime aerospace consultant Richard Aboulafia, “it’s a pretty good one.”


by Maureen Tkacik, The New Republic |  Read more:
Image: Getty

Diplomacy for Third Graders


via: here (The Guardian) and here (New Yorker).
[ed. See also: Erdoğan Threw Trump's Insane Letter Right in the Trash (Vanity Fair); and The Madman Has No Clothes (TNR).]

Make Physics Real Again

Why have so many physicists shrugged off the paradoxes of quantum mechanics?

No other scientific theory can match the depth, range, and accuracy of quantum mechanics. It sheds light on deep theoretical questions — such as why matter doesn’t collapse — and abounds with practical applications — transistors, lasers, MRI scans. It has been validated by empirical tests with astonishing precision, comparable to predicting the distance between Los Angeles and New York to within the width of a human hair.
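A rough back-of-the-envelope check on that analogy, using approximate figures (about 4,000 km between the cities, about 0.1 mm for a hair):

```latex
\[
\frac{\text{hair width}}{\text{LA--NY distance}}
\;\approx\; \frac{10^{-4}\,\text{m}}{4\times 10^{6}\,\text{m}}
\;\approx\; 3\times 10^{-11},
\]
```

a relative precision of a few parts in a hundred billion, the same general territory as the agreement between quantum electrodynamics and measurements of the electron's magnetic moment.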

And no other theory is so weird: Light, electrons, and other fundamental constituents of the world sometimes behave as waves, spread out over space, and other times as particles, each localized to a certain place. These models are incompatible, and which one the world seems to reveal will be determined by what question is asked of it. The uncertainty principle says that trying to measure one property of an object more precisely will make measurements of other properties less precise. And the dominant interpretation of quantum mechanics says that those properties don’t even exist until they’re observed — the observation is what brings them about.
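For reference, the standard position-momentum form of the uncertainty principle mentioned here (with ħ the reduced Planck constant):

```latex
\[
\Delta x \,\Delta p \;\ge\; \frac{\hbar}{2}
\]
```

The more sharply a particle's position is pinned down, the larger the unavoidable spread in its momentum, and vice versa.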

“I think I can safely say,” wrote Richard Feynman, one of the subject’s masters, “that nobody understands quantum mechanics.” He went on to add, “Do not keep saying to yourself, if you can possibly avoid it, ‘But how can it be like that?’ because you will get ‘down the drain,’ into a blind alley from which nobody has yet escaped.” Understandably, most working scientists would rather apply their highly successful tools than probe the perplexing question of what those tools mean.

The prevailing answer to that question has been the so-called Copenhagen interpretation, developed in the circle led by Niels Bohr, one of the founders of quantum mechanics. About this orthodoxy N. David Mermin, some intellectual generations removed from Bohr, famously complained, “If I were forced to sum up in one sentence what the Copenhagen interpretation says to me, it would be ‘Shut up and calculate!’” It works. Stop kvetching. Why fix what ain’t broke? Mermin later regretted sounding snotty, but re-emphasized that the question of meaning is important and remains open. The physicist Roderich Tumulka, as quoted in a 2016 interview, is more pugnacious: “Ptolemy’s theory” — of an earth-centered universe — “made perfect sense. It just happened not to be right. But Copenhagen quantum mechanics is incoherent, and thus is not even a reasonable theory to begin with.” This, you will not be surprised to learn, has been disputed.

In What Is Real? the physicist and science writer Adam Becker offers a history of what his subtitle calls “the unfinished quest for the meaning of quantum physics.” Although it is certainly unfinished, it is, as quests go, a few knights short of a Round Table. After the generation of pioneers, foundational work in quantum mechanics became stigmatized as a fringe pursuit, a career killer. So Becker’s well-written book is part science, part sociology (a study of the extrascientific forces that helped solidify the orthodoxy), and part drama (a story of the ideas and often vivid personalities of some dissenters and the shabby treatment they have often received).

The publisher’s blurb breathlessly promises “the untold story of the heretical thinkers who dared to question the nature of our quantum universe” and a “gripping story of this battle of ideas and the courageous scientists who dared to stand up for truth.” But What Is Real? doesn’t live down to that lurid black-and-white logline. It does make a heartfelt and persuasive case that serious problems with the foundations of quantum mechanics have been persistently, even disgracefully, swept under the carpet. (...)

At the end of the nineteenth century, fundamental physics modeled the constituents of the world as particles (discrete lumps of stuff localized in space) and fields (gravity and electromagnetism, continuous and spread throughout space). Particles traveled through the fields, interacting with them and with each other. Light was a wave rippling through the electromagnetic field.

Quantum mechanics arose when certain puzzling phenomena seemed explicable only by supposing that light, firmly established by Maxwell’s theory of electromagnetism as a wave, was acting as if composed of particles. French physicist Louis de Broglie then postulated that all the things believed to be particles could at times behave like waves.

Consider the famous “double-slit” experiment. The experimental apparatus consists of a device that sends electrons, one at a time, toward a barrier with a slit in it and, at some distance behind the barrier, a screen that glows wherever an electron strikes it. The journey of each electron can be usefully thought of in two parts. In the first, the electron either hits the barrier and stops, or it passes through the slit. In the second, if the electron does pass through the slit, it continues on to the screen. The flashes seen on the screen line up with the gun and slit, just as we’d expect from a particle fired like a bullet from the electron gun.

But if we now cut another slit in the barrier, it turns out that its mere existence somehow affects the second part of an electron’s journey. The screen lights up in unexpected places, not always lined up with either of the slits — as if, on reaching one slit, an electron checks whether it had the option of going through the other one and, if so, acquires permission to go anywhere it likes. Well, not quite anywhere: Although we can’t predict where any particular shot will strike the screen, we can statistically predict the overall results of many shots. Their accumulation produces a pattern that looks like the pattern formed by two waves meeting on the surface of a pond. Waves interfere with one another: When two crests or two troughs meet, they reinforce by making a taller crest or deeper trough; when a crest meets a trough, they cancel and leave the surface undisturbed. In the pattern that accumulates on the screen, bright places correspond to reinforcement, dim places to cancellation.

We rethink. Perhaps, taking the pattern as a clue, an electron is really like a wave, a ripple in some field. When the electron wave reaches the barrier, part of it passes through one slit, part through the other, and the pattern we see results from their interference.

There’s an obvious problem: Maybe a stream of electrons can act like a wave (as a stream of water molecules makes up a water wave), but our apparatus sends electrons one at a time. The electron-as-wave model thus requires that firing a single electron causes something to pass through both slits. To check that, we place beside each slit a monitor that will signal when it sees something pass. What we find on firing the gun is that one monitor or the other may signal, but never both; a single electron doesn’t go through both slits. Even worse, when the monitors are in place, no interference pattern forms on the screen. This attempt to observe directly how the pattern arose eliminates what we’re trying to explain. We have to rethink again.
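In the standard quantum formalism (textbook material, not anything specific to Becker's book), both the pattern and its disappearance fall out of how amplitudes add. Writing ψ1(x) and ψ2(x) for the amplitudes to reach a point x on the screen via slit 1 or slit 2:

```latex
\[
P_{\text{no monitors}}(x) \;=\; \lvert \psi_1(x) + \psi_2(x) \rvert^2
\;=\; \lvert \psi_1(x) \rvert^2 + \lvert \psi_2(x) \rvert^2
  + 2\,\mathrm{Re}\!\left[\psi_1^*(x)\,\psi_2(x)\right].
\]
```

The cross term produces the bright and dim bands. Once the monitors record which slit was used, the two alternatives no longer interfere, the cross term drops out, and the screen shows the plain sum of the two single-slit patterns.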

At which point Copenhagen says: Stop! This is puzzling enough without creating unnecessary difficulties. All we actually observe is where an electron strikes the screen — or, if the monitors have been installed, which slit it passes through. If we insist on a theory that accounts for the electron’s journey — the purely hypothetical track of locations it passes through on the way to where it’s actually seen — that theory will be forced to account for where it is when we’re not looking. Pascual Jordan, an important member of Bohr’s circle, cut the Gordian knot: An electron does not have a position until it is observed; the observation is what compels it to assume one. Quantum mechanics makes statistical predictions about where it is more or less likely to be observed.

That move eliminates some awkward questions but sounds uncomfortably like an old joke: The patient lifts his arm and says, “Doc, it hurts when I do this.” The doctor responds, “So don’t do that.” But Jordan’s assertion was not gratuitous. The best available theory did not make it possible to refer to the current location of an unobserved electron, yet that did not prevent it from explaining experimental data or making accurate and testable predictions. Further, there seemed to be no obvious way to incorporate such references, and it was widely believed that it would be impossible to do so (about which more later). It seemed natural, if not quite logically obligatory, to take the leap of asserting that there is no such thing as the location of an electron that is not being observed. For many, this hardened into dogma — that quantum mechanics was a complete and final theory, and attempts to incorporate allegedly missing information were dangerously wrongheaded.

But what is an observation, and what gives it such magical power that it can force a particle to have a location? Is there something special about an observation that distinguishes it from any other physical interaction? Does an observation require an observer? (If so, what was the universe doing before we showed up to observe it?) This constellation of puzzles has come to be called “the measurement problem.”

Bohr postulated a distinction between the quantum world and the world of everyday objects. A “classical” object is an object of everyday experience. It has, for example, a definite position and momentum, whether observed or not. A “quantum” object, such as an electron, has a different status; it’s an abstraction. Some properties, such as electrical charge, belong to the electron abstraction intrinsically, but others can be said to exist only when they are measured or observed. An observation is an event that occurs when the two worlds interact: A quantum-mechanical measurement takes place at the boundary, when a (very small) quantum object interacts with a (much larger) classical object such as a measuring device in a lab.

Experiments have steadily pushed the boundary outward, having demonstrated the double-slit experiment not only with photons and electrons, but also with atoms and even with large molecules consisting of hundreds of atoms, thus millions of times more massive than electrons. Why shouldn’t the same laws of physics apply even to large, classical objects?

Enter Schrödinger’s cat...

by David Guaspari, The New Atlantis | Read more:
Image: Shutterstock

Tuesday, October 15, 2019

Tom Petty & The Heartbreakers

Scientists’ Declaration of Support for Non-Violent Direct Action Against Government Inaction Over the Climate and Ecological Emergency

THIS DECLARATION SETS OUT THE CURRENT SCIENTIFIC CONSENSUS CONCERNING THE CLIMATE AND ECOLOGICAL EMERGENCY AND HIGHLIGHTS THE NECESSITY FOR URGENT ACTION TO PREVENT FURTHER AND IRREVERSIBLE DAMAGE TO THE HABITABILITY OF OUR PLANET.

As scientists, we have dedicated our lives to the study and understanding of the world and our place in it. We declare that scientific evidence shows beyond any reasonable doubt that human-caused changes to the Earth’s land, sea and air are severely threatening the habitability of our planet. We further declare that overwhelming evidence shows that if global greenhouse gas emissions are not brought rapidly down to net zero and biodiversity loss is not halted, we risk catastrophic and irreversible damage to our planetary life-support systems, causing incalculable human suffering and many deaths.

We note that despite the scientific community first sounding the alarm on human-caused global warming more than four decades ago, no action taken by governments thus far has been sufficient to halt the steep rise in greenhouse gas emissions, nor address the ever-worsening loss of biodiversity. Therefore, we call for immediate and decisive action by governments worldwide to rapidly reduce global greenhouse gas emissions to net zero, to prevent further biodiversity loss, and to repair, to the fullest extent possible, the damage that has already been done. We further call upon governments to provide particular support to those who will be most affected by climate change and by the required transition to a sustainable economy.

As scientists, we have an obligation that extends beyond merely describing and understanding the natural world to taking an active part in helping to protect it. We note that the scientific community has already tried all conventional methods to draw attention to the crisis. We believe that the continued governmental inaction over the climate and ecological crisis now justifies peaceful and nonviolent protest and direct action, even if this goes beyond the bounds of the current law.

We therefore support those who are rising up peacefully against governments around the world that are failing to act proportionately to the scale of the crisis.

We believe it is our moral duty to act now, and we urge other scientists to join us in helping to protect humanity’s only home.

To show your support, please add your name to the list below and share with your colleagues. If you’d like to join us at the International Rebellion in London from October 7th (full list of global October Rebellions here), or to find out more, please join our Scientists for Extinction Rebellion Facebook group or email scientistsforxr@protonmail.com.

Signatories:

Signatures are invited from individuals holding a Master's Degree, or holding or studying for a Doctorate, in a field directly related to the sciences. Or those working in a scientific field. Please make explicitly clear if your research field is directly relevant to the climate and/or ecological emergencies. Please note: the views of individuals signing this document do not necessarily represent those of the university or organisation they work for.

[ed. List of signatories]

via: Google Docs
[ed. See also: Land Without Bread (The Baffler).]

Driverless Cars Are Stuck in a Jam

Few ideas have enthused technologists as much as the self-driving car. Advances in machine learning, a subfield of artificial intelligence (AI), would enable cars to teach themselves to drive by drawing on reams of data from the real world. The more they drove, the more data they would collect, and the better they would become. Robotaxis summoned with the flick of an app would make car ownership obsolete. Best of all, reflexes operating at the speed of electronics would drastically improve safety. Car- and tech-industry bosses talked of a world of “zero crashes”.

And the technology was just around the corner. In 2015 Elon Musk, Tesla’s boss, predicted his cars would be capable of “complete autonomy” by 2017. Mr Musk is famous for missing his own deadlines. But he is not alone. General Motors said in 2018 that it would launch a fleet of cars without steering wheels or pedals in 2019; in June it changed its mind. Waymo, the Alphabet subsidiary widely seen as the industry leader, committed itself to launching a driverless-taxi service in Phoenix, where it has been testing its cars, at the end of 2018. The plan has been a damp squib. Only part of the city is covered; only approved users can take part. Phoenix’s wide, sun-soaked streets are some of the easiest to drive on anywhere in the world; even so, Waymo’s cars have human safety drivers behind the wheel, just in case.

Jim Hackett, the boss of Ford, acknowledges that the industry “overestimated the arrival of autonomous vehicles”. Chris Urmson, a linchpin in Alphabet’s self-driving efforts (he left in 2016), used to hope his young son would never need a driving licence. Mr Urmson now talks of self-driving cars appearing gradually over the next 30 to 50 years. Firms are increasingly switching to a more incremental approach, building on technologies such as lane-keeping or automatic parking. A string of fatalities involving self-driving cars have scotched the idea that a zero-crash world is anywhere close. Markets are starting to catch on. In September Morgan Stanley, a bank, cut its valuation of Waymo by 40%, to $105bn, citing delays in its technology.

The future, in other words, is stuck in traffic. Partly that reflects the tech industry’s predilection for grandiose promises. But self-driving cars were also meant to be a flagship for the power of AI. Their struggles offer valuable lessons in the limits of the world’s trendiest technology.
Hit the brakes

One is that, for all the advances in machine learning, machines are still not very good at learning. Most humans need a few dozen hours to master driving. Waymo’s cars have had over 10m miles of practice, and still fall short. And once humans have learned to drive, even on the easy streets of Phoenix, they can, with a little effort, apply that knowledge anywhere, rapidly learning to adapt their skills to rush-hour Bangkok or a gravel track in rural Greece. Computers are less flexible. AI researchers have expended much brow-sweat searching for techniques to help them match the quick-fire learning displayed by humans. So far, they have not succeeded.

Another lesson is that machine-learning systems are brittle. Learning solely from existing data means they struggle with situations that they have never seen before. Humans can use general knowledge and on-the-fly reasoning to react to things that are new to them—a light aircraft landing on a busy road, for instance, as happened in Washington state in August (thanks to humans’ cognitive flexibility, no one was hurt). Autonomous-car researchers call these unusual situations “edge cases”. Driving is full of them, though most are less dramatic. Mishandled edge cases seem to have been a factor in at least some of the deaths caused by autonomous cars to date. The problem is so hard that some firms, particularly in China, think it may be easier to re-engineer entire cities to support limited self-driving than to build fully autonomous cars.

by The Economist |  Read more:
Image: uncredited

The Millennial Urban Lifestyle Is About to Get More Expensive

Several weeks ago, I met up with a friend in New York who suggested we grab a bite at a Scottish bar in the West Village. He had booked the table through something called Seated, a restaurant app that pays users who make reservations on the platform. We ordered two cocktails each, along with some food. And in exchange for the hard labor of drinking whiskey, the app awarded us $30 in credits redeemable at a variety of retailers.

I am never offended by freebies. But this arrangement seemed almost obscenely generous. To throw cash at people every time they walk into a restaurant does not sound like a business. It sounds like a plot to lose money as fast as possible—or to provide New Yorkers, who are constantly dining out, with a kind of minimum basic income.

“How does this thing make any sense?” I asked my friend.

“I don’t know if it makes sense, and I don’t know how long it’s going to last,” he said, pausing to scroll through redemption options. “So, do you want your half in Amazon credits or Starbucks?”

I don’t know if it makes sense, and I don’t know how long it’s going to last. Is there a better epitaph for this age of consumer technology?

Starting about a decade ago, a fleet of well-known start-ups promised to change the way we work, work out, eat, shop, cook, commute, and sleep. These lifestyle-adjustment companies were so influential that wannabe entrepreneurs saw them as a template, flooding Silicon Valley with “Uber for X” pitches.

But as their promises soared, their profits didn’t. It’s easy to spend all day riding unicorns whose most magical property is their ability to combine high valuations with persistently negative earnings—something I’ve pointed out before. If you wake up on a Casper mattress, work out with a Peloton before breakfast, Uber to your desk at a WeWork, order DoorDash for lunch, take a Lyft home, and get dinner through Postmates, you’ve interacted with seven companies that will collectively lose nearly $14 billion this year. If you use Lime scooters to bop around the city, download Wag to walk your dog, and sign up for Blue Apron to make a meal, that’s three more brands that have never recorded a dime in earnings, or have seen their valuations fall by more than 50 percent.

These companies don’t give away cold hard cash as blatantly as Seated. But they’re not so different from the restaurant app. To maximize customer growth they have strategically—or at least “strategically”—throttled their prices, in effect providing a massive consumer subsidy. You might call it the Millennial Lifestyle Sponsorship, in which consumer tech companies, along with their venture-capital backers, help fund the daily habits of their disproportionately young and urban user base. With each Uber ride, WeWork membership, and hand-delivered dinner, the typical consumer has been getting a sweetheart deal.

For consumers—if not for many beleaguered contract workers—the MLS is a magnificent deal, a capital-to-labor transfer of wealth in pursuit of long-term profit; the sort of thing that might simultaneously please Bernie Sanders and the ghost of Milton Friedman.

But this was never going to last forever. WeWork’s disastrous IPO attempt has triggered reverberations across the industry. The theme of consumer tech has shifted from magic to margins. Venture capitalists and start-up founders alike have re-embraced an old mantra: Profits matter.

And higher profits can only mean one thing: Urban lifestyles are about to get more expensive.

by Derek Thompson, The Atlantic |  Read more:
Image: Carlos Jasso/Reuters 

How the SoftBank Scheme Rips Open the Startup Bubble

The biggest force behind the startup bubble in the United States has been SoftBank Group, the Japanese publicly traded conglomerate. It has been the biggest force in driving up valuations of money-losing cash-burn machines to absurd levels. It has been the biggest force in flooding Silicon Valley, San Francisco, and many other startup hot spots with a tsunami of money from around the world — money that it borrowed, and money that other large investors committed to SoftBank’s investment funds to ride on its coattails. But the scheme has run into trouble, and a lot is at stake.

The thing is, SoftBank Group has nearly $100 billion in debt on a consolidated basis as a result of its aggressive acquisition binge in Japan, the US, and elsewhere. This includes permanently broke Sprint Nextel, which is now trying to merge with T-Mobile. It includes British chip designer ARM, which it acquired in 2016 for over $32 billion, its largest acquisition ever. It includes Fortress Investment Group, which it acquired in 2017 for $3.3 billion. In August 2017, it acquired a 21% stake in India’s largest e-commerce company Flipkart for $2.5 billion that it sold to Walmart less than a year later for what was said to be a 60% profit. And on and on.

In May 2017, SoftBank partnered with Saudi Arabia’s Public Investment Fund to create the Vision Fund, which has obtained $97 billion in funding – well, not actual funding, some actual funding and a lot of promised funding – which made it the largest private venture capital fund ever.

Saudi Public Investment Fund promised to contribute $45 billion over the next few years. SoftBank promised to contribute $28 billion. Abu Dhabi’s Mubadala Investment promised to contribute $15 billion. Apple, Qualcomm, Foxconn, Sharp, and others also promised to contribute smaller amounts.

Over the past two years, the Vision Fund has invested in over 80 companies, including WeWork, Uber, and Slack.

But the Vision Fund needs cash on a constant basis because some of its investors receive interest payments of 7% annually on their investments in the fund. Yeah, that’s unusual, but hey, there is a lot of unusual stuff going on with SoftBank. (...)

SoftBank uses a leverage ratio that is based on the inflated “valuations” of its many investments that are not publicly traded, such as WeWork, into which SoftBank and the Vision Fund have plowed $10 billion. WeWork’s “valuation” is still $47 billion, though in reality, the company is now fighting for sheer survival, and no one has any idea what the company might be worth. Its entire business model has turned out to be just a magnificent cash-burn machine.

But SoftBank and the Vision Fund have already booked the gains from WeWork’s ascent to that $47 billion valuation.

How did they get to these gains?

In 2016, investors poured more money into WeWork by buying shares at a price that gave WeWork a valuation of $17 billion. These deals are negotiated behind closed doors and purposefully leaked to the financial press for effect.

In March 2017, SoftBank invested $300 million. In July 2017, WeWork raised another $760 million, now at a valuation of $20 billion. In July 2018, WeWork obtained $3 billion in funding from SoftBank. In January 2019, SoftBank invested another $2 billion in WeWork, now at a valuation that had been pumped up to $47 billion.

With this $2 billion investment at a valuation of $47 billion, SoftBank pushed all its prior investments up to the same share price, and thus booked a huge gain, more than doubling the value of its prior investments.
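A simplified sketch of that mark-up arithmetic, using the valuations cited in the article while ignoring dilution and share-class differences; the size of the earlier stake is a made-up placeholder.

```python
# Earlier money went in at prices implying roughly a $20 billion valuation;
# the January 2019 round implied $47 billion. Marking the old shares at the
# new round's per-share price (crudely, the valuation ratio) books a paper gain.
earlier_valuation = 20e9      # July 2017 round, per the article
new_valuation = 47e9          # January 2019 round, per the article
prior_invested = 4.4e9        # hypothetical amount invested at the earlier price

markup = new_valuation / earlier_valuation   # ~2.35x
carried_value = prior_invested * markup
paper_gain = carried_value - prior_invested

print(f"mark-up ~{markup:.2f}x: ${prior_invested/1e9:.1f}B is now carried at "
      f"${carried_value/1e9:.1f}B, a paper gain of ${paper_gain/1e9:.1f}B")
# No cash has come in; the 'gain' exists only at the price set by the latest round.
```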

Now, I wasn’t in the room when this deal was hashed out. But I can imagine what it sounded like, with SoftBank saying:

We want to more than double the value of our prior investments, and we want to pay the maximum possible per share now, in order to book this huge gain on our prior investments, which will make us look like geniuses, and will allow us to start Vision Fund 2, and it will get the Saudis, which also picked up a huge gain, to increase their confidence in us and invest tens of billions of dollars in our Vision Fund 2.

In these investment rounds, the intent is not to buy low in order to sell high. The intent is to buy high and higher at each successive round. This makes everyone look good on paper. And they can all book gains. And these higher valuations beget hype, and hype begets the money via an IPO to bail out those investors.

By this method, SoftBank has driven up the “value” of its investments, which drives down its loan-to-value ratio. But S&P and Moody’s caught on to it, and now the market too – as demonstrated by the scuttled WeWork IPO – is catching up with SoftBank.

by Wolf Richter, Wolf Street |  Read more:
Image: Issei Kato/Reuters via

Printing Electronics Directly on Delicate Surfaces


Printing Electronics Directly on Delicate Surfaces—Like the Back of Your Hand (IEEE Spectrum). The gentle, low-temperature technique prints electric tattoos on skin and transistors on paper.
Image: Aaron Franklin/Duke University
[ed. See also: Flexible Wearable Reverses Baldness With Gentle Electric Pulses (IEEE Spectrum).]

Harold Bloom, Critic Who Championed Western Canon, Dies at 89

Harold Bloom, the prodigious literary critic who championed and defended the Western canon in an outpouring of influential books that appeared not only on college syllabuses but also — unusual for an academic — on best-seller lists, died on Monday at a hospital in New Haven. He was 89.

His death was confirmed by his wife, Jeanne Bloom, who said he taught his last class at Yale University on Thursday.

Professor Bloom was frequently called the most notorious literary critic in America. From a vaunted perch at Yale, he flew in the face of almost every trend in the literary criticism of his day. Chiefly he argued for the literary superiority of the Western giants like Shakespeare, Chaucer and Kafka — all of them white and male, his own critics pointed out — over writers favored by what he called “the School of Resentment,” by which he meant multiculturalists, feminists, Marxists, neoconservatives and others whom he saw as betraying literature’s essential purpose.

“He is, by any reckoning, one of the most stimulating literary presences of the last half-century — and the most protean,” Sam Tanenhaus wrote in 2011 in The New York Times Book Review, of which he was the editor at the time, “a singular breed of scholar-teacher-critic-prose-poet-pamphleteer.”

At the heart of Professor Bloom’s writing was a passionate love of literature and a relish for its heroic figures.

“Shakespeare is God,” he declared, and Shakespeare’s characters, he said, are as real as people and have shaped Western perceptions of what it is to be human — a view he propounded in the acclaimed “Shakespeare: The Invention of the Human” (1998). (...)

Gorging on Words

Professor Bloom called himself “a monster” of reading; he said he could read, and absorb, a 400-page book in an hour. His friend Richard Bernstein, a professor of philosophy at the New School, told a reporter that watching Professor Bloom read was “scary.”

Armed with a photographic memory, Professor Bloom could recite acres of poetry by heart — by his account, the whole of Shakespeare, Milton’s “Paradise Lost,” all of William Blake, the Hebraic Bible and Edmund Spenser’s monumental “The Faerie Queene.” He relished epigraphs, gnomic remarks and unusual words: kenosis (emptying), tessera (completing), askesis (diminishing) and clinamen (swerving). (...)

Like Dr. Johnson’s, his output was vast: more than 40 books of his own authorship and hundreds of volumes he edited. And he remained prolific to the end, publishing two books in 2017, two in 2018 and two this year: “Macbeth: A Dagger of the Mind” and “Possessed by Memory: The Inward Light of Criticism.” His final book is to be released on an unspecified date by Yale University Press, his wife said.

Perhaps Professor Bloom’s most influential work was one that discussed literary influence itself. The book, “The Anxiety of Influence,” published in 1973 and eventually in some 45 languages, borrows from Freudian theory in envisioning literary creation as an epochal, and Oedipal, struggle in which the young artist rebels against preceding traditions, seeking that burst of originality that distinguishes greatness. (...)

Professor Bloom crossed swords with other critical perspectives in “The Western Canon.” The eminent critic Frank Kermode, identifying those whom Professor Bloom saw as his antagonists, wrote in The London Review of Books, “He has in mind all who profess to regard the canon as an instrument of cultural, hence political, hegemony — as a subtle fraud devised by dead white males to reinforce ethnic and sexist oppression.”

Professor Bloom insisted that a literary work is not a social document — is not to be read for its political or historical content — but is to be enjoyed above all for the aesthetic pleasure it brings. “Bloom isn’t asking us to worship the great books,” the writer Adam Begley wrote in The New York Times Magazine in 1994. “He asks instead that we prize the astonishing mystery of creative genius.”

Professor Bloom himself said that “the canonical quality comes out of strangeness, comes out of the idiosyncratic, comes out of originality.” Mr. Begley noted further, “The canon, Bloom believes, answers an unavoidable question: What, in the little time we have, shall we read?”

“You must choose,” Professor Bloom himself wrote in “The Western Canon.” “Either there were aesthetic values or there are only the overdeterminations of race, class and gender.”

by Dinitia Smith, NY Times | Read more:
Image: Jim Wilson/The New York Times

Five Reasons the Diet Soda Myth Won’t Die

There’s a decent chance you’ll be reading about diet soda studies until the day you die. (The odds are exceedingly good it won’t be the soda that kills you.)

The latest batch of news reports came last month, based on another study linking diet soda to an increased risk of early death.

As usual, the study (and some of the articles) lacked some important context and caused more worry than was warranted. There are specific reasons that this cycle is unlikely to end.

1. If it’s artificial, it must be bad.

People suspect, and not always incorrectly, that putting things created in a lab into their bodies cannot be good. People worry about genetically modified organisms, and monosodium glutamate and, yes, artificial sweeteners because they sound scary.

But everything is a chemical, including dihydrogen monoxide (that’s another way of saying water). These are just words we use to describe ingredients. Some ingredients occur naturally, and some are coaxed into existence. That doesn’t inherently make one better than another. In fact, I’ve argued that research supports consuming artificial sweeteners over added sugars. (The latest study concludes the opposite.)

2. Soda is an easy target

In a health-conscious era, soda has become almost stigmatized in some circles (and sales have fallen as a result).

It’s true that no one “needs” soda. There are a million varieties, and almost none taste like anything in nature. Some, like Dr Pepper, defy description.

But there are many things we eat and drink that we don’t “need.” We don’t need ice cream or pie, but for a lot of people, life would be less enjoyable without those things.

None of this should be taken as a license to drink cases of soda a week. A lack of evidence of danger at normal amounts doesn’t mean that consuming any one thing in huge amounts is a good idea. Moderation still matters.

3. Scientists need to publish to keep their jobs

I’m a professor on the research tenure track, and I’m here to tell you that the coin of the realm is grants and papers. You need funding to survive, and you need to publish to get funding.

As a junior faculty member, or even as a doctoral student or postdoctoral fellow, you need to publish research. Often, the easiest step is to take a large data set and publish an analysis from it showing a correlation between some factor and some outcome.

This kind of research is rampant. That’s how we hear year after year that everyone is dehydrated and we need to drink more water. It’s how we hear that coffee is affecting health in this way or that. It’s how we wind up with a lot of nutritional studies that find associations in one way or another.

As long as the culture of science demands output as the measure of success, these studies will appear. And given that the news media also needs to publish to survive — if you didn’t know, people love to read about food and health — we’ll continue to read stories about how diet soda will kill us.

by Aaron E. Carroll, NY Times | Read more:
Image: Wilfredo Lee

Saturday, October 12, 2019

Artificial Intelligence: What’s to Fear?

In 2017, scientists at Carnegie Mellon University shocked the gaming world when they programmed a computer to beat experts in a poker game called no-limit hold ’em. People assumed a poker player’s intuition and creative thinking would give him or her the competitive edge. Yet by playing 24 trillion hands of poker every second for two months, the computer “taught” itself an unbeatable strategy.

Many people fear such events. It’s not just the potential job losses. If artificial intelligence (AI) can do everything better than a human being can, then human endeavor is pointless and human beings are valueless.

Computers long ago surpassed humans in certain skills—for example, in the ability to calculate and catalog. Yet they have traditionally been unable to reproduce people’s creative, imaginative, emotional, and intuitive skills. It is why personalized service workers such as coaches and physicians enjoy some of the sweetest sinecures in the economy. Their humanity, meaning their ability to individualize services and connect with others, which computers lack, adds value. Yet not only does AI win at cards now, it also creates art, writes poetry, and performs psychotherapy. Even lovemaking is at risk, as artificially intelligent robots stand poised to enter the market and provide sexual services and romantic intimacy. With the rise of AI, today’s human beings seem to be as vulnerable as yesterday’s apes, occupying a more primitive stage of evolution.

But not so fast. AI is not quite the threat it is made out to be. Take, for example, the computer’s victory in poker. The computer did not win because it had more intuition; it won because it played a strategy called “game theory optimal” (GTO). The computer simply calculated the optimal frequency for raising, betting, and folding using special equations, independent of whatever cards the other players held. People call what the computer displayed during the game “intelligence,” but it was not intelligence as we traditionally understand it.
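The CMU system actually approximated an equilibrium through massive iterative self-play, but the flavor of "game theory optimal" play can be shown with a classic toy calculation: pick bluffing and calling frequencies that leave the opponent indifferent, regardless of the cards anyone holds. The function below is an illustration of that idea, not the bot's algorithm.

```python
def gto_river_frequencies(pot: float, bet: float) -> dict:
    """Toy indifference frequencies for a single bet on the final betting round.

    bluff_fraction: share of the bettor's betting range that should be bluffs,
                    so the caller gains nothing by either calling or folding.
    call_frequency: how often the caller must call, so a pure bluff by the
                    bettor breaks even rather than printing money.
    """
    bluff_fraction = bet / (pot + 2 * bet)  # equal to the caller's pot odds
    call_frequency = pot / (pot + bet)      # defends just enough against bluffs
    return {"bluff_fraction": bluff_fraction, "call_frequency": call_frequency}

# A pot-sized bet: bluff about a third of the time, call about half the time.
print(gto_river_frequencies(pot=100.0, bet=100.0))
# {'bluff_fraction': 0.333..., 'call_frequency': 0.5}
```

Notice that neither frequency depends on what cards anyone holds, only on the bet and pot sizes; that is the sense in which the strategy is "independent of whatever cards the other players held."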

Such a misinterpretation of AI seems subtle and unimportant. But over time, spread out over different areas of life, misinterpretations of this type launch a cascade of effects that have serious psychosocial consequences. People are right to fear AI robots taking their jobs. They may be right to fear AI killer robots. But AI presents other, smaller dangers that are less exciting but more corrosive in the long run.

by Ronald W. Dworkin, The American Interest |  Read more:
Image: Wikimedia Commons

Everything Going Wrong in Okinawa

On 23 February 2016 Admiral Harry Harris, then Commander US Forces Pacific, testifying before the Senate Armed Services Committee, was asked how the construction of the Futenma Replacement Facility was progressing. This refers to the super airbase the Japanese Defense Ministry is building at Henoko in northern Okinawa to house the units of the First Marine Air Wing now deployed at Futenma Air Station, in crowded central Okinawa.

Admiral Harris, his voice betraying irritation, replied, “it’s . . . a little over two years late. It was going to be done by 2023, now we’re looking at 2025 . . .”

This made the front pages in Okinawa, though probably nowhere else. The next day Suga Yoshihide, Japan’s Chief Cabinet Secretary, was asked about this at a press conference. He wanted to say Admiral Harris was wrong, but attempted to put it more diplomatically: “It’s too early to say” – which amounts to the same thing.

Harris was indeed wrong, but not in the way Suga wanted his listeners to believe. A year before this, in 2015, the Okinawa Defense Bureau, the Defense Ministry’s branch in Okinawa, completed a report stating that their soil tests of the sea bottom of Oura Bay, scheduled to be filled to support the new airstrips, had yielded an N-value of zero. N-value is derived by dropping a 140 pound hammer on a hollow drill resting on the sea bottom. The number of blows required to drive it down six inches is the N-value. Thirty or more is considered a firm base. Zero means no blows were required; the drill sank of its own weight.

This information was kept from the Okinawan Government and public for two years, until an independent engineer managed to obtain a copy of the report. Judging from Admiral Harris’ statement, the information had also been kept from the US, and had not been taken into account in Harris’ (as we now know, wildly optimistic) “two years”. Before anything can be built on the “mayonnaise sea bottom”, as it is popularly known in Okinawa, it must be firmed up. The preferred way to do this is by implanting “sand piles” (pillars) into the slime. Huge hollow drills filled with sand are driven down until they reach bedrock. The drills are raised, the sand is left behind. The Okinawa Defense Bureau estimates that if this operation is repeated 77,000 times, the sea bottom will be sufficiently firm to begin construction. This is expected to take as much as five years. That means that 2025, the year Harris predicted the base will be completed, will be the year the sand pillar operation will be completed and sea wall construction on Oura Bay can begin – if all goes well.

If all goes well – and if Murphy’s law ceases to operate (Murphy’s law: if there is anything that could go wrong, it will).

But from the standpoint of the Okinawa Defense Bureau, everything is going wrong. First of all, they have failed to persuade (or to force) the Okinawans to give up their opposition to the new base, which they see as a danger, an environmental catastrophe and an insult. From the Governor’s office through the Prefectural Assembly through Okinawa’s two newspapers down to the daily sit-ins at various points where trucks can be blocked, from every direction, and using every non-violent tactic, including lawsuits, construction is being slowed. Then there is the fact that the site is surrounded by dozens of structures that violate FAA and DOD height regulations for airports. Then there are the two earthquake faults beneath the site, which the Defense Bureau has addressed by going into denial.

But it is on Oura Bay where Murphy’s law is doing the most damage. The Okinawa Defense Bureau’s soil tests have shown that in some places the mayonnaise sea bottom extends to 90 meters below sea level. Sand pillar implantation to 90 meters has never been attempted in Japan (some say, never in the world), nor do rigs exist capable of drilling to that depth. It’s not clear how the Okinawa Defense Bureau plans to deal with that – unless the comment by a government official that “maybe 60 meters will be good enough” can be considered a plan.

by Doug Lummis, Counterpunch |  Read more:
[ed. They don't want it, we can't build it. Imagine what that money could do for infrastructure in the US. See also: The Pentagon is Pledging to Reform Itself, Again. It Won’t. (Counterpunch). $1.4 trillion/two years.]

Yuval Noah Harari & Steven Pinker in Conversation


[ed. Fascinating, highly recommended.] 

José Calheiros (JACAC)
via:

Night Pier
via: