Tuesday, October 15, 2019

Scientists’ Declaration of Support for Non-Violent Direct Action Against Government Inaction Over the Climate and Ecological Emergency

THIS DECLARATION SETS OUT THE CURRENT SCIENTIFIC CONSENSUS CONCERNING THE CLIMATE AND ECOLOGICAL EMERGENCY AND HIGHLIGHTS THE NECESSITY FOR URGENT ACTION TO PREVENT FURTHER AND IRREVERSIBLE DAMAGE TO THE HABITABILITY OF OUR PLANET.

As scientists, we have dedicated our lives to the study and understanding of the world and our place in it. We declare that scientific evidence shows beyond any reasonable doubt that human-caused changes to the Earth’s land, sea and air are severely threatening the habitability of our planet. We further declare that overwhelming evidence shows that if global greenhouse gas emissions are not brought rapidly down to net zero and biodiversity loss is not halted, we risk catastrophic and irreversible damage to our planetary life-support systems, causing incalculable human suffering and many deaths.

We note that despite the scientific community first sounding the alarm on human-caused global warming more than four decades ago, no action taken by governments thus far has been sufficient to halt the steep rise in greenhouse gas emissions, nor address the ever-worsening loss of biodiversity. Therefore, we call for immediate and decisive action by governments worldwide to rapidly reduce global greenhouse gas emissions to net zero, to prevent further biodiversity loss, and to repair, to the fullest extent possible, the damage that has already been done. We further call upon governments to provide particular support to those who will be most affected by climate change and by the required transition to a sustainable economy.

As scientists, we have an obligation that extends beyond merely describing and understanding the natural world to taking an active part in helping to protect it. We note that the scientific community has already tried all conventional methods to draw attention to the crisis. We believe that the continued governmental inaction over the climate and ecological crisis now justifies peaceful and nonviolent protest and direct action, even if this goes beyond the bounds of the current law.

We therefore support those who are rising up peacefully against governments around the world that are failing to act proportionately to the scale of the crisis.

We believe it is our moral duty to act now, and we urge other scientists to join us in helping to protect humanity’s only home.

To show your support, please add your name to the list below and share with your colleagues. If you’d like to join us at the International Rebellion in London from October 7th (full list of global October Rebellions here), or to find out more, please join our Scientists for Extinction Rebellion Facebook group or email scientistsforxr@protonmail.com.

Signatories:

Signatures are invited from individuals holding a Master’s Degree, or holding or studying for a Doctorate, in a field directly related to the sciences, or from those working in a scientific field. Please make explicitly clear if your research field is directly relevant to the climate and/or ecological emergencies. Please note: the views of individuals signing this document do not necessarily represent those of the university or organisation they work for.

[ed. List of signatories]

via: Google Docs
[ed. See also: Land Without Bread (The Baffler).]

Driverless Cars Are Stuck in a Jam

Few ideas have enthused technologists as much as the self-driving car. Advances in machine learning, a subfield of artificial intelligence (AI), would enable cars to teach themselves to drive by drawing on reams of data from the real world. The more they drove, the more data they would collect, and the better they would become. Robotaxis summoned with the flick of an app would make car ownership obsolete. Best of all, reflexes operating at the speed of electronics would drastically improve safety. Car- and tech-industry bosses talked of a world of “zero crashes”.

And the technology was just around the corner. In 2015 Elon Musk, Tesla’s boss, predicted his cars would be capable of “complete autonomy” by 2017. Mr Musk is famous for missing his own deadlines. But he is not alone. General Motors said in 2018 that it would launch a fleet of cars without steering wheels or pedals in 2019; in June it changed its mind. Waymo, the Alphabet subsidiary widely seen as the industry leader, committed itself to launching a driverless-taxi service in Phoenix, where it has been testing its cars, at the end of 2018. The plan has been a damp squib. Only part of the city is covered; only approved users can take part. Phoenix’s wide, sun-soaked streets are some of the easiest to drive on anywhere in the world; even so, Waymo’s cars have human safety drivers behind the wheel, just in case.

Jim Hackett, the boss of Ford, acknowledges that the industry “overestimated the arrival of autonomous vehicles”. Chris Urmson, a linchpin in Alphabet’s self-driving efforts (he left in 2016), used to hope his young son would never need a driving licence. Mr Urmson now talks of self-driving cars appearing gradually over the next 30 to 50 years. Firms are increasingly switching to a more incremental approach, building on technologies such as lane-keeping or automatic parking. A string of fatalities involving self-driving cars has scotched the idea that a zero-crash world is anywhere close. Markets are starting to catch on. In September Morgan Stanley, a bank, cut its valuation of Waymo by 40%, to $105bn, citing delays in its technology.

The future, in other words, is stuck in traffic. Partly that reflects the tech industry’s predilection for grandiose promises. But self-driving cars were also meant to be a flagship for the power of AI. Their struggles offer valuable lessons in the limits of the world’s trendiest technology.
Hit the brakes

One is that, for all the advances in machine learning, machines are still not very good at learning. Most humans need a few dozen hours to master driving. Waymo’s cars have had over 10m miles of practice, and still fall short. And once humans have learned to drive, even on the easy streets of Phoenix, they can, with a little effort, apply that knowledge anywhere, rapidly learning to adapt their skills to rush-hour Bangkok or a gravel track in rural Greece. Computers are less flexible. AI researchers have expended much brow-sweat searching for techniques to help them match the quick-fire learning displayed by humans. So far, they have not succeeded.

Another lesson is that machine-learning systems are brittle. Learning solely from existing data means they struggle with situations that they have never seen before. Humans can use general knowledge and on-the-fly reasoning to react to things that are new to them—a light aircraft landing on a busy road, for instance, as happened in Washington state in August (thanks to humans’ cognitive flexibility, no one was hurt). Autonomous-car researchers call these unusual situations “edge cases”. Driving is full of them, though most are less dramatic. Mishandled edge cases seem to have been a factor in at least some of the deaths caused by autonomous cars to date. The problem is so hard that some firms, particularly in China, think it may be easier to re-engineer entire cities to support limited self-driving than to build fully autonomous cars (see article).

by The Economist |  Read more:
Image: uncredited

The Millennial Urban Lifestyle Is About to Get More Expensive

Several weeks ago, I met up with a friend in New York who suggested we grab a bite at a Scottish bar in the West Village. He had booked the table through something called Seated, a restaurant app that pays users who make reservations on the platform. We ordered two cocktails each, along with some food. And in exchange for the hard labor of drinking whiskey, the app awarded us $30 in credits redeemable at a variety of retailers.

I am never offended by freebies. But this arrangement seemed almost obscenely generous. To throw cash at people every time they walk into a restaurant does not sound like a business. It sounds like a plot to lose money as fast as possible—or to provide New Yorkers, who are constantly dining out, with a kind of minimum basic income.

“How does this thing make any sense?” I asked my friend.

“I don’t know if it makes sense, and I don’t know how long it’s going to last,” he said, pausing to scroll through redemption options. “So, do you want your half in Amazon credits or Starbucks?”

I don’t know if it makes sense, and I don’t know how long it’s going to last. Is there a better epitaph for this age of consumer technology?

Starting about a decade ago, a fleet of well-known start-ups promised to change the way we work, work out, eat, shop, cook, commute, and sleep. These lifestyle-adjustment companies were so influential that wannabe entrepreneurs saw them as a template, flooding Silicon Valley with “Uber for X” pitches.

But as their promises soared, their profits didn’t. It’s easy to spend all day riding unicorns whose most magical property is their ability to combine high valuations with persistently negative earnings—something I’ve pointed out before. If you wake up on a Casper mattress, work out with a Peloton before breakfast, Uber to your desk at a WeWork, order DoorDash for lunch, take a Lyft home, and get dinner through Postmates, you’ve interacted with seven companies that will collectively lose nearly $14 billion this year. If you use Lime scooters to bop around the city, download Wag to walk your dog, and sign up for Blue Apron to make a meal, that’s three more brands that have never recorded a dime in earnings, or have seen their valuations fall by more than 50 percent.

These companies don’t give away cold hard cash as blatantly as Seated. But they’re not so different from the restaurant app. To maximize customer growth they have strategically—or at least “strategically”—throttled their prices, in effect providing a massive consumer subsidy. You might call it the Millennial Lifestyle Sponsorship, in which consumer tech companies, along with their venture-capital backers, help fund the daily habits of their disproportionately young and urban user base. With each Uber ride, WeWork membership, and hand-delivered dinner, the typical consumer has been getting a sweetheart deal.

For consumers—if not for many beleaguered contract workers—the MLS is a magnificent deal, a capital-to-labor transfer of wealth in pursuit of long-term profit; the sort of thing that might simultaneously please Bernie Sanders and the ghost of Milton Friedman.

But this was never going to last forever. WeWork’s disastrous IPO attempt has triggered reverberations across the industry. The theme of consumer tech has shifted from magic to margins. Venture capitalists and start-up founders alike have re-embraced an old mantra: Profits matter.

And higher profits can only mean one thing: Urban lifestyles are about to get more expensive.

by Derek Thompson, The Atlantic |  Read more:
Image: Carlos Jasso/Reuters 

How the SoftBank Scheme Rips Open the Startup Bubble

The biggest force behind the startup bubble in the United States has been SoftBank Group, the Japanese publicly traded conglomerate. It has been the biggest force in driving up valuations of money-losing cash-burn machines to absurd levels. It has been the biggest force in flooding Silicon Valley, San Francisco, and many other startup hot spots with a tsunami of money from around the world — money that it borrowed, and money that other large investors committed to SoftBank’s investment funds to ride on its coattails. But the scheme has run into trouble, and a lot is at stake.

The thing is, SoftBank Group has nearly $100 billion in debt on a consolidated basis as a result of its aggressive acquisition binge in Japan, the US, and elsewhere. This includes permanently broke Sprint Nextel, which is now trying to merge with T-Mobile. It includes British chip designer ARM that it acquired in 2016 for over $32 billion, its largest acquisition ever. It includes Fortress Investment Group that it acquired in 2017 for $3.3 billion. In August 2017, it acquired a 21% stake in India’s largest e-commerce company Flipkart for $2.5 billion that it sold to Walmart less than a year later for what was said to be a 60% profit. And on and on.

In May 2017, SoftBank partnered with Saudi Arabia’s Public Investment Fund to create the Vision Fund, which has obtained $97 billion in funding – well, not all actual funding, but some actual funding and a lot of promised funding – which made it the largest private venture capital fund ever.

Saudi Public Investment Fund promised to contribute $45 billion over the next few years. SoftBank promised to contribute $28 billion. Abu Dhabi’s Mubadala Investment promised to contribute $15 billion. Apple, Qualcomm, Foxconn, Sharp, and others also promised to contribute smaller amounts.

Over the past two years, the Vision Fund has invested in over 80 companies, including WeWork, Uber, and Slack.

But the Vision Fund needs cash on a constant basis because some of its investors receive interest payments of 7% annually on their investments in the fund. Yeah, that’s unusual, but hey, there is a lot of unusual stuff going on with SoftBank. (...)

SoftBank uses a leverage ratio that is based on the inflated “valuations” of its many investments that are not publicly traded, such as WeWork, into which SoftBank and the Vision Fund have plowed $10 billion. WeWork’s “valuation” is still $47 billion, though in reality, the company is now fighting for sheer survival, and no one has any idea what the company might be worth. Its entire business model has turned out to be just a magnificent cash-burn machine.

But SoftBank and the Vision Fund have already booked the gains from WeWork’s ascent to that $47 billion valuation.

How did they get to these gains?

In 2016, investors poured more money into WeWork by buying shares at a price that gave WeWork a valuation of $17 billion. These deals are negotiated behind closed doors and purposefully leaked to the financial press for effect.

In March 2017, SoftBank invested $300 million. In July 2017, WeWork raised another $760 million, now at a valuation of $20 billion. In July 2018, WeWork obtained $3 billion in funding from SoftBank. In January 2019, SoftBank invested another $2 billion in WeWork, now at a valuation that had been pumped up to $47 billion.

With this $2 billion investment at a valuation of $47 billion, SoftBank pushed all its prior investments up to the same share price, and thus booked a huge gain, more than doubling the value of its prior investments.
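To make the mark-up mechanics concrete, here is a minimal sketch in Python. The share counts and prices below are invented for illustration (the real figures for SoftBank’s WeWork stakes are not public); the point is only that revaluing earlier stakes at the newest round’s price per share turns a higher headline valuation directly into a booked gain.

```python
# Toy illustration of "marking up" earlier venture stakes to the latest round's
# share price. All share counts and prices below are invented for illustration;
# the real figures for SoftBank's WeWork stakes are not public.

def paper_gain(prior_stakes, new_price_per_share):
    """prior_stakes: list of (shares, price_paid_per_share) tuples.
    Returns (total_cost, marked_value, gain) when every earlier stake is
    revalued at the newest round's price per share."""
    total_cost = sum(shares * price for shares, price in prior_stakes)
    marked_value = sum(shares * new_price_per_share for shares, _ in prior_stakes)
    return total_cost, marked_value, marked_value - total_cost

# Hypothetical earlier investments, made when the implied valuation was far lower...
prior = [
    (15_000_000, 200.0),  # shares bought in an earlier, cheaper round
    (10_000_000, 230.0),  # shares bought in a later, slightly pricier round
]
# ...followed by a new round priced so that the company is "worth" much more,
# i.e. a much higher price per share.
new_price = 470.0

cost, value, gain = paper_gain(prior, new_price)
print(f"cost ${cost/1e9:.2f}bn -> marked value ${value/1e9:.2f}bn "
      f"(paper gain ${gain/1e9:.2f}bn)")
```

The gain exists only on paper: nothing has been sold, and the new, higher price is set in part by the same investor doing the revaluing, which is the circularity the article describes.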

Now, I wasn’t in the room when this deal was hashed out. But I can imagine what it sounded like, with SoftBank saying:

We want to more than double the value of our prior investments, and we want to pay the maximum possible per share now, in order to book this huge gain on our prior investments, which will make us look like geniuses, and will allow us to start Vision Fund 2, and it will get the Saudis, which also picked up a huge gain, to increase their confidence in us and invest tens of billions of dollars in our Vision Fund 2.

In these investment rounds, the intent is not to buy low in order to sell high. The intent is to buy high and higher at each successive round. This makes everyone look good on paper. And they can all book gains. And these higher valuations beget hype, and hype begets the money via an IPO to bail out those investors.

By this method, SoftBank has driven up the “value” of its investments, which drives down its loan-to-value ratio. But S&P and Moody’s caught on to it, and now the market too – as demonstrated by the scuttled WeWork IPO – is catching up with SoftBank.

by Wolf Richter, Wolf Street |  Read more:
Image: Issei Kato/Reuters via

Printing Electronics Directly on Delicate Surfaces


Printing Electronics Directly on Delicate Surfaces—Like the Back of Your Hand (IEEE Spectrum). The gentle, low-temperature technique prints electric tattoos on skin and transistors on paper.
Image: Aaron Franklin/Duke University
[ed. See also: Flexible Wearable Reverses Baldness With Gentle Electric Pulses (IEEE Spectrum).]

Harold Bloom, Critic Who Championed Western Canon, Dies at 89

Harold Bloom, the prodigious literary critic who championed and defended the Western canon in an outpouring of influential books that appeared not only on college syllabuses but also — unusual for an academic — on best-seller lists, died on Monday at a hospital in New Haven. He was 89.

His death was confirmed by his wife, Jeanne Bloom, who said he taught his last class at Yale University on Thursday.

Professor Bloom was frequently called the most notorious literary critic in America. From a vaunted perch at Yale, he flew in the face of almost every trend in the literary criticism of his day. Chiefly he argued for the literary superiority of the Western giants like Shakespeare, Chaucer and Kafka — all of them white and male, his own critics pointed out — over writers favored by what he called “the School of Resentment,” by which he meant multiculturalists, feminists, Marxists, neoconservatives and others whom he saw as betraying literature’s essential purpose.

“He is, by any reckoning, one of the most stimulating literary presences of the last half-century — and the most protean,” Sam Tanenhaus wrote in 2011 in The New York Times Book Review, of which he was the editor at the time, “a singular breed of scholar-teacher-critic-prose-poet-pamphleteer.”

At the heart of Professor Bloom’s writing was a passionate love of literature and a relish for its heroic figures.

“Shakespeare is God,” he declared, and Shakespeare’s characters, he said, are as real as people and have shaped Western perceptions of what it is to be human — a view he propounded in the acclaimed “Shakespeare: The Invention of the Human” (1998). (...)

Gorging on Words

Professor Bloom called himself “a monster” of reading; he said he could read, and absorb, a 400-page book in an hour. His friend Richard Bernstein, a professor of philosophy at the New School, told a reporter that watching Professor Bloom read was “scary.”

Armed with a photographic memory, Professor Bloom could recite acres of poetry by heart — by his account, the whole of Shakespeare, Milton’s “Paradise Lost,” all of William Blake, the Hebraic Bible and Edmund Spenser’s monumental “The Faerie Queene.” He relished epigraphs, gnomic remarks and unusual words: kenosis (emptying), tessera (completing), askesis (diminishing) and clinamen (swerving). (...)

Like Dr. Johnson’s, his output was vast: more than 40 books of his own authorship and hundreds of volumes he edited. And he remained prolific to the end, publishing two books in 2017, two in 2018 and two this year: “Macbeth: A Dagger of the Mind” and “Possessed by Memory: The Inward Light of Criticism.” His final book is to be released on an unspecified date by Yale University Press, his wife said.

Perhaps Professor Bloom’s most influential work was one that discussed literary influence itself. The book, “The Anxiety of Influence,” published in 1973 and eventually in some 45 languages, borrows from Freudian theory in envisioning literary creation as an epochal, and Oedipal, struggle in which the young artist rebels against preceding traditions, seeking that burst of originality that distinguishes greatness. (...)

Professor Bloom crossed swords with other critical perspectives in “The Western Canon.” The eminent critic Frank Kermode, identifying those whom Professor Bloom saw as his antagonists, wrote in The London Review of Books, “He has in mind all who profess to regard the canon as an instrument of cultural, hence political, hegemony — as a subtle fraud devised by dead white males to reinforce ethnic and sexist oppression.”

Professor Bloom insisted that a literary work is not a social document — is not to be read for its political or historical content — but is to be enjoyed above all for the aesthetic pleasure it brings. “Bloom isn’t asking us to worship the great books,” the writer Adam Begley wrote in The New York Times Magazine in 1994. “He asks instead that we prize the astonishing mystery of creative genius.”

Professor Bloom himself said that “the canonical quality comes out of strangeness, comes out of the idiosyncratic, comes out of originality.” Mr. Begley noted further, “The canon, Bloom believes, answers an unavoidable question: What, in the little time we have, shall we read?”

“You must choose,” Professor Bloom himself wrote in “The Western Canon.” “Either there were aesthetic values or there are only the overdeterminations of race, class and gender.”

by Dinitia Smith, NY Times | Read more:
Image: Jim Wilson/The New York Times

Five Reasons the Diet Soda Myth Won’t Die

There’s a decent chance you’ll be reading about diet soda studies until the day you die. (The odds are exceedingly good it won’t be the soda that kills you.)

The latest batch of news reports came last month, based on another study linking diet soda to an increased risk of early death.

As usual, the study (and some of the articles) lacked some important context and caused more worry than was warranted. There are specific reasons that this cycle is unlikely to end.

1. If it’s artificial, it must be bad.

People suspect, and not always incorrectly, that putting things created in a lab into their bodies cannot be good. People worry about genetically modified organisms, monosodium glutamate and, yes, artificial sweeteners because they sound scary.

But everything is a chemical, including dihydrogen monoxide (that’s another way of saying water). These are just words we use to describe ingredients. Some ingredients occur naturally, and some are coaxed into existence. That doesn’t inherently make one better than another. In fact, I’ve argued that research supports consuming artificial sweeteners over added sugars. (The latest study concludes the opposite.)

2. Soda is an easy target

In a health-conscious era, soda has become almost stigmatized in some circles (and sales have fallen as a result).

It’s true that no one “needs” soda. There are a million varieties, and almost none taste like anything in nature. Some, like Dr Pepper, defy description.

But there are many things we eat and drink that we don’t “need.” We don’t need ice cream or pie, but for a lot of people, life would be less enjoyable without those things.

None of this should be taken as a license to drink cases of soda a week. A lack of evidence of danger at normal amounts doesn’t mean that consuming any one thing in huge amounts is a good idea. Moderation still matters.

3. Scientists need to publish to keep their jobs

I’m a professor on the research tenure track, and I’m here to tell you that the coin of the realm is grants and papers. You need funding to survive, and you need to publish to get funding.

As a junior faculty member, or even as a doctoral student or postdoctoral fellow, you need to publish research. Often, the easiest step is to take a large data set and publish an analysis from it showing a correlation between some factor and some outcome.

This kind of research is rampant. That’s how we hear year after year that everyone is dehydrated and we need to drink more water. It’s how we hear that coffee is affecting health in this way or that. It’s how we wind up with a lot of nutritional studies that find associations in one way or another.

As long as the culture of science demands output as the measure of success, these studies will appear. And given that the news media also needs to publish to survive — if you didn’t know, people love to read about food and health — we’ll continue to read stories about how diet soda will kill us.

by Aaron E. Carroll, NY Times | Read more:
Image: Wilfredo Lee

Saturday, October 12, 2019

Artificial Intelligence: What’s to Fear?

In 2017, scientists at Carnegie Mellon University shocked the gaming world when they programmed a computer to beat experts in a poker game called no-limit hold ’em. People assumed a poker player’s intuition and creative thinking would give him or her the competitive edge. Yet by playing 24 trillion hands of poker every second for two months, the computer “taught” itself an unbeatable strategy.

Many people fear such events. It’s not just the potential job losses. If artificial intelligence (AI) can do everything better than a human being can, then human endeavor is pointless and human beings are valueless.

Computers long ago surpassed humans in certain skills—for example, in the ability to calculate and catalog. Yet they have traditionally been unable to reproduce people’s creative, imaginative, emotional, and intuitive skills. It is why personalized service workers such as coaches and physicians enjoy some of the sweetest sinecures in the economy. Their humanity, meaning their ability to individualize services and connect with others, which computers lack, adds value. Yet not only does AI win at cards now, it also creates art, writes poetry, and performs psychotherapy. Even lovemaking is at risk, as artificially intelligent robots stand poised to enter the market and provide sexual services and romantic intimacy. With the rise of AI, today’s human beings seem to be as vulnerable as yesterday’s apes, occupying a more primitive stage of evolution.

But not so fast. AI is not quite the threat it is made out to be. Take, for example, the computer’s victory in poker. The computer did not win because it had more intuition; it won because it played a strategy called “game theory optimal” (GTO). The computer simply calculated the optimal frequency for raising, betting, and folding using special equations, independent of whatever cards the other players held. People call what the computer displayed during the game “intelligence,” but it was not intelligence as we traditionally understand it.
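For a sense of what “calculating the optimal frequency for raising, betting, and folding” can mean in the simplest possible case, here is a toy sketch in Python. It works out the classic indifference frequencies for a single, drastically simplified river bet; the bet and pot sizes are invented for illustration, and this is nothing like the machinery the Carnegie Mellon program actually used to solve the full game.

```python
# Toy "game theory optimal" frequencies for one drastically simplified spot:
# the bettor's range is polarized (either the best hand or a pure bluff) and the
# caller holds only bluff-catchers. A textbook illustration of indifference
# equations, not how the Carnegie Mellon program actually played.

def gto_frequencies(pot, bet):
    """Return (bluff_fraction, call_fraction) that leave each side indifferent.

    bluff_fraction: share of the bettor's betting range that should be bluffs,
                    so the caller's bluff-catchers exactly break even on a call.
    call_fraction:  how often the caller must call (the "minimum defense
                    frequency"), so the bettor's bluffs exactly break even.
    """
    bluff_fraction = bet / (pot + 2 * bet)  # caller risks `bet` to win `pot + bet`
    call_fraction = pot / (pot + bet)       # bluffer risks `bet` to win `pot`
    return bluff_fraction, call_fraction

# Classic result for a pot-sized bet: roughly one bluff for every two value bets,
# and the caller defends half the time.
bluffs, calls = gto_frequencies(pot=100, bet=100)
print(f"bluff with {bluffs:.0%} of the betting range, call {calls:.0%} of the time")
```

Because these frequencies depend only on the bet and pot sizes, not on what the opponent actually holds, a strategy built on them cannot be exploited; that is the sense in which the computer’s play was “optimal” without being intuitive.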

Such a misinterpretation of AI seems subtle and unimportant. But over time, spread out over different areas of life, misinterpretations of this type launch a cascade of effects that have serious psychosocial consequences. People are right to fear AI robots taking their jobs. They may be right to fear AI killer robots. But AI presents other, smaller dangers that are less exciting but more corrosive in the long run.

by Ronald W. Dworkin, The American Interest |  Read more:
Image: Wikimedia Commons

Everything Going Wrong in Okinawa

On 23 February 2016 Admiral Harry Harris, then Commander US Forces Pacific, testifying before the Senate Armed Services Committee, was asked how the construction of the Futenma Replacement Facility was progressing. This refers to the super airbase the Japanese Defense Ministry is building at Henoko in northern Okinawa to house the units of the First Marine Air Wing now deployed at Futenma Air Station, in crowded central Okinawa.

Admiral Harris, his voice betraying irritation, replied, “it’s . . . a little over two years late. It was going to be done by 2023, now we’re looking at 2025 . . .”

This made the front pages in Okinawa, though probably nowhere else. The next day Suga Yoshihide, Japan’s Chief Cabinet Secretary, was asked about this at a press conference. He wanted to say Admiral Harris was wrong, but attempted to put it more diplomatically: “It’s too early to say” – which amounts to the same thing.

Harris was indeed wrong, but not in the way Suga wanted his listeners to believe. A year before this, in 2015, the Okinawa Defense Bureau, the Defense Ministry’s branch in Okinawa, completed a report stating that their soil tests of the sea bottom of Oura Bay, scheduled to be filled to support the new airstrips, had yielded an N-value of zero. N-value is derived by dropping a 140 pound hammer on a hollow drill resting on the sea bottom. The number of blows required to drive it down six inches is the N-value. Thirty or more is considered a firm base. Zero means no blows were required; the drill sank of its own weight.

This information was kept from the Okinawan Government and public for two years, until an independent engineer managed to obtain a copy of the report. Judging from Admiral Harris’ statement, the information had also been kept from the US, and had not been taken into account in Harris’ (as we now know, wildly optimistic) “two years”. Before anything can be built on the “mayonnaise sea bottom”, as it is popularly known in Okinawa, it must be firmed up. The preferred way to do this is by implanting “sand piles” (pillars) into the slime. Huge hollow drills filled with sand are driven down until they reach bedrock. The drills are raised, the sand is left behind. The Okinawa Defense Bureau estimates that if this operation is repeated 77,000 times, the sea bottom will be sufficiently firm to begin construction. This is expected to take as much as five years. That means that 2025, the year Harris predicted the base will be completed, will be the year the sand pillar operation will be completed and sea wall construction on Oura Bay can begin – if all goes well.

If all goes well – and if Murphy’s law ceases to operate (Murphy’s law: if there is anything that could go wrong, it will).

But from the standpoint of the Okinawa Defense Bureau, everything is going wrong. First of all, they have failed to persuade (or to force) the Okinawans to give up their opposition to the new base, which they see as a danger, an environmental catastrophe and an insult. From the Governor’s office through the Prefectural Assembly through Okinawa’s two newspapers down to the daily sit-ins at various points where trucks can be blocked, from every direction, and using every non-violent tactic, including lawsuits, construction is being slowed. Then there is the fact that the site is surrounded by dozens of structures that violate FAA and DOD height regulations for airports. Then there are the two earthquake faults beneath the site, which the Defense Bureau has addressed by going into denial.

But it is on Oura Bay where Murphy’s law is doing the most damage. The Okinawa Defense Bureau’s soil tests have shown that in some places the mayonnaise sea bottom extends to 90 meters below sea level. Sand pillar implantation to 90 meters has never been attempted in Japan (some say, never in the world), nor do rigs exist capable of drilling to that depth. It’s not clear how the Okinawa Defense Bureau plans to deal with that – unless the comment by a government official that “maybe 60 meters will be good enough” can be considered a plan.

by Doug Lummis, Counterpunch |  Read more:
[ed. They don't want it, we can't build it. Imagine what that money could do for infrastructure in the US. See also: The Pentagon is Pledging to Reform Itself, Again. It Won’t. (Counterpunch). $1.4 trillion/two years.]

Yuval Noah Harari & Steven Pinker in Conversation


[ed. Fascinating, highly recommended.] 

José Calheiros (JACAC)
via:

Night Pier
via:

Friday, October 11, 2019

The Biggest Lie Tech People Tell Themselves — and the Rest of Us

Imagine you’re taking an online business class — the kind where you watch video lectures and then answer questions at the end. But this isn’t a normal class, and you’re not just watching the lectures: They’re watching you back. Every time the facial recognition system decides that you look bored, distracted, or tuned out, it makes a note. And after each lecture, it only asks you about content from those moments.

This isn’t a hypothetical system; it’s a real one deployed by a company called Nestor. And if you don’t like the sound of it, you’re not alone. Neither do the actual students.

When I asked the man behind the system, French inventor Marcel Saucet, how the students in these classes feel about being watched, he admitted that they didn’t like it. They felt violated and surveilled, he said, but he shrugged off any implication that it was his fault. “Everybody is doing this,” he told me. “It’s really early and shocking, but we cannot go against natural laws of evolution.”

As a reporter who covers technology and the future, I constantly hear variations of this line as technologists attempt to apply the theory Charles Darwin made famous in biology to their own work. I’m told that there is a progression of technology, a movement that is bigger than any individual inventor or CEO. They say they are simply caught in a tide, swept along in a current they cannot fight. They say it inevitably leads them to facial recognition (now even being deployed on children), smart speakers that record your intimate conversations, and doorbells that narc on your neighbors. They say we can’t blame these companies for the erosion of privacy or democracy or trust in public institutions — that was all going to happen sooner or later.

“When have we ever been able to keep the genie in the bottle?” they ask. Besides, they argue, people buy this stuff so they must want it. Companies are simply responding to “natural selection” by consumers. There is nobody to blame for this, they say. It’s as natural as gravity.

Perhaps no one states this belief more clearly than inventor and futurist Ray Kurzweil in his 2005 book The Singularity Is Near: “The ongoing acceleration of technology is the implication and inevitable result of what I call the law of accelerating returns, which describes the acceleration of the pace of and the exponential growth of the products of an evolutionary process.”

In fact, our world is shaped by humans who make decisions, and technology companies are no different.

To claim that these devices are the result of some kind of ever-improving natural process not only misunderstands how evolution works, but it also suggests that everything from biological weapons to fraudulent startups like Theranos to Juicero (the $400 machine that squeezed juice out of packets) is necessary and natural.

While these “innovations” range from the dangerous to the silly, they share a common thread: Nothing about them is “natural.” No natural process is creating a “smart” hairbrush or a “smart” flip flop or a “smart” condom. Or a Bluetooth-enabled toaster, a cryptocurrency from a photography company, or an internet-connected air freshener.

Evolution is a terrible metaphor for technology

Technologists’ desire to make a parallel to evolution is flawed at its very foundation. Evolution is driven by random mutation — mistakes, not plans. (And while some inventions may indeed be the result of mishaps, the decision of a company to patent, produce, and market those inventions is not.) Evolution doesn’t have meetings about the market, the environment, the customer base. Evolution doesn’t patent things or do focus groups. Evolution doesn’t spend millions of dollars lobbying Congress to ensure that its plans go unfettered.

In some situations, even if we can’t literally put a technological genie back in a bottle, we can artificially intervene to make sure the genie plays by specific rules.

There are clear laws about what companies can and can’t do in the realm of biological weapons. The FDA ensures drugs are tested for efficacy and safety before they can be sold. The USDA ensures new food research is done with care. We don’t let anybody frack or drill for oil or build nuclear power plants wherever they like. We don’t let just anybody make and sell cars or airplanes or guns.

So the assertion that technology companies can’t possibly be shaped or restrained with the public’s interest in mind is to argue that they are fundamentally different from any other industry. They’re not. (...)

This endless, punishing race in the name of “progress” is often what drives consumer behavior, too. Despite the “American dream” — security, safety, prosperity — being more and more out of reach for everyday Americans, the idea that it’s just around the corner drives people to purchase these products.

If you have the newest app, people think, your life will be easier: you’ll have more free time, more quality time. Commercials promise more backyard barbecues under sparklers and birthday surprise parties facilitated by internet-connected light bulbs.

And when we buy the products, tech companies take that as a green light to continue on their “inevitable” path, inching ever toward a world where Amazon knows exactly what you’re doing, thinking, feeling — perhaps even before you do. “It’s all a loop,” says Stark. “It’s weird. That’s what puts people in this bind. They think they should be able to have it all. They can’t, and technology is a kind of prophylactic to cope with this stuff.”

by Rose Eveleth, Vox | Read more:
Image: Zoë van Dijk

Jeff Bezos’s Master Plan

Where in the pantheon of American commercial titans does Jeffrey Bezos belong? Andrew Carnegie’s hearths forged the steel that became the skeleton of the railroad and the city. John D. Rockefeller refined 90 percent of American oil, which supplied the pre-electric nation with light. Bill Gates created a program that was considered a prerequisite for turning on a computer.

At 55, Bezos has never dominated a major market as thoroughly as any of these forebears, and while he is presently the richest man on the planet, he has less wealth than Gates did at his zenith. Yet Rockefeller largely contented himself with oil wells, pump stations, and railcars; Gates’s fortune depended on an operating system. The scope of the empire the founder and CEO of Amazon has built is wider. Indeed, it is without precedent in the long history of American capitalism.

Today, Bezos controls nearly 40 percent of all e-commerce in the United States. More product searches are conducted on Amazon than on Google, which has allowed Bezos to build an advertising business as valuable as the entirety of IBM. One estimate has Amazon Web Services controlling almost half of the cloud-computing industry—institutions as varied as General Electric, Unilever, and even the CIA rely on its servers. Forty-two percent of paper book sales and a third of the market for streaming video are controlled by the company; Twitch, its video platform popular among gamers, attracts 15 million users a day. Add The Washington Post to this portfolio and Bezos is, at a minimum, a rival to the likes of Disney’s Bob Iger or the suits at AT&T, and arguably the most powerful man in American culture.

I first grew concerned about Amazon’s power five years ago. I felt anxious about how the company bullied the book business, extracting ever more favorable terms from the publishers that had come to depend on it. When the conglomerate Hachette, with which I’d once published a book, refused to accede to Amazon’s demands, it was punished. Amazon delayed shipments of Hachette books; when consumers searched for some Hachette titles, it redirected them to similar books from other publishers. In 2014, I wrote a cover story for The New Republic with a pugilistic title: “Amazon Must Be Stopped.” Citing my article, the company subsequently terminated an advertising campaign for its political comedy, Alpha House, that had been running in the magazine.

Since that time, Bezos’s reach has only grown. To the U.S. president, he is a nemesis. To many Americans, he is a beneficent wizard of convenience and abundance. Over the course of just this past year, Amazon has announced the following endeavors: It will match potential home buyers with real-estate agents and integrate their new homes with Amazon devices; it will enable its voice assistant, Alexa, to access health-care data, such as the status of a prescription or a blood-sugar reading; it will build a 3-million-square-foot cargo airport outside Cincinnati; it will make next-day delivery standard for members of its Prime service; it will start a new chain of grocery stores, in addition to Whole Foods, which it already owns; it will stream Major League Baseball games; it will launch more than 3,000 satellites into orbit to supply the world with high-speed internet.

Bezos’s ventures are by now so large and varied that it is difficult to truly comprehend the nature of his empire, much less the end point of his ambitions. What exactly does Jeff Bezos want? Or, to put it slightly differently, what does he believe? Given his power over the world, these are not small questions. Yet he largely keeps his intentions to himself; many longtime colleagues can’t recall him ever expressing a political opinion. To replay a loop of his interviews from Amazon’s quarter century of existence is to listen to him retell the same unrevealing anecdotes over and over.

To better understand him, I spent five months speaking with current and former Amazon executives, as well as people at the company’s rivals and scholarly observers. Bezos himself declined to participate in this story, and current employees would speak to me only off the record. Even former staffers largely preferred to remain anonymous, assuming that they might eventually wish to work for a business somehow entwined with Bezos’s sprawling concerns.

In the course of these conversations, my view of Bezos began to shift. Many of my assumptions about the man melted away; admiration jostled with continued unease. And I was left with a new sense of his endgame.

Bezos loves the word relentless—it appears again and again in his closely read annual letters to shareholders—and I had always assumed that his aim was domination for its own sake. In an era that celebrates corporate gigantism, he seemed determined to be the biggest of them all. But to say that Bezos’s ultimate goal is dominion over the planet is to misunderstand him. His ambitions are not bound by the gravitational pull of the Earth. (...)

In a way, Bezos has already created a prototype of a cylindrical tube inhabited by millions, and it’s called Amazon.com. His creation is less a company than an encompassing system. If it were merely a store that sold practically all salable goods—and delivered them within 48 hours—it would still be the most awe-inspiring creation in the history of American business. But Amazon is both that tangible company and an abstraction far more powerful.

Bezos’s enterprise upends long-held precepts about the fundamental nature of capitalism—especially an idea enshrined by the great Austrian economist Friedrich Hayek. As World War II drew to its close, Hayek wrote the essay “The Use of Knowledge in Society,” a seminal indictment of centralized planning. Hayek argued that no bureaucracy could ever match the miracle of markets, which spontaneously and efficiently aggregate the knowledge of a society. When markets collectively set a price, that price reflects the discrete bits of knowledge scattered among executives, workers, and consumers. Any governmental attempt to replace this organic apparatus—to set prices unilaterally, or even to understand the disparate workings of an economy—is pure hubris.

Amazon, however, has acquired the God’s-eye view of the economy that Hayek never imagined any single entity could hope to achieve. At any moment, its website has more than 600 million items for sale and more than 3 million vendors selling them. With its history of past purchases, it has collected the world’s most comprehensive catalog of consumer desire, which allows it to anticipate both individual and collective needs. With its logistics business—and its growing network of trucks and planes—it has an understanding of the flow of goods around the world. In other words, if Marxist revolutionaries ever seized power in the United States, they could nationalize Amazon and call it a day.

What makes Amazon so fearsome to its critics isn’t purely its size but its trajectory. Amazon’s cache of knowledge gives it the capacity to build its own winning version of an astonishing array of businesses. In the face of its growth, long-dormant fears of monopoly have begun to surface—and Amazon has reportedly found itself under review by the Federal Trade Commission and the Department of Justice. But unlike Facebook, another object of government scrutiny, Bezos’s company remains deeply trusted by the public. A 2018 poll sponsored by Georgetown University and the Knight Foundation found that Amazon engendered greater confidence than virtually any other American institution. Despite Donald Trump’s jabs at Bezos, this widespread faith in the company makes for a source of bipartisan consensus, although the Democrats surveyed were a touch more enthusiastic than the Republicans were: They rated Amazon even more trustworthy than the U.S. military. In contrast to the dysfunction and cynicism that define the times, Amazon is the embodiment of competence, the rare institution that routinely works. (...)

In its current form, Amazon harkens back to Big Business as it emerged in the postwar years. When Charles E. Wilson, the president of General Motors, was nominated to be secretary of defense in 1953, he famously told a Senate confirmation panel, “I thought what was good for our country was good for General Motors, and vice versa.” For the most part, this was an aphorism earnestly accepted as a statement of good faith. To avert class warfare, the Goliaths of the day recognized unions; they bestowed health care and pensions upon employees. Liberal eminences such as John K. Galbraith hailed the corporation as the basis for a benign social order. Galbraith extolled the social utility of the corporation because he believed that it could be domesticated and harnessed to serve interests other than its own bottom line. He believed businesses behave beneficently when their self-serving impulses are checked by “countervailing power” in the form of organized labor and government.

Of course, those powers have receded. Unions, whose organizing efforts Amazon has routinely squashed, are an unassuming nub of their former selves; the regulatory state is badly out of practice. So while Amazon is trusted, no countervailing force has the inclination or capacity to restrain it. And while power could amass in a more villainous character than Jeff Bezos, that doesn’t alleviate the anxiety that accompanies such concentration. Amazon might be a vast corporation, with more than 600,000 employees, but it is also the extension of one brilliant, willful man with an incredible knack for bending the world to his values.

by Franklin Foer, The Atlantic |  Read more:
Image: Bloomberg/Landov via

Who Killed the American Arts?

The arts in America are dying. In the 20th century, Americans defined the world’s popular culture, but the 21st century world has no need of America’s arts. Through technology transfer, the world entertains itself with knock-offs like Bollywood and K-Pop. In the 20th century, Americans created a new art form in jazz and its derivatives, and turned Hollywood into the world’s dream factory. In the 21st century, African American music has collapsed into monotone misogyny, and digital sex (see Julie Bindel) is America’s real movie business. Americans are in the gutter, looking up at porn stars. And the rest of the world is barely looking, or listening, or reading at all. (...)

Everything is derivative and nostalgic. Nothing of note happened in painting or dance — or criticism, because the task of the American critic is to write obituaries and rewrite press releases. In music, Taylor Swift, once the Great White Hope of a dying industry, emitted a scrupulously bland album by committee. The jazz album of the year was, as it was last year, a studio offcut from John Coltrane, who died in 1967. The show, or what remained of it, was stolen by Lizzo, an obese but self-affirming squawker who, befitting an age of irony and multi-tasking, is the first person to twerk and play the flute at the same time. Meanwhile at the Alamo of high culture, 87-year-old John Williams marked the Tanglewood Festival’s 80th anniversary by perpetrating selections from Star Wars and Saving Private Ryan for an audience of equally geriatric and tasteless boomers.

In a dying culture, the best cases, like Wynton Marsalis and Bob Dylan in music, are curators of the Museum of American Greatness. The worst reflect a spiral into coarse nostalgia, as the needle wears out the groove: Stadium Country, Quentin Tarantino, the decay of fiction into self-help and affirmative action. The worst of all subordinate aesthetic values to political dogma, which is why it’s an offense to point out that the decline from Duke Ellington and Aretha Franklin to A$AP Rocky and Lizzo is a slide from civilization to barbarism.

by Dominic Green, The Spectator | Read more:
Image: MTV
[ed. See also: Who’s Got the Country Music Blues? (The American Conservative).]