Sunday, July 23, 2017

Oh, Elon. Building Infrastructure Doesn't Work Like That

Is this … is this really happening?

On Wednesday morning, Elon Musk made a strange announcement on Twitter: His Boring Company (yes, that's what it's called) had received “verbal government approval” to build an underground hyperloop, he said. The maglev-powered train thing would pass through New York, Philadelphia, and Baltimore before terminating in good old swampy Washington, DC. All in twenty-nine minutes.

It sounded momentous—but “verbal government approval” isn’t a thing. A White House spokesperson said the administration had conducted “positive conversations” with Musk and Boring Company executives, but declined to comment beyond that.

Musk also acknowledged the project has a ways to go. “A verbal yes is obviously not the same as a formal, written yes,” Musk wrote in a Twitter direct message to WIRED. “It will probably take another four to six months to get formal approval, assuming this receives support from the general public.”

Bad news, Elon, my friend: The White House doesn’t have much power when it comes to rubber stamping gigantic, multi-state infrastructure projects.

“It means effectively nothing,” says Adie Tomer, who studies metropolitan infrastructure at the Brookings Institution. “The federal government owns some land, but they don’t own the Northeast corridor land, and they don’t own the right-of-way.” Sure, having presidential backing isn’t bad—but it is far, far from the ballgame.

Even Musk's four- to six-month timeline seriously stretches it. Because here's what it actually takes to get approval to build gigantic, multi-billion dollar, multi-state infrastructure projects in the United States of America:

(Spoiler: Something nearing an act of God.)

by Aarian Marshall, Wired |  Read more:
Image: Hyperloop One
[ed. Some people believe God's already in the White House (including it seems, the present occupant).]

The Metaphysics of the Hangover

We commonly think of hangovers as the next-day result of too much alcohol. We overdo it the night before, and the following morning we pay. We develop flu-like symptoms. We get a headache; our joints hurt; it’s an unpleasant thing to stare too long at the light, which seems all too inclined to stare back—hard. Whatever optimism we might have stored away in the vault of our psyche seems to have disappeared. We’re down, sorry, sad, and grim. We feel as if we have succeeded in poisoning ourselves—and the word is that we have. The word toxic hides in the middle of intoxication, like a rat in a gift box. We’ve infected our bodies with toxins, and at first we got a happy ride. Some scientists speculate that the euphoria induced by drinking may come from the way alcohol summons forth energies to fight against the possibility that we’ve been poisoned. Being drunk, or even tipsy, thus understood, is elation as the defenders come roaring into the breach like a wave of charging knights. Banners flap, armor clangs, the hautboys sound in the air.

But then comes the morning, and it is time to pay. We arrive at the downside of the event. As high as we have mounted in delight, as the poet puts it, in dejection do we sink as low. That really does seem to be the case. The higher we’ve flown under the influence, the more down and dirty is the experience of the morning after.

There are a number of memorable literary accounts of the hangover, but none I’ve encountered outdoes Kingsley Amis’s in Lucky Jim. Jim is a young university instructor trying to find a place in the world. But the strain of seeking is considerable. One night Jim drinks more than he should and then quite a bit more after that. The next morning, he faces the hangover. Jim “stood brooding by his bed.… The light did him harm, but not as much as looking at things did; he resolved, having done it once, never to move his eyeballs again. A dusty thudding in his head made the scene before him beat like a pulse. His mouth had been used as a latrine by some small creature of the night, and then as its mausoleum. During the night, too, he’d somehow been on a cross-country run and then been expertly beaten up by secret police. He felt bad.”

Indeed, he did. Generally, one can’t in good taste laugh at someone, even a fictional someone, who feels quite as bad as Jim does. But the hangover is different from most other kinds of suffering. As generations of mothers and fathers have said to their wayward kids about one sorrow or another, “You brought this on yourself.” If you hadn’t filled the third glass, then the fourth and then the—how many were there?—you wouldn’t have that washcloth on your head and it wouldn’t hurt quite so much as it does to look at things.

But really, wasn’t it worth it? The night before, it was a pleasure to look at things. It was a particular pleasure to look at a comely someone, and maybe be looked at in return. The possibilities seemed endless, or at least far improved over what they had been in the afternoon. And everything else you looked at, the barstools and the tables and even the beer glass, didn’t seem quite so alien, quite so other as they usually do. Somehow objects gave off an encouraging, almost amiable glow. And by contrast with that dusty thudding in the head the morning after, the night before there had been a serene and steady kind of subliminal sound—they don’t call it “getting a buzz on” for nothing. Even the thoughts that came through that serenely humming brain were good ones, kind and hopeful and upbeat. Wallace Stevens speaks of “the hum of thoughts evaded in the mind”—well, yes, that too. The bad thoughts were evaded, or at least didn’t seem half so bad in the roseate glow of a few drinks. (...)

Has anyone ever taken up the subject of the religious hangover? Is it possible that when the service is over, the spirituals have been sung, and the visitation is at an end, there is a sense of loss? The inner barriers have broken down and the worshiper has been made whole. But maybe the next day there’s also (to borrow from Lenson) a “vengeful rebuilding” of the interior walls not unlike the process that follows a bender. Or maybe that hangover comes only when the communicant has been thoroughly disillusioned with the faith. It does happen—people given to religious ecstasy are well known to move from one spiritual venue to the other in desperate search of inspiration. (One might more cruelly say that they move in desperate search of a fix.) Is there a religious hangover? Is there a morning after of faith?

I’d dare say that there is another area of experience that produces a hangover with some frequency—the experience of sexual love. Perhaps disillusionment in love comes in two doses, reduced and full strength. We’ve all heard about the more modest version. Freud unpleasantly says that every act of sexual enjoyment decreases the value the lover confers upon the beloved. Not uncommon among males, to be sure. But in our world, rife with more sexually adventurous females than Freud could have imagined, it may be the case with some women too. The larger dose comes at the end of an affair—the breakup, and the grief and sorrow that follow. One mourns the loss of the beloved. One mourns the loss of love in one’s life. Might not that be a sort of hangover too?

Once again, boundaries are being cruelly reconstructed. Once you and I were one, now we are, painfully, sadly, no longer united. We are two different nations, perhaps warring, perhaps at aggrieved peace. Does a fractured love affair induce some headaches, some lethargy, despondency, hatred for the world and all it contains? I dare say it often does. All philosophies seek to dominate the world, and often they should simply back off. But the philosophy of the hangover may proudly assert that hangovers are more common and pervasive than most people imagine. And—here is the crucial point—the hangover is not only an aftermath of booze and drugs. The hangover may also pertain to failed idealizations of many sorts: Religious disillusion (or fatigue) may qualify as a sort of hangover; erotic loss or disappointment may also be described with reference to the philosophy of the morning after.

Is there a political hangover? Perhaps the experience of helping to elect a candidate who looks like a redeemer but is simply a skilled player, and who has no more interest in rescuing the world than he does in flying to the moon, is one that ends in something akin to the hangover. Maybe great art leaves a hangover, for producer and consumer alike; maybe battle, even what appeared to be heroic battle, sometimes does.

Any aspiration that takes one out of straight and narrowly normal life may well end in hangover-like disillusion. Not to admire anything, Horace famously said, is the only way to feel really good about yourself. Do you want to live a contented, stable, normal, productive life? Don’t drink. And don’t engage in any of the activities that can be akin to drinking. Don’t fall in love, don’t swoon for God, don’t try to change the world, don’t attempt to dissolve the state and remake it again.

by Mark Edmundson, Hedgehog Review |  Read more:
Image: The Day After by Edvard Munch, 1894-95 via Wikipedia

Saturday, July 22, 2017

Friday, July 21, 2017

The Airports of the Future Are Here

No matter how well-regarded a particular airport happens to be, the slog from curb to cabin is pretty much the same wherever you go. A decades-old paradigm of queues, security screens, snack vendors, and gate-waiting prevails—the only difference is the level of stress. Transiting a modern hub such as Munich or Seoul is more easily endured than threading your way through the perpetual construction zones that pass for airports around New York.

The sky portal of the 2040s, however, is likely to be free of such delights. Many of us will be driven to the terminal by autonomous cars; our eyes, faces, and fingers will be scanned; and our bags will have a permanent ID that allows them to be whisked from our homes before we even set out. Some of these airports will no longer be relegated to the outskirts of town—they will merge with city centers, becoming new destination “cities” within a city for people without travel plans. Shall we get dinner, watch a movie, see a concert, shop? People will choose to go to the airport. Your employer may even relocate there.

These are the types of infrastructure investments and technologies that will, in theory, allow airports to largely eradicate the dreaded waiting. Travelers will migrate around the terminal faster and see fewer walls and physical barriers thanks to the abundance of sophisticated sensors, predicts Dallas-based architecture and design firm Corgan. The company recently assembled its concepts of how airports will evolve, based on extensive research of passenger experiences at various airports and the greater role technology may play.

One day, the airport will know “everything about everyone moving in the airport,” said Seth Young, director of the Center for Aviation Studies at Ohio State University. The goal will be to deploy “a security infrastructure that’s constantly screening people from the door to the gate, and not having this toll-booth mentality,” he said. “We know that 99.9 percent of the passengers are clean, so why are we wasting time screening all of those?”

Much of this technology is likely to be seen outside the U.S. first, given the advanced age of most American airports and the more robust infrastructure funding available in Asia, the Middle East, and Europe. In the 2017 Skytrax awards, only 14 airports in the U.S. even made the top 100.

One can look to Singapore for a glimpse of how airports will change over the next 20 years. Changi Airport, a pioneer of the industry, recently opened a “living lab” to pursue further innovation. In March, it was named the world’s best airport for the fifth consecutive year by Skytrax.

One reason airports tend to look and function remarkably alike is that they’re designed to accommodate air travel infrastructure—security, passenger ticketing, baggage, ground transport—with the primary concerns being safety and minimal overhead for their tenant airlines.

“Today it’s what you call a transient space—it’s not a space to be in, it’s a space for you to move through,” said Jonathan Massey, the aviation leader for Corgan, which has overseen the design of major terminals worldwide, including Atlanta, Dallas, Shanghai, Dalian, China, and Los Angeles. “We need to evolve the terminals into being little cities.”

by Justin Bachman, Bloomberg |  Read more:
Image: Corgan

Why the Web Needs 3Quarks Daily

[ed. And Duck Soup.]

The need for filters, aggregators and curators to navigate the web isn’t new. Arts and Letters Daily, the inspiration for 3QD, was founded by the late Denis Dutton way back in 1998. It in turn was inspired by the news aggregator, Drudge Report, which started in 1995. But each of these had their own niche (literary humanities and conservative politics respectively) while Raza envisioned something more all-embracing – which ironically turned out to be a niche of its own. His plan was to “collect only serious articles of intellectual interest from all over the web but never include merely amusing pieces, clickbait, or even the news of the day… to find and post deeper analysis… and explore the world of ideas… [to] cover all intellectual fields that might be of interest to a well-educated academic all-rounder without being afraid of difficult material… [and to] have an inclusive attitude about what is interesting and important and an international outlook, avoiding America-centrism in particular.” (...)

The editors gradually withdrew more and more from commenting on what they were sharing. “We moved”, says Varghese, “in a direction opposite to blogging, meaning we moved towards some virtual self-abnegation in favour of just letting the piece speak for itself. So much of blogging was and is about the blogger’s take on things.” Today, a post contains nothing more than a headline, a block quote from the article, an image and a link to the original publication.

Yet this seemingly spartan layout belies the fact that curation is authorship, just not of the texts themselves. Maria Popova, whose blog ‘Brain Pickings’ was made a part of the Library of Congress permanent archive as a resource for researchers of the future, has become an unofficial spokesperson for the community on this issue. “If information discovery plays such a central role in how we make sense of the world in this new media landscape,” she wrote in an essay for Nieman Lab, “then it is a form of creative labour in and of itself. And yet our current normative models for crediting this kind of labour are completely inadequate, if they exist at all.”

Today, information discovery comes in all shapes and sizes – from the New Yorker Minute that does a number on the New Yorker, to Amazon’s book recommendation behemoth. There isn’t a doubt that the latter is a remarkable feat of software engineering, as are the algorithms employed by Netflix, Spotify, Facebook and Google. Netizens depend on these wonders – relying on them to suck in chaos and spit out order.

Yet these same sites are also examples of total moral capitulation. Underlying the logic of many algorithms is the idea that to find what people want, we need only look for what similar people have wanted. Apart from engendering near total surveillance, a mechanism built around the urgency of giving people what they want ignores the importance (or even the existence) of a responsibility to give people what they might need. This isn’t a surprising stance for profit-driven corporations to take. However, as citizens who value democratic access to resources and knowledge, it’s dangerous to allow ourselves to become complacent with gatekeepers who don’t acknowledge their own roles as stewards or see their power as weighted by responsibility to the community. It’s the logic of giving people what they want that’s made virality the metric for deciding what makes the news and triggered the current race for the bottom that has marked the new culture wars.

by Thomas Manuel, The Wire |  Read more:
Image: 3 Quarks Daily
[ed. Sounds like Duck Soup, but without the irreverence and mysterious disappearances that readers of this blog enjoy. I wasn't aware of 3QD for a while but it's amazing how similar the templates are, and how few sites like these exist.]

Thursday, July 20, 2017

Politics 101

The Supreme Court and the Law of Motion

Back in the late ’90s, a Supreme Court case on the validity of a police search produced five separate opinions and left considerable doubt about who would ultimately benefit from the decision, prosecutors or criminal defendants. In our stories the next day, my opposite number at The Washington Post and I described the decision, Minnesota v. Carter, quite differently. At a reception a few days later, we encountered Justice Stephen G. Breyer, who had written one of the separate opinions. We told him about our confusion and differing interpretations.

Which of us was right, we asked. Who actually won the case?

“I don’t know,” Justice Breyer replied.

“How can you not know?” I pushed back.

The justice replied: “It depends on what the lower courts make of it.”

At first, that answer struck me as coy, even evasive, but the more I thought about it — and I’ve actually thought about it over the years — the more accurate, even profound, it has come to appear. Ours is a common-law system, in which one case follows another and legal doctrine emerges from the crucible of decided cases. Accepting only about 65 cases a year, the Supreme Court sits at the top of a very big pyramid: thousands of cases pass through the system every year, and judges are tasked with finding and applying relevant Supreme Court precedents to new cases with facts that differ, slightly or quite a lot, from those in the original case.

I thought about the conversation with Justice Breyer as I read a decision the Supreme Court issued on the final day of its term. In Trinity Lutheran Church v. Comer, the court ruled that a church had a constitutional right to be considered on the same basis as secular institutions for a state grant to improve the safety of its preschool playground. The decision, with a majority opinion by Chief Justice John G. Roberts Jr., was a sharp departure from the line the Supreme Court has maintained against the direct funding of churches (the church had described the preschool as part of its religious mission, so there was no dispute in the case that the school fully shared the church’s identity).

Like most other states, Missouri, where the case arose, has a constitutional prohibition against public money going to churches. Invoking that provision, Missouri had deemed Trinity Lutheran categorically ineligible for the playground grant, although its preschool otherwise met the specified criteria. That exclusion put the church “to the choice between being a church and receiving a government benefit,” Chief Justice Roberts wrote. The court held that the exclusion amounted to discrimination against religion that was “odious to our Constitution,” a violation of the church’s First Amendment right to the free exercise of religion.

Strong words and very broad implications, but the court stopped short of taking the obvious next step of declaring unconstitutional not only Missouri’s handling of a specific grant program, but also the state constitutional provision on which the state’s action was based. Instead of playing out the implications of its decision, the majority opinion contained a footnote: “This case involves express discrimination based on religious identity with respect to playground resurfacing. We do not address religious uses of funding or other forms of discrimination.”

To call this footnote odd is an understatement. The Supreme Court doesn’t usually summon the majestic, if opaque, phrases of the Constitution to swat a mouse (in this case, a program to use recycled tires as a playground surface). Of the seven justices in the majority (only Justices Sonia Sotomayor and Ruth Bader Ginsburg dissented), only four subscribed to the footnote: Justices Anthony M. Kennedy, Samuel A. Alito Jr., and Elena Kagan, in addition to the chief justice. Justice Breyer concurred only in the judgment, so the footnote was not part of an opinion that he signed. And Justices Clarence Thomas and Neil M. Gorsuch objected that the footnote narrowed the holding of the case unrealistically.

So the footnote doesn’t even speak for a majority of the nine-member court. Chief Justice Roberts probably added it to satisfy the demand of Justice Kagan or Justice Kennedy, or both, for some limitation on the decision; it was crucial to hold them, even at the price of alienating Justices Gorsuch and Thomas, who were going to stay with the majority in all other respects. No matter what the footnote meant when the decision was issued on June 26, the question now is: What will it mean in the future? And that brings me back to Justice Breyer’s long-ago answer: It depends on what the lower courts make of it.

History offers a lesson: There is a momentum to Supreme Court decisions, and efforts to cabin the logical progression of legal doctrine will fail if the political and cultural forces that led to the doctrine in the first place remain in play. It’s Newton’s law of motion in the legal context: A doctrine in motion will stay in motion unless met by an outside force — a backlash or a change of cast. The steps can be small, but they can add up to giant steps.

by Linda Greenhouse, NY Times |  Read more:
Image: Stephen Crowley

Dave McKean, Cover art for issue #11 of The Sandman
via:

Wednesday, July 19, 2017

Kurt Vonnegut Walks Into a Bar

I was on the corner of Third Avenue and Forty-eighth Street, and Kurt Vonnegut was coming toward me, walking his big, loose-boned walk. It was fall and turning cold and he looked a little unbalanced in his overcoat, handsome but tousled, with long curly hair and a heavy mustache that sometimes hid his grin. I could tell he saw me by his shrug, which he sometimes used as a greeting.

I was on my way to buy dinner for some Newsweek writers who were suspicious of me as their new assistant managing editor. I had been brought in from Rolling Stone, and no one at Newsweek had heard of me. I didn’t know them either, but I knew Kurt, who was one of the first people I met when I moved to New York. We were neighbors on Forty-eighth Street, where he lived in a big townhouse in the middle of the block, and he’d invite me over for drinks. I had gotten him to contribute to Rolling Stone by keeping an eye out for his speeches and radio appearances and then suggesting ways they could be retooled as essays.

“Come have dinner,” I said. “I’ve got some Newsweek writers who would love to meet you.”

“Not in the mood,” Kurt said.

“They’re fans,” I said. “It’s part of your job.”

Kurt lit a Pall Mall and gave me a look, one of his favorites, amused but somehow saddened by the situation. He could act, Kurt.

“Think of it as a favor to me,” I said. “They’re not sure about me, and I’ve edited you.”

“Sort of,” he said, and I knew he had already had a couple drinks.

He never got mean, but he got honest.

“What else are you doing for dinner?” I said, knowing he seldom made plans.

“The last thing I need is ass kissing,” Kurt said.

“That’s what I’m doing right now.”

“They’ll want to know which novel I like best.”

“Cat’s Cradle,” I said.

“Wrong.” He flipped the Pall Mall into Forty-eighth Street, and we started walking together toward the restaurant.

The writers were already at the table, drinks in front of them. They looked up when we came in, surprised to see Kurt with me. There were six or eight of them, including the columnist Pete Axthelm, who was my only ally going into Newsweek because I knew him from Runyon’s, a bar in our neighborhood where everyone called him Ax.

I introduced Kurt around.

“Honored,” Ax said, or something like that, and the ass kissing began.

by Terry McDonell, Electric Lit | Read more:
Image: uncredited

The Limitations of Deep Learning

Deep learning: the geometric view

The most surprising thing about deep learning is how simple it is. Ten years ago, no one expected that we would achieve such amazing results on machine perception problems by using simple parametric models trained with gradient descent. Now, it turns out that all you need is sufficiently large parametric models trained with gradient descent on sufficiently many examples. As Feynman once said about the universe, "It's not complicated, it's just a lot of it".

In deep learning, everything is a vector, i.e. everything is a point in a geometric space. Model inputs (be they text, images, etc.) and targets are first "vectorized", i.e. turned into some initial input vector space and target vector space. Each layer in a deep learning model applies one simple geometric transformation to the data that goes through it. Together, the chain of layers of the model forms one very complex geometric transformation, broken down into a series of simple ones. This complex transformation attempts to map the input space to the target space, one point at a time. This transformation is parametrized by the weights of the layers, which are iteratively updated based on how well the model is currently performing. A key characteristic of this geometric transformation is that it must be differentiable, which is required in order for us to be able to learn its parameters via gradient descent. Intuitively, this means that the geometric morphing from inputs to outputs must be smooth and continuous—a significant constraint.
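This "chain of simple geometric transformations" can be sketched in a few lines of plain NumPy (the layer sizes and random data here are purely illustrative): each layer is an affine map followed by a smooth nonlinearity, and the model as a whole is nothing more than their composition.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))          # 4 input points living in a 3-D input space

# Each layer is one simple, differentiable geometric transformation.
W1, b1 = rng.normal(size=(3, 5)), np.zeros(5)
W2, b2 = rng.normal(size=(5, 2)), np.zeros(2)

h = np.tanh(x @ W1 + b1)             # layer 1: affine map, then smooth squashing
y = h @ W2 + b2                      # layer 2: affine map into a 2-D target space

# The whole model is the chain y = f2(f1(x)); because every step is smooth
# and differentiable, its weights can be adjusted by gradient descent.
```

Stacking more such layers makes the composite transformation more intricate, but never changes its nature: it remains one continuous, differentiable morphing of the input space.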

The whole process of applying this complex geometric transformation to the input data can be visualized in 3D by imagining a person trying to uncrumple a paper ball: the crumpled paper ball is the manifold of the input data that the model starts with. Each movement operated by the person on the paper ball is similar to a simple geometric transformation operated by one layer. The full uncrumpling gesture sequence is the complex transformation of the entire model. Deep learning models are mathematical machines for uncrumpling complicated manifolds of high-dimensional data.

That's the magic of deep learning: turning meaning into vectors, into geometric spaces, then incrementally learning complex geometric transformations that map one space to another. All you need are spaces of sufficiently high dimensionality in order to capture the full scope of the relationships found in the original data.

The limitations of deep learning

The space of applications that can be implemented with this simple strategy is nearly infinite. And yet, many more applications are completely out of reach for current deep learning techniques—even given vast amounts of human-annotated data. Say, for instance, that you could assemble a dataset of hundreds of thousands—even millions—of English language descriptions of the features of a software product, as written by a product manager, as well as the corresponding source code developed by a team of engineers to meet these requirements. Even with this data, you could not train a deep learning model to simply read a product description and generate the appropriate codebase. That's just one example among many. In general, anything that requires reasoning—like programming, or applying the scientific method—long-term planning, and algorithmic-like data manipulation, is out of reach for deep learning models, no matter how much data you throw at them. Even learning a sorting algorithm with a deep neural network is tremendously difficult.

This is because a deep learning model is "just" a chain of simple, continuous geometric transformations mapping one vector space into another. All it can do is map one data manifold X into another manifold Y, assuming the existence of a learnable continuous transform from X to Y, and the availability of a dense sampling of X:Y to use as training data. So even though a deep learning model can be interpreted as a kind of program, conversely, most programs cannot be expressed as deep learning models—for most tasks, either there exists no corresponding practically-sized deep neural network that solves the task, or even if there exists one, it may not be learnable, i.e. the corresponding geometric transform may be far too complex, or there may not be appropriate data available to learn it.

Scaling up current deep learning techniques by stacking more layers and using more training data can only superficially palliate some of these issues. It will not solve the more fundamental problem that deep learning models are very limited in what they can represent, and that most of the programs that one may wish to learn cannot be expressed as a continuous geometric morphing of a data manifold.

The risk of anthropomorphizing machine learning models

One very real risk with contemporary AI is that of misinterpreting what deep learning models do, and overestimating their abilities. A fundamental feature of the human mind is our "theory of mind", our tendency to project intentions, beliefs and knowledge on the things around us. Drawing a smiley face on a rock suddenly makes it "happy"—in our minds. Applied to deep learning, this means that when we are able to somewhat successfully train a model to generate captions to describe pictures, for instance, we are led to believe that the model "understands" the contents of the pictures, as well as the captions it generates. We then proceed to be very surprised when any slight departure from the sort of images present in the training data causes the model to start generating completely absurd captions. (...)

Humans are capable of far more than mapping immediate stimuli to immediate responses, like a deep net, or maybe an insect, would do. They maintain complex, abstract models of their current situation, of themselves, of other people, and can use these models to anticipate different possible futures and perform long-term planning. They are capable of merging together known concepts to represent something they have never experienced before—like picturing a horse wearing jeans, for instance, or imagining what they would do if they won the lottery. This ability to handle hypotheticals, to expand our mental model space far beyond what we can experience directly, in a word, to perform abstraction and reasoning, is arguably the defining characteristic of human cognition. I call it "extreme generalization": an ability to adapt to novel, never-experienced-before situations, using very little data or even no new data at all.

This stands in sharp contrast with what deep nets do, which I would call "local generalization": the mapping from inputs to outputs performed by deep nets quickly stops making sense if new inputs differ even slightly from what they saw at training time. Consider, for instance, the problem of learning the appropriate launch parameters to get a rocket to land on the moon. If you were to use a deep net for this task, whether training using supervised learning or reinforcement learning, you would need to feed it with thousands or even millions of launch trials, i.e. you would need to expose it to a dense sampling of the input space, in order to learn a reliable mapping from input space to output space. By contrast, humans can use their power of abstraction to come up with physical models—rocket science—and derive an exact solution that will get the rocket on the moon in just one or a few trials. Similarly, if you developed a deep net controlling a human body, and wanted it to learn to safely navigate a city without getting hit by cars, the net would have to die many thousands of times in various situations until it could infer that cars are dangerous, and develop appropriate avoidance behaviors. Dropped into a new city, the net would have to relearn most of what it knows. On the other hand, humans are able to learn safe behaviors without having to die even once—again, thanks to their power of abstract modeling of hypothetical situations. (...)
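Local generalization is easy to see in a toy experiment (a plain-NumPy sketch; the architecture, learning rate, and iteration count here are illustrative choices, not anything prescribed by the text): a small ReLU network trained on y = x² over [-2, 2] fits that interval closely, but because a ReLU net extrapolates linearly, its predictions drift far from the true parabola as soon as the input leaves the training range.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-2.0, 2.0, 200).reshape(-1, 1)   # dense sampling of the input space
y = x ** 2                                        # target function

# One-hidden-layer ReLU net, trained by full-batch gradient descent.
W1, b1 = rng.normal(0, 1, (1, 32)), np.zeros(32)
W2, b2 = rng.normal(0, 0.5, (32, 1)), np.zeros(1)
lr = 0.01

for _ in range(10000):
    h = np.maximum(x @ W1 + b1, 0)               # hidden ReLU activations
    pred = h @ W2 + b2
    err = pred - y                               # gradient of 0.5 * MSE w.r.t. pred
    gW2, gb2 = h.T @ err / len(x), err.mean(axis=0)
    dh = (err @ W2.T) * (h > 0)                  # backprop through the ReLU
    gW1, gb1 = x.T @ dh / len(x), dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

in_err = np.abs(pred - y).mean()                 # error inside the training range

x_out = np.array([[5.0]])                        # well outside the training range
pred_out = (np.maximum(x_out @ W1 + b1, 0) @ W2 + b2).item()
out_err = abs(pred_out - 25.0)                   # true value of x**2 at x = 5
```

Inside [-2, 2] the fit is close; at x = 5 the network's piecewise-linear extrapolation misses x² by a wide margin. A person who abstracts "it's a parabola" from the same data generalizes to x = 5 at no extra cost.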

Take-aways

Here's what you should remember: the only real success of deep learning so far has been the ability to map space X to space Y using a continuous geometric transform, given large amounts of human-annotated data. Doing this well is a game-changer for essentially every industry, but it is still a very long way from human-level AI.

To lift some of these limitations and start competing with human brains, we need to move away from straightforward input-to-output mappings, and on to reasoning and abstraction. A likely appropriate substrate for abstract modeling of various situations and concepts is that of computer programs. We have said before (Note: in Deep Learning with Python) that machine learning models could be defined as "learnable programs"; currently we can only learn programs that belong to a very narrow and specific subset of all possible programs. But what if we could learn any program, in a modular and reusable way? Let's see in the next post what the road ahead may look like.
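What "learning programs" could mean, in the most minimal sense, can be sketched as follows (this example is mine, not from the post; the DSL and function names are invented): brute-force search over a tiny domain-specific language can recover a modular, exactly-generalizing program from just a handful of input/output examples—no dense sampling required.

```python
import itertools

# Minimal sketch of "models as learnable programs": enumerate compositions
# of primitive functions until one explains all input/output examples.
PRIMITIVES = {
    "inc": lambda x: x + 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def search(examples, max_depth=3):
    """Return the first composition of primitives consistent with all examples."""
    names = list(PRIMITIVES)
    for depth in range(1, max_depth + 1):
        for combo in itertools.product(names, repeat=depth):
            def run(x, combo=combo):
                for name in combo:
                    x = PRIMITIVES[name](x)
                return x
            if all(run(x) == y for x, y in examples):
                return combo
    return None

# Three examples suffice to recover the program f(x) = (x + 1) ** 2.
print(search([(1, 4), (2, 9), (3, 16)]))   # ('inc', 'square')
```

Real program synthesis would need a far richer language and a smarter search than brute force, but the found program is reusable and generalizes exactly to inputs it never saw—precisely what a learned geometric mapping cannot do.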

You can read the second part here: The future of deep learning.

by Francois Chollet, Keras Blog |  Read more:
Image: Getty

Long Strange Trip

Amir Bar-Lev’s rockumentary, Long Strange Trip, about the Grateful Dead, is aptly named for what is arguably the band’s most famous lyric: What a long, strange trip it’s been. The film takes you on a four-hour ride (much like the band's live shows), but this is not just another indulgent music doc.

Executive-produced by Martin Scorsese, the film digs deeply into the bizarre phenomenon that surrounded “The Dead” for decades—obsessive fans, called Deadheads, became a cult-like following that elevated the band’s ringmaster, Jerry Garcia (Aug. 1, 1942–Aug. 9, 1995), to a status he never wanted.

The must-see film includes 17 interviews, 1,100 rare photos and loads of footage you’ve never seen. Deadheads will be ecstatic. Bar-Lev doesn’t tell you what to think. Instead, he offers many points of view. One theory is that the die-hard Deadheads were the major cause of Garcia’s descent into heroin. I didn’t buy that, so I reached out to Grateful Dead insider Dennis McNally, whose book, A Long Strange Trip: The Inside History of the Grateful Dead, provided Bar-Lev with much of the band’s story. McNally spent 30 years with the band, beginning when Garcia invited him to become their biographer in 1981.

I asked McNally whether he thought it was the Deadheads who drove Garcia to abuse heroin, or whether he felt, as I do, that it was a progression from one addiction to another. McNally answered:

“I don’t think there’s an inherent progression [of addiction], I mean everybody starts with milk, too, you know? He turned to self-medication for any number of reasons…. His father died when he was four, he didn’t get the attention from his mother that he felt he deserved. Eventually, yes, but not specifically the fame. It was the responsibility. Jerry wanted to be Huckleberry Finn, well, if Huckleberry Finn was allowed to smoke joints and play guitar and cruise down the river on a raft.”

McNally pointed out that Garcia “employed 50 people, me among them. We expected paychecks every couple of weeks. There was a weight of responsibility on him for employees, their families, but also the million Deadheads who were addicted to the music and the shows. They expected him to come out and play 80 shows a year. That wore on him. He was 53 years old and a walking candidate for a heart attack. Still smoked cigarettes, had a terrible diet. He was a diabetic who did not take it seriously.” (...)

The movie mimicked Garcia’s life. It began as a celebration but ended with a trip down the dark cellar of no return. Garcia probably didn’t set out to become a heroin addict. Maybe he just thought he could handle it. Or maybe, like my own drug use, just got to a point where he wanted out. Many people didn’t know the flip side of his jolly exterior was a dark depression.

The Dead’s casualties also included Ron “Pigpen” McKernan, who drank himself to death in 1973 at age 27. Keith Godchaux died at age 32 in 1980 from a car crash after he and his friend Courtenay Pollock had partied for hours to celebrate Godchaux’s birthday. The intoxicated driver—Pollock—survived the accident. Brent Mydland, mostly known as a drinker, died from a “speedball” (morphine and cocaine) in 1990. After Mydland’s death, keyboardist Vince Welnick joined the band; he died by suicide in 2006. Phil Lesh’s drug use led to his contracting hepatitis C. In the fall of 1998, his life was saved by a liver transplant.

Next I tracked down former president of Warner Bros. Records, Joe Smith, the executive who first signed the Dead. His presence brought a lot of fun into the film during the celebratory first half. “They were totally insane at times,” said Smith. “Trying to corral them was a very difficult thing, but we developed a friendship and Jerry Garcia was very bright. They were all bright. I established relationships with all of them, but not without difficulty because they didn’t want relationships. They were stoned most of the time. Phil Lesh was a particularly difficult guy.”

“How so?” I asked.

“He disputed and contested everything. One time, when I was trying to promote them, he said, ‘No. Let’s go out and record 30 minutes of heavy air on a smoggy day in L.A. Then we’re gonna record 30 minutes of clear air on a beautiful day, and we’ll mix it and that’ll be a rhythm soundtrack.’”

by Dorri Olds, The Fix |  Read more:
Image: Michael Conway

Tuesday, July 18, 2017

Letting Robots Teach Schoolkids

For all the talk about whether robots will take our jobs, a new worry is emerging, namely whether we should let robots teach our kids. As the capabilities of smart software and artificial intelligence advance, parents, teachers, teachers’ unions and the children themselves will all have stakes in the outcome.

I, for one, say bring on the robots, or at least let us proceed with the experiments. You can imagine robots in schools serving as pets, peers, teachers, tutors, monitors and therapists, among other functions. They can store and communicate vast troves of knowledge, or provide a virtually inexhaustible source of interactive exchange on any topic that can be programmed into software.

But perhaps more important in the longer run, robots could also bring many introverted, disabled, or non-conforming children into greater classroom participation. They are less threatening, always available, and they never tire or lose patience.

Human teachers sometimes feel the need to bully or put down their students. That’s a way of maintaining classroom control, but it also harms children and discourages learning. A robot in contrast need not resort to tactics of psychological intimidation.

The pioneer in robot education so far is, not surprisingly, Singapore. The city-state has begun experimenting with robots at the kindergarten level, mostly as instructor aides that read stories and teach social interactions. In the U.K., researchers have developed a robot to help autistic children better learn how to interact with their peers.

I can imagine robots helping non-English-speaking children make the transition to bilingualism. Or how about using robots in Asian classrooms where the teachers themselves do not know enough English to teach the language effectively?

A big debate today is how we can teach ourselves to work with artificial intelligence, so as to prevent eventual widespread technological unemployment. Exposing children to robots early, and having them grow accustomed to human-machine interaction, is one path toward this important goal.

In a recent Financial Times interview, Sherry Turkle, a professor of social psychology at MIT, and a leading expert on cyber interactions, criticized robot education. “The robot can never be in an authentic relationship," she said. "Why should we normalize what is false and in the realm of [a] pretend relationship from the start?” She’s opposed to robot companions more generally, again for their artificiality.

Yet K-12 education itself is a highly artificial creation, from the chalk to the schoolhouses to the standardized achievement tests, not to mention the internet learning and the classroom TV. Thinking back on my own experience, I didn’t especially care if my teachers were “authentic” (in fact, I suspected quite a few were running a kind of personality con), provided they communicated their knowledge and radiated some charisma.  (...)

Keep in mind that robot instructors are going to come through toys and the commercial market in any case, whether schools approve or not. Is it so terrible an idea for some of those innovations to be supervised by, and combined with, the efforts of teachers and the educational establishment?

by Tyler Cowen, Bloomberg |  Read more:
Image: Nigel Treblin/Getty Images
[ed. See also: Give robots an 'ethical black box' to track and explain decisions]

Sunday, July 16, 2017

Portugal. The Man


Natsunoya Tea House

photo: markk

Saturday, July 15, 2017

Sir Sly

Friday, July 14, 2017

Trump's Russian Laundromat

In 1984, a Russian émigré named David Bogatin went shopping for apartments in New York City. The 38-year-old had arrived in America seven years before, with just $3 in his pocket. But for a former pilot in the Soviet Army—his specialty had been shooting down Americans over North Vietnam—he had clearly done quite well for himself. Bogatin wasn’t hunting for a place in Brighton Beach, the Brooklyn enclave known as “Little Odessa” for its large population of immigrants from the Soviet Union. Instead, he was fixated on the glitziest apartment building on Fifth Avenue, a gaudy, 58-story edifice with gold-plated fixtures and a pink-marble atrium: Trump Tower.

A monument to celebrity and conspicuous consumption, the tower was home to the likes of Johnny Carson, Steven Spielberg, and Sophia Loren. Its brash, 38-year-old developer was something of a tabloid celebrity himself. Donald Trump was just coming into his own as a serious player in Manhattan real estate, and Trump Tower was the crown jewel of his growing empire. From the day it opened, the building was a hit—all but a few dozen of its 263 units had sold in the first few months. But Bogatin wasn’t deterred by the limited availability or the sky-high prices. The Russian plunked down $6 million to buy not one or two, but five luxury condos. The big check apparently caught the attention of the owner. According to Wayne Barrett, who investigated the deal for the Village Voice, Trump personally attended the closing, along with Bogatin.

If the transaction seemed suspicious—multiple apartments for a single buyer who appeared to have no legitimate way to put his hands on that much money—there may have been a reason. At the time, Russian mobsters were beginning to invest in high-end real estate, which offered an ideal vehicle to launder money from their criminal enterprises. “During the ’80s and ’90s, we in the U.S. government repeatedly saw a pattern by which criminals would use condos and high-rises to launder money,” says Jonathan Winer, a deputy assistant secretary of state for international law enforcement in the Clinton administration. “It didn’t matter that you paid too much, because the real estate values would rise, and it was a way of turning dirty money into clean money. It was done very systematically, and it explained why there are so many high-rises where the units were sold but no one is living in them.” When Trump Tower was built, as David Cay Johnston reports in The Making of Donald Trump, it was only the second high-rise in New York that accepted anonymous buyers.

In 1987, just three years after he attended the closing with Trump, Bogatin pleaded guilty to taking part in a massive gasoline-bootlegging scheme with Russian mobsters. After he fled the country, the government seized his five condos at Trump Tower, saying that he had purchased them to “launder money, to shelter and hide assets.” A Senate investigation into organized crime later revealed that Bogatin was a leading figure in the Russian mob in New York. His family ties, in fact, led straight to the top: His brother ran a $150 million stock scam with none other than Semion Mogilevich, whom the FBI considers the “boss of bosses” of the Russian mafia. At the time, Mogilevich—feared even by his fellow gangsters as “the most powerful mobster in the world”—was expanding his multibillion-dollar international criminal syndicate into America.

In 1987, on his first trip to Russia, Trump visited the Winter Palace with Ivana. The Soviets flew him to Moscow—all expenses paid—to discuss building a luxury hotel across from the Kremlin.

Since Trump’s election as president, his ties to Russia have become the focus of intense scrutiny, most of which has centered on whether his inner circle colluded with Russia to subvert the U.S. election. A growing chorus in Congress is also asking pointed questions about how the president built his business empire. Rep. Adam Schiff, the ranking Democrat on the House Intelligence Committee, has called for a deeper inquiry into “Russian investment in Trump’s businesses and properties.”

The very nature of Trump’s businesses—all of which are privately held, with few reporting requirements—makes it difficult to root out the truth about his financial deals. And the world of Russian oligarchs and organized crime, by design, is shadowy and labyrinthine. For the past three decades, state and federal investigators, as well as some of America’s best investigative journalists, have sifted through mountains of real estate records, tax filings, civil lawsuits, criminal cases, and FBI and Interpol reports, unearthing ties between Trump and Russian mobsters like Mogilevich. To date, no one has documented that Trump was even aware of any suspicious entanglements in his far-flung businesses, let alone that he was directly compromised by the Russian mafia or the corrupt oligarchs who are closely allied with the Kremlin. So far, when it comes to Trump’s ties to Russia, there is no smoking gun.

But even without an investigation by Congress or a special prosecutor, there is much we already know about the president’s debt to Russia. A review of the public record reveals a clear and disturbing pattern: Trump owes much of his business success, and by extension his presidency, to a flow of highly suspicious money from Russia. Over the past three decades, at least 13 people with known or alleged links to Russian mobsters or oligarchs have owned, lived in, and even run criminal activities out of Trump Tower and other Trump properties. Many used his apartments and casinos to launder untold millions in dirty money. Some ran a worldwide high-stakes gambling ring out of Trump Tower—in a unit directly below one owned by Trump. Others provided Trump with lucrative branding deals that required no investment on his part. Taken together, the flow of money from Russia provided Trump with a crucial infusion of financing that helped rescue his empire from ruin, burnish his image, and launch his career in television and politics. “They saved his bacon,” says Kenneth McCallion, a former assistant U.S. attorney in the Reagan administration who investigated ties between organized crime and Trump’s developments in the 1980s.

It’s entirely possible that Trump was never more than a convenient patsy for Russian oligarchs and mobsters, with his casinos and condos providing easy pass-throughs for their illicit riches. At the very least, with his constant need for new infusions of cash and his well-documented troubles with creditors, Trump made an easy “mark” for anyone looking to launder money. But whatever his knowledge about the source of his wealth, the public record makes clear that Trump built his business empire in no small part with a lot of dirty money from a lot of dirty Russians—including the dirtiest and most feared of them all.

by Craig Unger, New Republic |  Read more:
Image: Alex Nabaum

Thursday, July 13, 2017


Kurt Jackson
via:

Preston Buchtel, Cave Drawings I (or a Brief History of Everything), 2015.
via:

Josh Olins
via: