Friday, July 21, 2017

Why the Web Needs 3Quarks Daily

[ed. And Duck Soup.]

The need for filters, aggregators and curators to navigate the web isn’t new. Arts and Letters Daily, the inspiration for 3QD, was founded by the late Denis Dutton way back in 1998. It in turn was inspired by the news aggregator, Drudge Report, which started in 1995. But each of these had its own niche (literary humanities and conservative politics respectively) while Raza envisioned something more all-embracing – which ironically turned out to be a niche of its own. His plan was to “collect only serious articles of intellectual interest from all over the web but never include merely amusing pieces, clickbait, or even the news of the day… to find and post deeper analysis… and explore the world of ideas… [to] cover all intellectual fields that might be of interest to a well-educated academic all-rounder without being afraid of difficult material… [and to] have an inclusive attitude about what is interesting and important and an international outlook, avoiding America-centrism in particular.” (...)

The editors gradually withdrew more and more from commenting on what they were sharing. “We moved”, says Varghese, “in a direction opposite to blogging, meaning we moved towards some virtual self-abnegation in favour of just letting the piece speak for itself. So much of blogging was and is about the blogger’s take on things.” Today, a post contains nothing more than a headline, a block quote from the article, an image and a link to the original publication.

Yet this seemingly spartan layout belies the fact that curation is authorship, just not of the texts themselves. Maria Popova, whose blog ‘Brain Pickings’ was made a part of the Library of Congress permanent archive as a resource for researchers of the future, has become an unofficial spokesperson for the community on this issue. “If information discovery plays such a central role in how we make sense of the world in this new media landscape,” she wrote in an essay for Nieman Lab, “then it is a form of creative labour in and of itself. And yet our current normative models for crediting this kind of labour are completely inadequate, if they exist at all.”

Today, information discovery comes in all shapes and sizes – from the New Yorker Minute that does a number on the New Yorker, to Amazon’s book recommendation behemoth. There isn’t a doubt that the latter is a remarkable feat of software engineering, as are the algorithms employed by Netflix, Spotify, Facebook and Google. Netizens depend on these wonders – relying on them to suck in chaos and spit out order.

Yet these same sites are also examples of total moral capitulation. Underlying the logic of many algorithms is the idea that to find what people want, we need only look for what similar people have wanted. Apart from engendering near total surveillance, a mechanism built around the urgency of giving people what they want ignores the importance (or even the existence) of a responsibility to give people what they might need. This isn’t a surprising stance for profit-driven corporations to take. However, as citizens who value democratic access to resources and knowledge, it’s dangerous to allow ourselves to become complacent with gatekeepers who don’t acknowledge their own roles as stewards or see their power as weighted by responsibility to the community. It’s the logic of giving people what they want that’s made virality the metric for deciding what makes the news and triggered the current race to the bottom that has marked the new culture wars.

by Thomas Manuel, The Wire |  Read more:
Image: 3 Quarks Daily
[ed. Sounds like Duck Soup, but without the irreverence and mysterious disappearances that readers of this blog enjoy. I wasn't aware of 3QD for a while but it's amazing how similar the templates are, and how few sites like these exist.]

Thursday, July 20, 2017

Politics 101

The Supreme Court and the Law of Motion

Back in the late ’90s, a Supreme Court case on the validity of a police search produced five separate opinions and left considerable doubt about who would ultimately benefit from the decision, prosecutors or criminal defendants. In our stories the next day, my opposite number at The Washington Post and I described the decision, Minnesota v. Carter, quite differently. At a reception a few days later, we encountered Justice Stephen G. Breyer, who had written one of the separate opinions. We told him about our confusion and differing interpretations.

Which of us was right, we asked. Who actually won the case?

“I don’t know,” Justice Breyer replied.

“How can you not know?” I pushed back.

The justice replied: “It depends on what the lower courts make of it.”

At first, that answer struck me as coy, even evasive, but the more I thought about it — and I’ve actually thought about it over the years — the more accurate, even profound, it has come to appear. Ours is a common-law system, in which one case follows another and legal doctrine emerges from the crucible of decided cases. Accepting only about 65 cases a year, the Supreme Court sits atop a very big pyramid: thousands of cases pass through the system every year, and judges are tasked with finding and applying relevant Supreme Court precedents to new cases with facts that differ, slightly or quite a lot, from those in the original case.

I thought about the conversation with Justice Breyer as I read a decision the Supreme Court issued on the final day of its term. In Trinity Lutheran Church v. Comer, the court ruled that a church had a constitutional right to be considered on the same basis as secular institutions for a state grant to improve the safety of its preschool playground. The decision, with a majority opinion by Chief Justice John G. Roberts Jr., was a sharp departure from the line the Supreme Court has maintained against the direct funding of churches (the church had described the preschool as part of its religious mission, so there was no dispute in the case that the school fully shared the church’s identity).

Like most other states, Missouri, where the case arose, has a constitutional prohibition against public money going to churches. Invoking that provision, Missouri had deemed Trinity Lutheran categorically ineligible for the playground grant, although its preschool otherwise met the specified criteria. That exclusion put the church “to the choice between being a church and receiving a government benefit,” Chief Justice Roberts wrote. The court held that the exclusion amounted to discrimination against religion that was “odious to our Constitution,” a violation of the church’s First Amendment right to the free exercise of religion.

Strong words and very broad implications, but the court stopped short of taking the obvious next step of declaring unconstitutional not only Missouri’s handling of a specific grant program, but also the state constitutional provision on which the state’s action was based. Instead of playing out the implications of its decision, the majority opinion contained a footnote: “This case involves express discrimination based on religious identity with respect to playground resurfacing. We do not address religious uses of funding or other forms of discrimination.”

To call this footnote odd is an understatement. The Supreme Court doesn’t usually summon the majestic, if opaque, phrases of the Constitution to swat a mouse (in this case, a program to use recycled tires as a playground surface). Of the seven justices in the majority (only Justices Sonia Sotomayor and Ruth Bader Ginsburg dissented), only four subscribed to the footnote: Justices Anthony M. Kennedy, Samuel A. Alito Jr., and Elena Kagan, in addition to the chief justice. Justice Breyer concurred only in the judgment, so the footnote was not part of an opinion that he signed. And Justices Clarence Thomas and Neil M. Gorsuch objected that the footnote narrowed the holding of the case unrealistically.

So the footnote doesn’t even speak for a majority of the nine-member court. Chief Justice Roberts probably added it to satisfy the demand of Justice Kagan or Justice Kennedy, or both, for some limitation on the decision; it was crucial to hold them, even at the price of alienating Justices Gorsuch and Thomas, who were going to stay with the majority in all other respects. No matter what the footnote meant when the decision was issued on June 26, the question now is: What will it mean in the future? And that brings me back to Justice Breyer’s long-ago answer: It depends on what the lower courts make of it.

History offers a lesson: There is a momentum to Supreme Court decisions, and efforts to cabin the logical progression of legal doctrine will fail if the political and cultural forces that led to the doctrine in the first place remain in play. It’s Newton’s law of motion in the legal context: A doctrine in motion will stay in motion unless met by an outside force — a backlash or a change of cast. The steps can be small, but they can add up to giant steps.

by Linda Greenhouse, NY Times |  Read more:
Image: Stephen Crowley

Dave McKean, Cover art for issue #11 of The Sandman
via:

Wednesday, July 19, 2017

Kurt Vonnegut Walks Into a Bar

I was on the corner of Third Avenue and Forty-eighth Street, and Kurt Vonnegut was coming toward me, walking his big, loose-boned walk. It was fall and turning cold and he looked a little unbalanced in his overcoat, handsome but tousled, with long curly hair and a heavy mustache that sometimes hid his grin. I could tell he saw me by his shrug, which he sometimes used as a greeting.

I was on my way to buy dinner for some Newsweek writers who were suspicious of me as their new assistant managing editor. I had been brought in from Rolling Stone, and no one at Newsweek had heard of me. I didn’t know them either, but I knew Kurt, who was one of the first people I met when I moved to New York. We were neighbors on Forty-eighth Street, where he lived in a big townhouse in the middle of the block, and he’d invite me over for drinks. I had gotten him to contribute to Rolling Stone by keeping an eye out for his speeches and radio appearances and then suggesting ways they could be retooled as essays.

“Come have dinner,” I said. “I’ve got some Newsweek writers who would love to meet you.”

“Not in the mood,” Kurt said.

“They’re fans,” I said. “It’s part of your job.”

Kurt lit a Pall Mall and gave me a look, one of his favorites, amused but somehow saddened by the situation. He could act, Kurt.

“Think of it as a favor to me,” I said. “They’re not sure about me, and I’ve edited you.”

“Sort of,” he said, and I knew he had already had a couple drinks.

He never got mean, but he got honest.

“What else are you doing for dinner?” I said, knowing he seldom made plans.

“The last thing I need is ass kissing,” Kurt said.

“That’s what I’m doing right now.”

“They’ll want to know which novel I like best.”

“Cat’s Cradle,” I said.

“Wrong.” He flipped the Pall Mall into Forty-eighth Street, and we started walking together toward the restaurant.

The writers were already at the table, drinks in front of them. They looked up when we came in, surprised to see Kurt with me. There were six or eight of them, including the columnist Pete Axthelm, who was my only ally going into Newsweek because I knew him from Runyon’s, a bar in our neighborhood where everyone called him Ax.

I introduced Kurt around.

“Honored,” Ax said, or something like that, and the ass kissing began.

by Terry McDonell, Electric Lit | Read more:
Image: uncredited

The Limitations of Deep Learning

Deep learning: the geometric view

The most surprising thing about deep learning is how simple it is. Ten years ago, no one expected that we would achieve such amazing results on machine perception problems by using simple parametric models trained with gradient descent. Now, it turns out that all you need is sufficiently large parametric models trained with gradient descent on sufficiently many examples. As Feynman once said about the universe, "It's not complicated, it's just a lot of it".
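To make the "simple parametric models trained with gradient descent" claim concrete, here is a minimal, hedged sketch (not from the article; all names are illustrative): fitting the line y = 2x + 1 with a two-parameter model and nothing but plain gradient descent on mean squared error.

```python
# Toy data sampled from the target function y = 2x + 1
data = [(i / 10.0, 2 * (i / 10.0) + 1) for i in range(-50, 50)]

w, b = 0.0, 0.0  # the model's two parameters, initialized at zero
lr = 0.05        # learning rate

for epoch in range(500):
    # Gradients of mean squared error with respect to w and b
    grad_w = grad_b = 0.0
    for x, y in data:
        err = (w * x + b) - y
        grad_w += 2 * err * x / len(data)
        grad_b += 2 * err / len(data)
    # Take one gradient-descent step downhill
    w -= lr * grad_w
    b -= lr * grad_b

print(round(w, 2), round(b, 2))  # converges toward 2.0 and 1.0
```

Deep learning is "just a lot of" this: the same update loop, with millions of parameters instead of two.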

In deep learning, everything is a vector, i.e. everything is a point in a geometric space. Model inputs (whether text, images, etc.) and targets are first "vectorized", i.e. turned into some initial input vector space and target vector space. Each layer in a deep learning model operates one simple geometric transformation on the data that goes through it. Together, the chain of layers of the model forms one very complex geometric transformation, broken down into a series of simple ones. This complex transformation attempts to map the input space to the target space, one point at a time. This transformation is parametrized by the weights of the layers, which are iteratively updated based on how well the model is currently performing. A key characteristic of this geometric transformation is that it must be differentiable, which is required in order for us to be able to learn its parameters via gradient descent. Intuitively, this means that the geometric morphing from inputs to outputs must be smooth and continuous—a significant constraint.
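As a rough illustration of the chain-of-simple-transforms idea (a sketch assuming NumPy is available; none of these names come from the article), each layer below is an affine map followed by a smooth nonlinearity, and composing three of them yields one more complex, still differentiable transformation:

```python
import numpy as np

rng = np.random.default_rng(seed=0)

def layer(x, w, b):
    # One simple, differentiable geometric transformation:
    # an affine map followed by a smooth (tanh) nonlinearity.
    return np.tanh(x @ w + b)

# A "vectorized" input: one point in a 4-dimensional input space
x = rng.normal(size=(1, 4))

# Randomly initialized weights for three chained layers
w1, b1 = rng.normal(size=(4, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 8)), np.zeros(8)
w3, b3 = rng.normal(size=(8, 2)), np.zeros(2)

# The composition of simple transforms is the "model":
# it maps the input point into a 2-dimensional target space.
y = layer(layer(layer(x, w1, b1), w2, b2), w3, b3)
print(y.shape)
```

Training would then consist of nudging w1…b3 via gradient descent until this composed map sends each input point to its target.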

The whole process of applying this complex geometric transformation to the input data can be visualized in 3D by imagining a person trying to uncrumple a paper ball: the crumpled paper ball is the manifold of the input data that the model starts with. Each movement operated by the person on the paper ball is similar to a simple geometric transformation operated by one layer. The full uncrumpling gesture sequence is the complex transformation of the entire model. Deep learning models are mathematical machines for uncrumpling complicated manifolds of high-dimensional data.

That's the magic of deep learning: turning meaning into vectors, into geometric spaces, then incrementally learning complex geometric transformations that map one space to another. All you need are spaces of sufficiently high dimensionality in order to capture the full scope of the relationships found in the original data.

The limitations of deep learning

The space of applications that can be implemented with this simple strategy is nearly infinite. And yet, many more applications are completely out of reach for current deep learning techniques—even given vast amounts of human-annotated data. Say, for instance, that you could assemble a dataset of hundreds of thousands—even millions—of English language descriptions of the features of a software product, as written by a product manager, as well as the corresponding source code developed by a team of engineers to meet these requirements. Even with this data, you could not train a deep learning model to simply read a product description and generate the appropriate codebase. That's just one example among many. In general, anything that requires reasoning—like programming, or applying the scientific method—long-term planning, and algorithmic-like data manipulation, is out of reach for deep learning models, no matter how much data you throw at them. Even learning a sorting algorithm with a deep neural network is tremendously difficult.

This is because a deep learning model is "just" a chain of simple, continuous geometric transformations mapping one vector space into another. All it can do is map one data manifold X into another manifold Y, assuming the existence of a learnable continuous transform from X to Y, and the availability of a dense sampling of X:Y to use as training data. So even though a deep learning model can be interpreted as a kind of program, inversely most programs cannot be expressed as deep learning models—for most tasks, either there exists no corresponding practically-sized deep neural network that solves the task, or even if there exists one, it may not be learnable, i.e. the corresponding geometric transform may be far too complex, or there may not be appropriate data available to learn it.

Scaling up current deep learning techniques by stacking more layers and using more training data can only superficially palliate some of these issues. It will not solve the more fundamental problem that deep learning models are very limited in what they can represent, and that most of the programs that one may wish to learn cannot be expressed as a continuous geometric morphing of a data manifold.

The risk of anthropomorphizing machine learning models

One very real risk with contemporary AI is that of misinterpreting what deep learning models do, and overestimating their abilities. A fundamental feature of the human mind is our "theory of mind", our tendency to project intentions, beliefs and knowledge on the things around us. Drawing a smiley face on a rock suddenly makes it "happy"—in our minds. Applied to deep learning, this means that when we are able to somewhat successfully train a model to generate captions to describe pictures, for instance, we are led to believe that the model "understands" the contents of the pictures, as well as the captions it generates. We then proceed to be very surprised when any slight departure from the sort of images present in the training data causes the model to start generating completely absurd captions. (...)

Humans are capable of far more than mapping immediate stimuli to immediate responses, like a deep net, or maybe an insect, would do. They maintain complex, abstract models of their current situation, of themselves, of other people, and can use these models to anticipate different possible futures and perform long-term planning. They are capable of merging together known concepts to represent something they have never experienced before—like picturing a horse wearing jeans, for instance, or imagining what they would do if they won the lottery. This ability to handle hypotheticals, to expand our mental model space far beyond what we can experience directly, in a word, to perform abstraction and reasoning, is arguably the defining characteristic of human cognition. I call it "extreme generalization": an ability to adapt to novel, never experienced before situations, using very little data or even no new data at all.

This stands in sharp contrast with what deep nets do, which I would call "local generalization": the mapping from inputs to outputs performed by deep nets quickly stops making sense if new inputs differ even slightly from what they saw at training time. Consider, for instance, the problem of learning the appropriate launch parameters to get a rocket to land on the moon. If you were to use a deep net for this task, whether training using supervised learning or reinforcement learning, you would need to feed it with thousands or even millions of launch trials, i.e. you would need to expose it to a dense sampling of the input space, in order to learn a reliable mapping from input space to output space. By contrast, humans can use their power of abstraction to come up with physical models—rocket science—and derive an exact solution that will get the rocket on the moon in just one or a few trials. Similarly, if you developed a deep net controlling a human body, and wanted it to learn to safely navigate a city without getting hit by cars, the net would have to die many thousands of times in various situations until it could infer that cars are dangerous, and develop appropriate avoidance behaviors. Dropped into a new city, the net would have to relearn most of what it knows. On the other hand, humans are able to learn safe behaviors without having to die even once—again, thanks to their power of abstract modeling of hypothetical situations. (...)

Take-aways

Here's what you should remember: the only real success of deep learning so far has been the ability to map space X to space Y using a continuous geometric transform, given large amounts of human-annotated data. Doing this well is a game-changer for essentially every industry, but it is still a very long way from human-level AI.

To lift some of these limitations and start competing with human brains, we need to move away from straightforward input-to-output mappings, and on to reasoning and abstraction. A likely appropriate substrate for abstract modeling of various situations and concepts is that of computer programs. We have said before (Note: in Deep Learning with Python) that machine learning models could be defined as "learnable programs"; currently we can only learn programs that belong to a very narrow and specific subset of all possible programs. But what if we could learn any program, in a modular and reusable way? Let's see in the next post what the road ahead may look like.

You can read the second part here: The future of deep learning.

by Francois Chollet, Keras Blog | Read more:
Image: Getty

Long Strange Trip

Amir Bar-Lev’s rockumentary, Long Strange Trip, about the Grateful Dead, is aptly named for what is arguably the band’s most famous lyric: What a long, strange trip it’s been. The film takes you on a four-hour ride (much like the band's live shows) but this is not just another indulgent music doc.

Executive-produced by Martin Scorsese, the film digs deeply into the bizarre phenomenon that surrounded “The Dead” for decades—obsessive fans, called Deadheads, became a cult-like following that elevated the band’s ringmaster, Jerry Garcia (Aug. 1, 1942–Aug. 9, 1995), to a status he never wanted.

The must-see film includes 17 interviews, 1,100 rare photos and loads of footage you’ve never seen. Deadheads will be ecstatic. Bar-Lev doesn’t tell you what to think. Instead he offers many points of view. One theory is that the die-hard Deadheads were the major cause of Garcia’s descent into heroin. I didn’t buy that so I reached out to Grateful Dead insider Dennis McNally, whose book, A Long Strange Trip: The Inside History of the Grateful Dead, provided Bar-Lev with much of the band’s story. McNally spent 30 years with the band beginning when Garcia invited him to become their biographer in 1981.

When I asked McNally if he thought it was the Deadheads that drove Garcia to abuse heroin, or if he felt, as I do, that it was a progression from one addiction to another. McNally answered:

“I don’t think there’s an inherent progression [of addiction], I mean everybody starts with milk, too, you know? He turned to self-medication for any number of reasons…. His father died when he was four, he didn’t get the attention from his mother that he felt he deserved. Eventually, yes, but not specifically the fame. It was the responsibility. Jerry wanted to be Huckleberry Finn, well, if Huckleberry Finn was allowed to smoke joints and play guitar and cruise down the river on a raft.”

McNally pointed out that Garcia “employed 50 people, me among them. We expected paychecks every couple of weeks. There was a weight of responsibility on him for employees, their families, but also the million Deadheads who were addicted to the music and the shows. They expected him to come out and play 80 shows a year. That wore on him. He was 53 years old and a walking candidate for a heart attack. Still smoked cigarettes, had a terrible diet. He was a diabetic who did not take it seriously.” (...)

The movie mimicked Garcia’s life. It began as a celebration but ended with a trip down the dark cellar of no return. Garcia probably didn’t set out to become a heroin addict. Maybe he just thought he could handle it. Or maybe, like my own drug use, he just got to a point where he wanted out. Many people didn’t know the flip side of his jolly exterior was a dark depression.

The Dead’s casualties also included Ron “Pigpen” McKernan who drank himself to death in 1973 at age 27. Keith Godchaux died at age 32 in 1980 from a car crash after he and friend Courtenay Pollock had partied for hours to celebrate Godchaux’s birthday. The intoxicated driver—Pollock—survived the accident. Brent Mydland, mostly known as a drinker, died from a “speedball” (morphine and cocaine) in 1990. After Mydland’s death, keyboardist Vince Welnick joined the band but died in 2006 when he committed suicide. Phil Lesh’s drug use led to contracting hepatitis C. In the fall of 1998, his life was saved by a liver transplant.

Next I tracked down former president of Warner Bros. Records, Joe Smith, the executive who first signed the Dead. His presence brought a lot of fun into the film during the celebratory first half. “They were totally insane at times,” said Smith. “Trying to corral them was a very difficult thing, but we developed a friendship and Jerry Garcia was very bright. They were all bright. I established relationships with all of them, but not without difficulty because they didn’t want relationships. They were stoned most of the time. Phil Lesh was a particularly difficult guy.”

“How so?” I asked.

“He disputed and contested everything. One time, when I was trying to promote them, he said, ‘No. Let’s go out and record 30 minutes of heavy air on a smoggy day in L.A. Then we’re gonna record 30 minutes of clear air on a beautiful day, and we’ll mix it and that’ll be a rhythm soundtrack.’”

by Dorri Olds, The Fix |  Read more:
Image: Michael Conway

Tuesday, July 18, 2017

Letting Robots Teach Schoolkids

For all the talk about whether robots will take our jobs, a new worry is emerging, namely whether we should let robots teach our kids. As the capabilities of smart software and artificial intelligence advance, parents, teachers, teachers’ unions and the children themselves will all have stakes in the outcome.

I, for one, say bring on the robots, or at least let us proceed with the experiments. You can imagine robots in schools serving as pets, peers, teachers, tutors, monitors and therapists, among other functions. They can store and communicate vast troves of knowledge, or provide a virtually inexhaustible source of interactive exchange on any topic that can be programmed into software.

But perhaps more important in the longer run, robots also bring many introverted or disabled or non-conforming children into greater classroom participation. They are less threatening, always available, and they never tire or lose patience.

Human teachers sometimes feel the need to bully or put down their students. That’s a way of maintaining classroom control, but it also harms children and discourages learning. A robot in contrast need not resort to tactics of psychological intimidation.

The pioneer in robot education so far is, not surprisingly, Singapore. The city-state has begun experiments with robots at the kindergarten level, mostly as instructor aides, reading stories and teaching social interactions. In the U.K., researchers have developed a robot to help autistic children better learn how to interact with their peers.

I can imagine robots helping non-English-speaking children make the transition to bilingualism. Or how about using robots in Asian classrooms where the teachers themselves do not know enough English to teach the language effectively?

A big debate today is how we can teach ourselves to work with artificial intelligence, so as to prevent eventual widespread technological unemployment. Exposing children to robots early, and having them grow accustomed to human-machine interaction, is one path toward this important goal.

In a recent Financial Times interview, Sherry Turkle, a professor of social psychology at MIT, and a leading expert on cyber interactions, criticized robot education. “The robot can never be in an authentic relationship,” she said. “Why should we normalize what is false and in the realm of [a] pretend relationship from the start?” She’s opposed to robot companions more generally, again for their artificiality.

Yet K-12 education itself is a highly artificial creation, from the chalk to the schoolhouses to the standardized achievement tests, not to mention the internet learning and the classroom TV. Thinking back on my own experience, I didn’t especially care if my teachers were “authentic” (in fact, I suspected quite a few were running a kind of personality con), provided they communicated their knowledge and radiated some charisma.  (...)

Keep in mind that robot instructors are going to come through toys and the commercial market in any case, whether schools approve or not. Is it so terrible an idea for some of those innovations to be supervised by, and combined with, the efforts of teachers and the educational establishment?

by Tyler Cowen, Bloomberg |  Read more:
Image: Nigel Treblin/Getty Images
[ed. See also: Give robots an 'ethical black box' to track and explain decisions]

Friday, July 14, 2017

Trump's Russian Laundromat

In 1984, a Russian émigré named David Bogatin went shopping for apartments in New York City. The 38-year-old had arrived in America seven years before, with just $3 in his pocket. But for a former pilot in the Soviet Army—his specialty had been shooting down Americans over North Vietnam—he had clearly done quite well for himself. Bogatin wasn’t hunting for a place in Brighton Beach, the Brooklyn enclave known as “Little Odessa” for its large population of immigrants from the Soviet Union. Instead, he was fixated on the glitziest apartment building on Fifth Avenue, a gaudy, 58-story edifice with gold-plated fixtures and a pink-marble atrium: Trump Tower.

A monument to celebrity and conspicuous consumption, the tower was home to the likes of Johnny Carson, Steven Spielberg, and Sophia Loren. Its brash, 38-year-old developer was something of a tabloid celebrity himself. Donald Trump was just coming into his own as a serious player in Manhattan real estate, and Trump Tower was the crown jewel of his growing empire. From the day it opened, the building was a hit—all but a few dozen of its 263 units had sold in the first few months. But Bogatin wasn’t deterred by the limited availability or the sky-high prices. The Russian plunked down $6 million to buy not one or two, but five luxury condos. The big check apparently caught the attention of the owner. According to Wayne Barrett, who investigated the deal for the Village Voice, Trump personally attended the closing, along with Bogatin.

If the transaction seemed suspicious—multiple apartments for a single buyer who appeared to have no legitimate way to put his hands on that much money—there may have been a reason. At the time, Russian mobsters were beginning to invest in high-end real estate, which offered an ideal vehicle to launder money from their criminal enterprises. “During the ’80s and ’90s, we in the U.S. government repeatedly saw a pattern by which criminals would use condos and high-rises to launder money,” says Jonathan Winer, a deputy assistant secretary of state for international law enforcement in the Clinton administration. “It didn’t matter that you paid too much, because the real estate values would rise, and it was a way of turning dirty money into clean money. It was done very systematically, and it explained why there are so many high-rises where the units were sold but no one is living in them.” When Trump Tower was built, as David Cay Johnston reports in The Making of Donald Trump, it was only the second high-rise in New York that accepted anonymous buyers.

In 1987, just three years after he attended the closing with Trump, Bogatin pleaded guilty to taking part in a massive gasoline-bootlegging scheme with Russian mobsters. After he fled the country, the government seized his five condos at Trump Tower, saying that he had purchased them to “launder money, to shelter and hide assets.” A Senate investigation into organized crime later revealed that Bogatin was a leading figure in the Russian mob in New York. His family ties, in fact, led straight to the top: His brother ran a $150 million stock scam with none other than Semion Mogilevich, whom the FBI considers the “boss of bosses” of the Russian mafia. At the time, Mogilevich—feared even by his fellow gangsters as “the most powerful mobster in the world”—was expanding his multibillion-dollar international criminal syndicate into America.

In 1987, on his first trip to Russia, Trump visited the Winter Palace with Ivana. The Soviets flew him to Moscow—all expenses paid—to discuss building a luxury hotel across from the Kremlin.

Since Trump’s election as president, his ties to Russia have become the focus of intense scrutiny, most of which has centered on whether his inner circle colluded with Russia to subvert the U.S. election. A growing chorus in Congress is also asking pointed questions about how the president built his business empire. Rep. Adam Schiff, the ranking Democrat on the House Intelligence Committee, has called for a deeper inquiry into “Russian investment in Trump’s businesses and properties.”

The very nature of Trump’s businesses—all of which are privately held, with few reporting requirements—makes it difficult to root out the truth about his financial deals. And the world of Russian oligarchs and organized crime, by design, is shadowy and labyrinthine. For the past three decades, state and federal investigators, as well as some of America’s best investigative journalists, have sifted through mountains of real estate records, tax filings, civil lawsuits, criminal cases, and FBI and Interpol reports, unearthing ties between Trump and Russian mobsters like Mogilevich. To date, no one has documented that Trump was even aware of any suspicious entanglements in his far-flung businesses, let alone that he was directly compromised by the Russian mafia or the corrupt oligarchs who are closely allied with the Kremlin. So far, when it comes to Trump’s ties to Russia, there is no smoking gun.

But even without an investigation by Congress or a special prosecutor, there is much we already know about the president’s debt to Russia. A review of the public record reveals a clear and disturbing pattern: Trump owes much of his business success, and by extension his presidency, to a flow of highly suspicious money from Russia. Over the past three decades, at least 13 people with known or alleged links to Russian mobsters or oligarchs have owned, lived in, and even run criminal activities out of Trump Tower and other Trump properties. Many used his apartments and casinos to launder untold millions in dirty money. Some ran a worldwide high-stakes gambling ring out of Trump Tower—in a unit directly below one owned by Trump. Others provided Trump with lucrative branding deals that required no investment on his part. Taken together, the flow of money from Russia provided Trump with a crucial infusion of financing that helped rescue his empire from ruin, burnish his image, and launch his career in television and politics. “They saved his bacon,” says Kenneth McCallion, a former assistant U.S. attorney in the Reagan administration who investigated ties between organized crime and Trump’s developments in the 1980s.

It’s entirely possible that Trump was never more than a convenient patsy for Russian oligarchs and mobsters, with his casinos and condos providing easy pass-throughs for their illicit riches. At the very least, with his constant need for new infusions of cash and his well-documented troubles with creditors, Trump made an easy “mark” for anyone looking to launder money. But whatever his knowledge about the source of his wealth, the public record makes clear that Trump built his business empire in no small part with a lot of dirty money from a lot of dirty Russians—including the dirtiest and most feared of them all.

by Craig Unger, New Republic |  Read more:
Image: Alex Nabaum

Thursday, July 13, 2017


Kurt Jackson
via:

Preston Buchtel, Cave Drawings I (or a Brief History of Everything), 2015.
via:

Josh Olins
via:

The Brave New World of Gene Editing

In the last few years, genetic testing has entered the commercial mainstream. Direct-to-consumer testing is now commonplace, performed by companies such as 23andMe (humans have twenty-three pairs of chromosomes). Much of the interest in such tests is based not only on the claim that they enable us to trace our ancestry, but also on the insight into our future health that they purport to provide. At the beginning of April, 23andMe received FDA approval to sell a do-it-yourself genetic test for ten diseases, including Parkinson’s and late-onset Alzheimer’s. You spit in a tube, send it off to the company, and after a few days you get your results. But as Steven Heine, a Canadian professor of social and cultural psychology who undertook several such tests on himself, explains in DNA Is Not Destiny, that is where the problems begin.

Some diseases are indeed entirely genetically determined—Huntington’s disease, Duchenne muscular dystrophy, and so on. If you have the faulty gene, you will eventually have the disease. Whether you want to be told by e-mail that you will develop a life-threatening disease is something you need to think hard about before doing the test. But for the vast majority of diseases, our future is not written in our genes, and the results of genetic tests can be misleading. (...)

More troublingly still, however imperfect its predictive value, the tsunami of human genetic information now pouring from DNA sequencers all over the planet raises the possibility that our DNA could be used against us. The Genetic Information Nondiscrimination Act of 2008 made it illegal for US medical insurance companies to discriminate on the basis of genetic information (although strikingly not for life insurance or long-term care). However, the health care reform legislation recently passed by the House (the American Health Care Act, known as Trumpcare) allows insurers to charge higher premiums for people with a preexisting condition. It is hard to imagine anything more preexisting than a gene that could or, even worse, will lead to your getting a particular disease; and under such a health system, insurance companies would have every incentive to find out the risks present in your DNA. If this component of the Republican health care reform becomes law, the courts may conclude that a genetic test qualifies as proof of a preexisting condition. If genes end up affecting health insurance payments, some people might choose not to take these tests.

But of even greater practical and moral significance is the second part of the revolution in genetics: our ability to modify or “edit” the DNA sequences of humans and other creatures. This technique, known as CRISPR (pronounced “crisper”), was first applied to human cells in 2013, and has already radically changed research in the life sciences. It works in pretty much every species in which it has been tried and is currently undergoing its first clinical trials. HIV, leukemia, and sickle-cell anemia will probably soon be treated using CRISPR.

In A Crack in Creation, one of the pioneers of this technique, the biochemist Jennifer Doudna of the University of California at Berkeley, together with her onetime student Samuel Sternberg, describes the science behind CRISPR and the history of its discovery. This guidebook to the CRISPR revolution gives equal weight to the science of CRISPR and the profound ethical questions it raises. The book is required reading for every concerned citizen—the material it covers should be discussed in schools, colleges, and universities throughout the country. Community and patient groups need to understand the implications of this technology and help decide how it should and should not be applied, while politicians must confront the dramatic challenges posed by gene editing.

The story of CRISPR is a case study in how scientific inquiry that is purely driven by curiosity can lead to major advances. Beginning in the 1980s, scientists noticed that parts of the genomes of microbes contained regular DNA sequences that were repeated and consisted of approximate palindromes. (In fact, in general only a few motifs are roughly repeated within each “palindrome.”) Eventually, these sequences were given the snappy acronym CRISPR—clustered regularly interspaced short palindromic repeats. A hint about their function emerged when it became clear that the bits of DNA found in the spaces between the repeats—called spacer DNA—were not some random bacterial junk, but instead had come from viruses and had been integrated into the microbe’s genome.
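As a side note on what “palindrome” means here: in DNA, a palindromic sequence is one that reads the same as its own reverse complement (A pairing with T, G with C), not one that reads the same backwards letter-for-letter. A minimal sketch, using the well-known EcoRI recognition site GAATTC as an example:

```python
# A DNA "palindrome" equals its own reverse complement (A<->T, G<->C).
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(seq: str) -> str:
    """Return the reverse complement of a DNA sequence."""
    return "".join(COMPLEMENT[base] for base in reversed(seq))

def is_palindromic(seq: str) -> bool:
    """True if the sequence reads the same as its reverse complement."""
    return seq == reverse_complement(seq)

print(is_palindromic("GAATTC"))   # True  (reverse complement is also GAATTC)
print(is_palindromic("GATTACA"))  # False
```

The repeats in microbial CRISPR arrays are only approximate palindromes, as the article notes, so a strict equality check like this is an idealization.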

These bits of DNA turned out to be very important in the life of the microbe. In 2002, scientists discovered that the CRISPR sequences activate a series of proteins—known as CRISPR-associated (or Cas) proteins—that can unravel and attack DNA. Then in 2007, it was shown that the CRISPR sequence and one particular protein (often referred to as CRISPR-Cas9) act together as a kind of immune system for microbes: if a particular virus’s DNA is incorporated into a microbe’s CRISPR sequences, the microbe can recognize an invasion by that virus and activate Cas proteins to snip it up.

This was a pretty big deal for microbiologists, but the wider excitement stems from the realization that the CRISPR-associated proteins could be used to alter any DNA to achieve a desired sequence. At the beginning of 2013, three groups of researchers, from the University of California at Berkeley (led by Jennifer Doudna), Harvard Medical School (led by George Church), and the Broad Institute of MIT and Harvard (led by Feng Zhang), independently showed that the CRISPR technique could be used to modify human cells. Gene editing was born.

The possibilities of CRISPR are immense. If you know a DNA sequence from a given organism, you can chop it up, delete it, and change it at will, much like what a word-processing program can do with texts. You can even use CRISPR to introduce additional control elements—for example to engineer a gene so that it is activated by light stimulation. In experimental organisms this can provide an extraordinary degree of control in studies of gene function, enabling scientists to explore the consequences of gene expression at a particular moment in the organism’s life or in a particular environment.
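The word-processing analogy above can be made concrete. This is a toy illustration only — the sequences below are invented for the example, and real gene editing involves guide RNAs, cellular repair machinery, and off-target effects that a string replacement obviously cannot capture:

```python
# Toy analogy: gene editing as find-and-replace on a string of bases.
# The "genome" and target below are invented for illustration.
genome = "ATGGTGCACCTGACTCCTGAGGAG"
target = "GAGGAG"       # hypothetical sequence to be cut out
replacement = "GAAGAG"  # hypothetical edited version

edited = genome.replace(target, replacement)
print(edited)  # ATGGTGCACCTGACTCCTGAAGAG

# Deletion works the same way: replace the target with nothing.
deleted = genome.replace(target, "")
print(deleted)  # ATGGTGCACCTGACTCCT
```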

There appear to be few limits to how CRISPR might be used. One is technical: it can be difficult to deliver the specially constructed CRISPR DNA sequences to specific cells in order to change their genes. But a larger and more intractable concern is ethical: Where and when should this technology be used? In 2016, the power of gene editing and the relative ease of its application led James Clapper, President Obama’s director of national intelligence, to describe CRISPR as a weapon of mass destruction. Well-meaning biohackers are already selling kits over the Internet that enable anyone with high school biology to edit the genes of bacteria. The plotline of a techno-thriller may be writing itself in real time. (...)

The second half of A Crack in Creation deals with the profound ethical issues that are raised by gene editing. These pages are not dry or abstract—Doudna uses her own shifting positions on these questions as a way for the reader to explore different possibilities. However, she often offers no clear way forward, beyond the fairly obvious warning that we need to be careful. For example, Doudna was initially deeply opposed to any manipulation of the human genome that could be inherited by future generations—this is called germline manipulation, and is carried out on eggs or sperm, or on a single-cell embryo. (Genetic changes produced by all currently envisaged human uses of CRISPR, for example on blood cells, would not be passed to the patient’s children because these cells are not passed on.)

Although laws and guidelines differ among countries, for the moment implantation of genetically edited embryos is generally considered to be wrong, and in 2015 a nonbinding international moratorium on the manipulation of the human germline was reached at a meeting held in Washington by the National Academy of Sciences, the Institute of Medicine, the Royal Society of London, and the Chinese Academy of Sciences. Yet it seems inevitable that the world’s first CRISPR baby will be born sometime in the next decade, most likely as a result of a procedure that is intended to permanently remove genes that cause a particular disease. (...)

Like many scientists and the vast majority of the general public, Doudna remains hostile to changing the germline in an attempt to make humans smarter, more beautiful, or stronger, but she recognizes that it is extremely difficult to draw a line between remedial action and enhancement. Reassuringly, both A Crack in Creation and DNA Is Not Destiny show that these eugenic fantasies will not succeed—such characteristics are highly complex, and to the extent that they have a genetic component, it is encoded by a large number of genes, each of which has a very small effect, and which interact in unknown ways. We are not on the verge of the creation of a CRISPR master race.

Nevertheless, Doudna does accept that there is a danger that the new technology will “transcribe our societies’ financial inequality into our genetic code,” as the rich will be able to use it to enhance their offspring while the poor will not. Unfortunately, her only solution is to suggest that we should start planning for international guidelines governing germline gene editing, with researchers and lawmakers (the public are not mentioned) encouraged to find “the right balance between regulation and freedom.”

by Matthew Cobb, NY Review of Books |  Read more:
Image: Anthony A. James/UC Irvine
[ed. Ever get the feeling that there's some kind of morbid inevitability to technological progress? I'm not sure what'll kill us first: nuclear warheads, climate change, gene editing, artificial intelligence, biological weapons, or politicians.]

Understanding Poetry

Most people don’t read poetry. Press them about this and they’ll usually say something like, “I don’t ‘get’ it,” or “It’s just so pretentious,” or “The poets have degraded our society’s moral fiber, and they killed my baby.” You should never accept the first two reasons as an excuse. As Matthew Zapruder writes, there’s no reason to believe that poetry isn’t straightforward or that you can’t understand it, even if you regard yourself as a rube: “Like classical music, poetry has an unfortunate reputation for requiring special training and education to appreciate, which takes readers away from its true strangeness, and makes most of us feel as if we haven’t studied enough to read it … The art of reading poetry doesn’t begin with thinking about historical moments or great philosophies. It begins with reading the words of the poems themselves … Good poets do not deliberately complicate something just to make it harder for a reader to understand. Unfortunately, young readers, and young poets too, are taught to think that this is exactly what poets do. This has, in turn, created certain habits in the writing of contemporary poetry. Bad information about poetry in, bad poetry out, a kind of poetic obscurity feedback loop. It often takes poets a long time to unlearn this. Some never do. They continue to write in this way, deliberately obscure and esoteric, because it is a shortcut to being mysterious. The so-called effect of their poems relies on hidden meaning, keeping something away from the reader.”

by Editors, Paris Review |  Read more:
Image: Tamara Shopsin

I'm So Bored I Could Die


via: Sex and the City

Are You Ready To Consider That Capitalism Is The Real Problem?

In February, college sophomore Trevor Hill stood up during a televised town hall meeting in New York and posed a simple question to Nancy Pelosi, the leader of the Democrats in the House of Representatives. He cited a study by Harvard University showing that 51% of Americans between the ages of 18 and 29 no longer support the system of capitalism, and asked whether the Democrats could embrace this fast-changing reality and stake out a clearer contrast to right-wing economics.

Pelosi was visibly taken aback. “I thank you for your question,” she said, “but I’m sorry to say we’re capitalists, and that’s just the way it is.”

The footage went viral. It was powerful because of the clear contrast it set up. Trevor Hill is no hardened left-winger. He’s just your average millennial—bright, informed, curious about the world, and eager to imagine a better one. But Pelosi, a figurehead of establishment politics, refused to, or was just unable to, entertain his challenge to the status quo.

It’s not only young voters who feel this way. A YouGov poll in 2015 found that 64% of Britons believe that capitalism is unfair, that it makes inequality worse. Even in the U.S., it’s as high as 55%. In Germany, a solid 77% are skeptical of capitalism. Meanwhile, a full three-quarters of people in major capitalist economies believe that big businesses are basically corrupt.

Why do people feel this way? Probably not because they deny the abundant material benefits of modern life that many are able to enjoy. Or because they want to travel back in time and live in the U.S.S.R. It’s because they realize—either consciously or at some gut level—that there’s something fundamentally flawed about a system that has a prime directive to churn nature and humans into capital, and do it more and more each year, regardless of the costs to human well-being and to the environment we depend on.

Because let’s be clear: That’s what capitalism is, at its root. That is the sum total of the plan. We can see this embodied in the imperative to grow GDP, everywhere, year on year, at a compound rate, even though we know that GDP growth, on its own, does nothing to reduce poverty or to make people happier or healthier. Global GDP has grown 630% since 1980, and in that same time, by some measures, inequality, poverty, and hunger have all risen. (...)
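For a sense of what growth “at a compound rate” implies: assuming “grown 630%” means an increase of 630% (a factor of about 7.3) over the roughly 37 years from 1980 to the article’s publication, a back-of-envelope calculation gives the implied annual rate:

```python
# Back-of-envelope only: implied compound annual growth rate.
# Assumes "grown 630%" means a 630% increase, i.e. a factor of 7.3,
# and treats 1980-2017 as a 37-year span.
growth_factor = 1 + 630 / 100          # 7.3x
years = 37
annual_rate = growth_factor ** (1 / years) - 1
print(f"{annual_rate:.1%}")            # roughly 5.5% per year, compounding
```

That a seemingly modest single-digit annual rate compounds into a sevenfold expansion is exactly the dynamic the authors are pointing at.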

It all proceeds from the same deep logic. It’s the same logic that sold lives for profit in the Atlantic slave trade, it’s the logic that gives us sweatshops and oil spills, and it’s the logic that is right now pushing us headlong toward ecological collapse and climate change.

Once we realize this, we can start connecting the dots between our different struggles. There are people in the U.S. fighting against the Keystone pipeline. There are people in Britain fighting against the privatization of the National Health Service. There are people in India fighting against corporate land grabs. There are people in Brazil fighting against the destruction of the Amazon rainforest. There are people in China fighting against poverty wages. These are all noble and important movements in their own right. But by focusing on all these symptoms we risk missing the underlying cause. And the cause is capitalism. It’s time to name the thing.

What’s so exciting about our present moment is that people are starting to do exactly that. And they are hungry for something different. For some, this means socialism. That YouGov poll showed that Americans under the age of 30 tend to have a more favorable view of socialism than they do of capitalism, which is surprising given the sheer scale of the propaganda out there designed to convince people that socialism is evil. But millennials aren’t bogged down by these dusty old binaries. For them the matter is simple: They can see that capitalism isn’t working for the majority of humanity, and they’re ready to invent something better.

What might a better world look like? There are a million ideas out there. We can start by changing how we understand and measure progress. As Robert Kennedy famously said, GDP “does not allow for the health of our children, the quality of their education, or the joy of their play . . . it measures everything, in short, except that which makes life worthwhile.”

We can change that. People want health care and education to be social goods, not market commodities, so we can choose to put public goods back in public hands. People want the fruits of production and the yields of our generous planet to benefit everyone, rather than being siphoned up by the super-rich, so we can change tax laws and introduce potentially transformative measures like a universal basic income. People want to live in balance with the environment on which we all depend for our survival, so we can adopt regenerative agricultural solutions and even choose, as Ecuador did in 2008, to recognize in law, at the level of the nation’s constitution, that nature has “the right to exist, persist, maintain, and regenerate its vital cycles.”

Measures like these could dethrone capitalism’s prime directive and replace it with a more balanced logic, that recognizes the many factors required for a healthy and thriving civilization. If done systematically enough, they could consign one-dimensional capitalism to the dustbin of history.

by Jason Hickel and Martin Kirk, Fast Company | Read more:
Image: Kseniya_Milner/iStock