Wednesday, July 19, 2017
Kurt Vonnegut Walks Into a Bar
I was on the corner of Third Avenue and Forty-eighth Street, and Kurt Vonnegut was coming toward me, walking his big, loose-boned walk. It was fall and turning cold and he looked a little unbalanced in his overcoat, handsome but tousled, with long curly hair and a heavy mustache that sometimes hid his grin. I could tell he saw me by his shrug, which he sometimes used as a greeting.
I was on my way to buy dinner for some Newsweek writers who were suspicious of me as their new assistant managing editor. I had been brought in from Rolling Stone, and no one at Newsweek had heard of me. I didn’t know them either, but I knew Kurt, who was one of the first people I met when I moved to New York. We were neighbors on Forty-eighth Street, where he lived in a big townhouse in the middle of the block, and he’d invite me over for drinks. I had gotten him to contribute to Rolling Stone by keeping an eye out for his speeches and radio appearances and then suggesting ways they could be retooled as essays.
“Come have dinner,” I said. “I’ve got some Newsweek writers who would love to meet you.”
“Not in the mood,” Kurt said.
“They’re fans,” I said. “It’s part of your job.”
Kurt lit a Pall Mall and gave me a look, one of his favorites, amused but somehow saddened by the situation. He could act, Kurt.
“Think of it as a favor to me,” I said. “They’re not sure about me, and I’ve edited you.”
“Sort of,” he said, and I knew he had already had a couple drinks.
He never got mean, but he got honest.
“What else are you doing for dinner?” I said, knowing he seldom made plans.
“The last thing I need is ass kissing,” Kurt said.
“That’s what I’m doing right now.”
“They’ll want to know which novel I like best.”
“Cat’s Cradle,” I said.
“Wrong.” He flipped the Pall Mall into Forty-eighth Street, and we started walking together toward the restaurant.
The writers were already at the table, drinks in front of them. They looked up when we came in, surprised to see Kurt with me. There were six or eight of them, including the columnist Pete Axthelm, who was my only ally going into Newsweek because I knew him from Runyon’s, a bar in our neighborhood where everyone called him Ax.
I introduced Kurt around.
“Honored,” Ax said, or something like that, and the ass kissing began.
by Terry McDonell, Electric Lit | Read more:
Image: uncredited

The Limitations of Deep Learning
Deep learning: the geometric view
The most surprising thing about deep learning is how simple it is. Ten years ago, no one expected that we would achieve such amazing results on machine perception problems by using simple parametric models trained with gradient descent. Now, it turns out that all you need is sufficiently large parametric models trained with gradient descent on sufficiently many examples. As Feynman once said about the universe, "It's not complicated, it's just a lot of it".
In deep learning, everything is a vector, i.e. everything is a point in a geometric space. Model inputs (text, images, and so on) and targets are first "vectorized", i.e. turned into some initial input vector space and target vector space. Each layer in a deep learning model applies one simple geometric transformation to the data that goes through it. Together, the chain of layers of the model forms one very complex geometric transformation, broken down into a series of simple ones. This complex transformation attempts to map the input space to the target space, one point at a time. This transformation is parametrized by the weights of the layers, which are iteratively updated based on how well the model is currently performing. A key characteristic of this geometric transformation is that it must be differentiable, which is required in order for us to be able to learn its parameters via gradient descent. Intuitively, this means that the geometric morphing from inputs to outputs must be smooth and continuous—a significant constraint.
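The chain-of-simple-transformations idea can be made concrete in a few lines of NumPy. This is an illustrative sketch of my own, not code from the post: a hypothetical two-layer network (affine map plus pointwise nonlinearity, chained twice) fit by plain gradient descent on the XOR problem, with every step differentiable so the error signal can flow back through the whole chain.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: XOR, a mapping no single affine transformation can capture.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# Two layers = a chain of two simple geometric transformations.
W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)

lr = 0.5
for _ in range(2000):
    # Forward pass: affine map, nonlinearity, affine map, sigmoid.
    h = np.tanh(X @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
    # Backward pass: because every step is differentiable, gradients
    # propagate back through the whole chain of transformations.
    g = (p - y) / len(X)                # d(cross-entropy)/d(logits)
    gh = (g @ W2.T) * (1.0 - h ** 2)    # back through the tanh layer
    W2 -= lr * (h.T @ g); b2 -= lr * g.sum(0)
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(0)

print(p.round().ravel())
```

The layer sizes and learning rate here are arbitrary; the point is only that the whole pipeline is one parametrized, differentiable map, tuned point by point to match inputs to targets.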
The whole process of applying this complex geometric transformation to the input data can be visualized in 3D by imagining a person trying to uncrumple a paper ball: the crumpled paper ball is the manifold of the input data that the model starts with. Each movement operated by the person on the paper ball is similar to a simple geometric transformation operated by one layer. The full uncrumpling gesture sequence is the complex transformation of the entire model. Deep learning models are mathematical machines for uncrumpling complicated manifolds of high-dimensional data.
That's the magic of deep learning: turning meaning into vectors, into geometric spaces, then incrementally learning complex geometric transformations that map one space to another. All you need are spaces of sufficiently high dimensionality in order to capture the full scope of the relationships found in the original data.
The limitations of deep learning
The space of applications that can be implemented with this simple strategy is nearly infinite. And yet, many more applications are completely out of reach for current deep learning techniques—even given vast amounts of human-annotated data. Say, for instance, that you could assemble a dataset of hundreds of thousands—even millions—of English language descriptions of the features of a software product, as written by a product manager, as well as the corresponding source code developed by a team of engineers to meet these requirements. Even with this data, you could not train a deep learning model to simply read a product description and generate the appropriate codebase. That's just one example among many. In general, anything that requires reasoning—like programming, or applying the scientific method—long-term planning, and algorithmic-like data manipulation, is out of reach for deep learning models, no matter how much data you throw at them. Even learning a sorting algorithm with a deep neural network is tremendously difficult.
This is because a deep learning model is "just" a chain of simple, continuous geometric transformations mapping one vector space into another. All it can do is map one data manifold X into another manifold Y, assuming the existence of a learnable continuous transform from X to Y, and the availability of a dense sampling of X:Y to use as training data. So even though a deep learning model can be interpreted as a kind of program, inversely most programs cannot be expressed as deep learning models—for most tasks, either there exists no corresponding practically-sized deep neural network that solves the task, or even if there exists one, it may not be learnable, i.e. the corresponding geometric transform may be far too complex, or there may not be appropriate data available to learn it.
Scaling up current deep learning techniques by stacking more layers and using more training data can only superficially palliate some of these issues. It will not solve the more fundamental problem that deep learning models are very limited in what they can represent, and that most of the programs that one may wish to learn cannot be expressed as a continuous geometric morphing of a data manifold.
The risk of anthropomorphizing machine learning models
One very real risk with contemporary AI is that of misinterpreting what deep learning models do, and overestimating their abilities. A fundamental feature of the human mind is our "theory of mind", our tendency to project intentions, beliefs and knowledge on the things around us. Drawing a smiley face on a rock suddenly makes it "happy"—in our minds. Applied to deep learning, this means that when we are able to somewhat successfully train a model to generate captions to describe pictures, for instance, we are led to believe that the model "understands" the contents of the pictures, as well as the captions it generates. We then proceed to be very surprised when any slight departure from the sort of images present in the training data causes the model to start generating completely absurd captions. (...)
Humans are capable of far more than mapping immediate stimuli to immediate responses, like a deep net, or maybe an insect, would do. They maintain complex, abstract models of their current situation, of themselves, of other people, and can use these models to anticipate different possible futures and perform long-term planning. They are capable of merging together known concepts to represent something they have never experienced before—like picturing a horse wearing jeans, for instance, or imagining what they would do if they won the lottery. This ability to handle hypotheticals, to expand our mental model space far beyond what we can experience directly, in a word, to perform abstraction and reasoning, is arguably the defining characteristic of human cognition. I call it "extreme generalization": an ability to adapt to novel, never experienced before situations, using very little data or even no new data at all.
This stands in sharp contrast with what deep nets do, which I would call "local generalization": the mapping from inputs to outputs performed by deep nets quickly stops making sense if new inputs differ even slightly from what they saw at training time. Consider, for instance, the problem of learning the appropriate launch parameters to get a rocket to land on the moon. If you were to use a deep net for this task, whether trained with supervised learning or reinforcement learning, you would need to feed it thousands or even millions of launch trials, i.e. you would need to expose it to a dense sampling of the input space, in order to learn a reliable mapping from input space to output space. By contrast, humans can use their power of abstraction to come up with physical models—rocket science—and derive an exact solution that will get the rocket on the moon in just one or a few trials. Similarly, if you developed a deep net controlling a human body, and wanted it to learn to safely navigate a city without getting hit by cars, the net would have to die many thousands of times in various situations until it could infer that cars are dangerous, and develop appropriate avoidance behaviors. Dropped into a new city, the net would have to relearn most of what it knows. On the other hand, humans are able to learn safe behaviors without having to die even once—again, thanks to their power of abstract modeling of hypothetical situations. (...)
Take-aways
Here's what you should remember: the only real success of deep learning so far has been the ability to map space X to space Y using a continuous geometric transform, given large amounts of human-annotated data. Doing this well is a game-changer for essentially every industry, but it is still a very long way from human-level AI.
To lift some of these limitations and start competing with human brains, we need to move away from straightforward input-to-output mappings, and on to reasoning and abstraction. A likely appropriate substrate for abstract modeling of various situations and concepts is that of computer programs. We have said before (Note: in Deep Learning with Python) that machine learning models could be defined as "learnable programs"; currently we can only learn programs that belong to a very narrow and specific subset of all possible programs. But what if we could learn any program, in a modular and reusable way? Let's see in the next post what the road ahead may look like.
You can read the second part here: The future of deep learning.
by Francois Chollet, Keras Blog | Read more:
Image: Getty
[ed. See also: 2016: The year deep learning took over the internet.]
Long Strange Trip
Amir Bar-Lev’s rockumentary, Long Strange Trip, about the Grateful Dead, is aptly named for what is arguably the band’s most famous lyric: What a long, strange trip it’s been. The film takes you on a four-hour ride (much like the band's live shows) but this is not just another indulgent music doc.
Executive-produced by Martin Scorsese, the film digs deeply into the bizarre phenomenon that surrounded “The Dead” for decades—obsessive fans, called Deadheads, became a cult-like following that elevated the band’s ringmaster, Jerry Garcia (Aug. 1, 1942–Aug. 9, 1995), to a status he never wanted.
The must-see film includes 17 interviews, 1,100 rare photos and loads of footage you’ve never seen. Deadheads will be ecstatic. Bar-Lev doesn’t tell you what to think. Instead he offers many points of view. One theory is that the die-hard Deadheads were the major cause of Garcia’s descent into heroin. I didn’t buy that so I reached out to Grateful Dead insider Dennis McNally, whose book, A Long Strange Trip: The Inside History of the Grateful Dead, provided Bar-Lev with much of the band’s story. McNally spent 30 years with the band beginning when Garcia invited him to become their biographer in 1981.
I asked McNally whether he thought it was the Deadheads who drove Garcia to abuse heroin, or whether he felt, as I do, that it was a progression from one addiction to another. McNally answered:
“I don’t think there’s an inherent progression [of addiction], I mean everybody starts with milk, too, you know? He turned to self-medication for any number of reasons…. His father died when he was four, he didn’t get the attention from his mother that he felt he deserved. Eventually, yes, but not specifically the fame. It was the responsibility. Jerry wanted to be Huckleberry Finn, well, if Huckleberry Finn was allowed to smoke joints and play guitar and cruise down the river on a raft.”
McNally pointed out that Garcia “employed 50 people, me among them. We expected paychecks every couple of weeks. There was a weight of responsibility on him for employees, their families, but also the million Deadheads who were addicted to the music and the shows. They expected him to come out and play 80 shows a year. That wore on him. He was 53 years old and a walking candidate for a heart attack. Still smoked cigarettes, had a terrible diet. He was a diabetic who did not take it seriously.” (...)
The movie mimicked Garcia’s life. It began as a celebration but ended with a trip down the dark cellar of no return. Garcia probably didn’t set out to become a heroin addict. Maybe he just thought he could handle it. Or maybe, like my own drug use, just got to a point where he wanted out. Many people didn’t know the flip side of his jolly exterior was a dark depression.
The Dead’s casualties also included Ron “Pigpen” McKernan, who drank himself to death in 1973 at age 27. Keith Godchaux died at age 32 in 1980 from a car crash after he and friend Courtenay Pollock had partied for hours to celebrate Godchaux’s birthday. The intoxicated driver—Pollock—survived the accident. Brent Mydland, mostly known as a drinker, died from a “speedball” (morphine and cocaine) in 1990. After Mydland’s death, keyboardist Vince Welnick joined the band; he died by suicide in 2006. Phil Lesh’s drug use led to his contracting hepatitis C. In the fall of 1998, his life was saved by a liver transplant.
Next I tracked down former president of Warner Bros. Records, Joe Smith, the executive who first signed the Dead. His presence brought a lot of fun into the film during the celebratory first half. “They were totally insane at times,” said Smith. “Trying to corral them was a very difficult thing, but we developed a friendship and Jerry Garcia was very bright. They were all bright. I established relationships with all of them, but not without difficulty because they didn’t want relationships. They were stoned most of the time. Phil Lesh was a particularly difficult guy.”
“How so?” I asked.
“He disputed and contested everything. One time, when I was trying to promote them, he said, ‘No. Let’s go out and record 30 minutes of heavy air on a smoggy day in L.A. Then we’re gonna record 30 minutes of clear air on a beautiful day, and we’ll mix it and that’ll be a rhythm soundtrack.’”
by Dorri Olds, The Fix | Read more:
Image: Michael Conway
Tuesday, July 18, 2017
Letting Robots Teach Schoolkids
For all the talk about whether robots will take our jobs, a new worry is emerging, namely whether we should let robots teach our kids. As the capabilities of smart software and artificial intelligence advance, parents, teachers, teachers’ unions and the children themselves will all have stakes in the outcome.
I, for one, say bring on the robots, or at least let us proceed with the experiments. You can imagine robots in schools serving as pets, peers, teachers, tutors, monitors and therapists, among other functions. They can store and communicate vast troves of knowledge, or provide a virtually inexhaustible source of interactive exchange on any topic that can be programmed into software.
But perhaps more important in the longer run, robots may also bring many introverted, disabled, or non-conforming children into greater classroom participation. They are less threatening, always available, and they never tire or lose patience.
Human teachers sometimes feel the need to bully or put down their students. That’s a way of maintaining classroom control, but it also harms children and discourages learning. A robot in contrast need not resort to tactics of psychological intimidation.
The pioneer in robot education so far is, not surprisingly, Singapore. The city-state has begun experiments with robotic aides at the kindergarten level, mostly as instructor aides and for reading stories and also teaching social interactions. In the U.K., researchers have developed a robot to help autistic children better learn how to interact with their peers.
I can imagine robots helping non-English-speaking children make the transition to bilingualism. Or how about using robots in Asian classrooms where the teachers themselves do not know enough English to teach the language effectively?
A big debate today is how we can teach ourselves to work with artificial intelligence, so as to prevent eventual widespread technological unemployment. Exposing children to robots early, and having them grow accustomed to human-machine interaction, is one path toward this important goal.
In a recent Financial Times interview, Sherry Turkle, a professor of social psychology at MIT and a leading expert on cyber interactions, criticized robot education. “The robot can never be in an authentic relationship,” she said. “Why should we normalize what is false and in the realm of [a] pretend relationship from the start?” She’s opposed to robot companions more generally, again for their artificiality.
Yet K-12 education itself is a highly artificial creation, from the chalk to the schoolhouses to the standardized achievement tests, not to mention internet learning and classroom TV. Thinking back on my own experience, I didn’t especially care if my teachers were “authentic” (in fact, I suspected quite a few were running a kind of personality con), provided they communicated their knowledge and radiated some charisma. (...)
Keep in mind that robot instructors are going to come through toys and the commercial market in any case, whether schools approve or not. Is it so terrible an idea for some of those innovations to be supervised by, and combined with, the efforts of teachers and the educational establishment?
[ed. See also: Give robots an 'ethical black box' to track and explain decisions]
by Tyler Cowen, Bloomberg | Read more:
Image: Nigel Treblin/Getty Images
Sunday, July 16, 2017
Saturday, July 15, 2017
Friday, July 14, 2017
Trump's Russian Laundromat
In 1984, a Russian émigré named David Bogatin went shopping for apartments in New York City. The 38-year-old had arrived in America seven years before, with just $3 in his pocket. But for a former pilot in the Soviet Army—his specialty had been shooting down Americans over North Vietnam—he had clearly done quite well for himself. Bogatin wasn’t hunting for a place in Brighton Beach, the Brooklyn enclave known as “Little Odessa” for its large population of immigrants from the Soviet Union. Instead, he was fixated on the glitziest apartment building on Fifth Avenue, a gaudy, 58-story edifice with gold-plated fixtures and a pink-marble atrium: Trump Tower.
A monument to celebrity and conspicuous consumption, the tower was home to the likes of Johnny Carson, Steven Spielberg, and Sophia Loren. Its brash, 38-year-old developer was something of a tabloid celebrity himself. Donald Trump was just coming into his own as a serious player in Manhattan real estate, and Trump Tower was the crown jewel of his growing empire. From the day it opened, the building was a hit—all but a few dozen of its 263 units had sold in the first few months. But Bogatin wasn’t deterred by the limited availability or the sky-high prices. The Russian plunked down $6 million to buy not one or two, but five luxury condos. The big check apparently caught the attention of the owner. According to Wayne Barrett, who investigated the deal for the Village Voice, Trump personally attended the closing, along with Bogatin.
If the transaction seemed suspicious—multiple apartments for a single buyer who appeared to have no legitimate way to put his hands on that much money—there may have been a reason. At the time, Russian mobsters were beginning to invest in high-end real estate, which offered an ideal vehicle to launder money from their criminal enterprises. “During the ’80s and ’90s, we in the U.S. government repeatedly saw a pattern by which criminals would use condos and high-rises to launder money,” says Jonathan Winer, a deputy assistant secretary of state for international law enforcement in the Clinton administration. “It didn’t matter that you paid too much, because the real estate values would rise, and it was a way of turning dirty money into clean money. It was done very systematically, and it explained why there are so many high-rises where the units were sold but no one is living in them.” When Trump Tower was built, as David Cay Johnston reports in The Making of Donald Trump, it was only the second high-rise in New York that accepted anonymous buyers.
In 1987, just three years after he attended the closing with Trump, Bogatin pleaded guilty to taking part in a massive gasoline-bootlegging scheme with Russian mobsters. After he fled the country, the government seized his five condos at Trump Tower, saying that he had purchased them to “launder money, to shelter and hide assets.” A Senate investigation into organized crime later revealed that Bogatin was a leading figure in the Russian mob in New York. His family ties, in fact, led straight to the top: His brother ran a $150 million stock scam with none other than Semion Mogilevich, whom the FBI considers the “boss of bosses” of the Russian mafia. At the time, Mogilevich—feared even by his fellow gangsters as “the most powerful mobster in the world”—was expanding his multibillion-dollar international criminal syndicate into America.
In 1987, on his first trip to Russia, Trump visited the Winter Palace with Ivana. The Soviets flew him to Moscow—all expenses paid—to discuss building a luxury hotel across from the Kremlin.
Since Trump’s election as president, his ties to Russia have become the focus of intense scrutiny, most of which has centered on whether his inner circle colluded with Russia to subvert the U.S. election. A growing chorus in Congress is also asking pointed questions about how the president built his business empire. Rep. Adam Schiff, the ranking Democrat on the House Intelligence Committee, has called for a deeper inquiry into “Russian investment in Trump’s businesses and properties.”
The very nature of Trump’s businesses—all of which are privately held, with few reporting requirements—makes it difficult to root out the truth about his financial deals. And the world of Russian oligarchs and organized crime, by design, is shadowy and labyrinthine. For the past three decades, state and federal investigators, as well as some of America’s best investigative journalists, have sifted through mountains of real estate records, tax filings, civil lawsuits, criminal cases, and FBI and Interpol reports, unearthing ties between Trump and Russian mobsters like Mogilevich. To date, no one has documented that Trump was even aware of any suspicious entanglements in his far-flung businesses, let alone that he was directly compromised by the Russian mafia or the corrupt oligarchs who are closely allied with the Kremlin. So far, when it comes to Trump’s ties to Russia, there is no smoking gun.
But even without an investigation by Congress or a special prosecutor, there is much we already know about the president’s debt to Russia. A review of the public record reveals a clear and disturbing pattern: Trump owes much of his business success, and by extension his presidency, to a flow of highly suspicious money from Russia. Over the past three decades, at least 13 people with known or alleged links to Russian mobsters or oligarchs have owned, lived in, and even run criminal activities out of Trump Tower and other Trump properties. Many used his apartments and casinos to launder untold millions in dirty money. Some ran a worldwide high-stakes gambling ring out of Trump Tower—in a unit directly below one owned by Trump. Others provided Trump with lucrative branding deals that required no investment on his part. Taken together, the flow of money from Russia provided Trump with a crucial infusion of financing that helped rescue his empire from ruin, burnish his image, and launch his career in television and politics. “They saved his bacon,” says Kenneth McCallion, a former assistant U.S. attorney in the Reagan administration who investigated ties between organized crime and Trump’s developments in the 1980s.
It’s entirely possible that Trump was never more than a convenient patsy for Russian oligarchs and mobsters, with his casinos and condos providing easy pass-throughs for their illicit riches. At the very least, with his constant need for new infusions of cash and his well-documented troubles with creditors, Trump made an easy “mark” for anyone looking to launder money. But whatever his knowledge about the source of his wealth, the public record makes clear that Trump built his business empire in no small part with a lot of dirty money from a lot of dirty Russians—including the dirtiest and most feared of them all.
by Craig Unger, New Republic | Read more:
Image: Alex Nabaum
Thursday, July 13, 2017
The Brave New World of Gene Editing
In the last few years, genetic testing has entered the commercial mainstream. Direct-to-consumer testing is now commonplace, performed by companies such as 23andMe (humans have twenty-three pairs of chromosomes). Much of the interest in such tests is based not only on the claim that they enable us to trace our ancestry, but also on the insight into our future health that they purport to provide. At the beginning of April, 23andMe received FDA approval to sell a do-it-yourself genetic test for ten diseases, including Parkinson’s and late-onset Alzheimer’s. You spit in a tube, send it off to the company, and after a few days you get your results. But as Steven Heine, a Canadian professor of social and cultural psychology who undertook several such tests on himself, explains in DNA Is Not Destiny, that is where the problems begin.
Some diseases are indeed entirely genetically determined—Huntington’s disease, Duchenne muscular dystrophy, and so on. If you have the faulty gene, you will eventually have the disease. Whether you want to be told by e-mail that you will develop a life-threatening disease is something you need to think hard about before doing the test. But for the vast majority of diseases, our future is not written in our genes, and the results of genetic tests can be misleading. (...)
More troublingly still, however imperfect its predictive value, the tsunami of human genetic information now pouring from DNA sequencers all over the planet raises the possibility that our DNA could be used against us. The Genetic Information Nondiscrimination Act of 2008 made it illegal for US medical insurance companies to discriminate on the basis of genetic information (although strikingly not for life insurance or long-term care). However, the health care reform legislation recently passed by the House (the American Health Care Act, known as Trumpcare) allows insurers to charge higher premiums for people with a preexisting condition. It is hard to imagine anything more preexisting than a gene that could or, even worse, will lead to your getting a particular disease; and under such a health system, insurance companies would have every incentive to find out the risks present in your DNA. If this component of the Republican health care reform becomes law, the courts may conclude that a genetic test qualifies as proof of a preexisting condition. If genes end up affecting health insurance payments, some people might choose not to take these tests.
But of even greater practical and moral significance is the second part of the revolution in genetics: our ability to modify or “edit” the DNA sequences of humans and other creatures. This technique, known as CRISPR (pronounced “crisper”), was first applied to human cells in 2013, and has already radically changed research in the life sciences. It works in pretty much every species in which it has been tried and is currently undergoing its first clinical trials. HIV, leukemia, and sickle-cell anemia will probably soon be treated using CRISPR.
In A Crack in Creation, one of the pioneers of this technique, the biochemist Jennifer Doudna of the University of California at Berkeley, together with her onetime student Samuel Sternberg, describes the science behind CRISPR and the history of its discovery. This guidebook to the CRISPR revolution gives equal weight to the science of CRISPR and the profound ethical questions it raises. The book is required reading for every concerned citizen—the material it covers should be discussed in schools, colleges, and universities throughout the country. Community and patient groups need to understand the implications of this technology and help decide how it should and should not be applied, while politicians must confront the dramatic challenges posed by gene editing.
The story of CRISPR is a case study in how scientific inquiry that is purely driven by curiosity can lead to major advances. Beginning in the 1980s, scientists noticed that parts of the genomes of microbes contained regular DNA sequences that were repeated and consisted of approximate palindromes. (In fact, in general only a few motifs are roughly repeated within each “palindrome.”) Eventually, these sequences were given the snappy acronym CRISPR—clustered regularly interspaced short palindromic repeats. A hint about their function emerged when it became clear that the bits of DNA found in the spaces between the repeats—called spacer DNA—were not some random bacterial junk, but instead had come from viruses and had been integrated into the microbe’s genome.
These bits of DNA turned out to be very important in the life of the microbe. In 2002, scientists discovered that the CRISPR sequences activate a series of proteins—known as CRISPR-associated (or Cas) proteins—that can unravel and attack DNA. Then in 2007, it was shown that the CRISPR sequence and one particular protein (often referred to as CRISPR-Cas9) act together as a kind of immune system for microbes: if a particular virus’s DNA is incorporated into a microbe’s CRISPR sequences, the microbe can recognize an invasion by that virus and activate Cas proteins to snip it up.
This was a pretty big deal for microbiologists, but the excitement stems from the realization that the CRISPR-associated proteins could be used to alter any DNA to achieve a desired sequence. At the beginning of 2013, three groups of researchers, from the University of California at Berkeley (led by Jennifer Doudna), Harvard Medical School (led by George Church), and the Broad Institute of MIT and Harvard (led by Feng Zhang), independently showed that the CRISPR technique could be used to modify human cells. Gene editing was born.
The possibilities of CRISPR are immense. If you know a DNA sequence from a given organism, you can chop it up, delete it, and change it at will, much like what a word-processing program can do with texts. You can even use CRISPR to introduce additional control elements—for example to engineer a gene so that it is activated by light stimulation. In experimental organisms this can provide an extraordinary degree of control in studies of gene function, enabling scientists to explore the consequences of gene expression at a particular moment in the organism’s life or in a particular environment.
There appear to be few limits to how CRISPR might be used. One is technical: it can be difficult to deliver the specially constructed CRISPR DNA sequences to specific cells in order to change their genes. But a larger and more intractable concern is ethical: Where and when should this technology be used? In 2016, the power of gene editing and the relative ease of its application led James Clapper, President Obama’s director of national intelligence, to describe CRISPR as a weapon of mass destruction. Well-meaning biohackers are already selling kits over the Internet that enable anyone with high school biology to edit the genes of bacteria. The plotline of a techno-thriller may be writing itself in real time. (...)
The second half of A Crack in Creation deals with the profound ethical issues that are raised by gene editing. These pages are not dry or abstract—Doudna uses her own shifting positions on these questions as a way for the reader to explore different possibilities. However, she often offers no clear way forward, beyond the fairly obvious warning that we need to be careful. For example, Doudna was initially deeply opposed to any manipulation of the human genome that could be inherited by future generations—this is called germline manipulation, and is carried out on eggs or sperm, or on a single-cell embryo. (Genetic changes produced by all currently envisaged human uses of CRISPR, for example on blood cells, would not be passed to the patient’s children because these cells are not passed on.)
Although laws and guidelines differ among countries, for the moment implantation of genetically edited embryos is generally considered to be wrong, and in 2015 a nonbinding international moratorium on the manipulation of the human germline was reached at a meeting held in Washington by the National Academy of Sciences, the Institute of Medicine, the Royal Society of London, and the Chinese Academy of Sciences. Yet it seems inevitable that the world’s first CRISPR baby will be born sometime in the next decade, most likely as a result of a procedure that is intended to permanently remove genes that cause a particular disease. (...)
Like many scientists and the vast majority of the general public, Doudna remains hostile to changing the germline in an attempt to make humans smarter, more beautiful, or stronger, but she recognizes that it is extremely difficult to draw a line between remedial action and enhancement. Reassuringly, both A Crack in Creation and DNA Is Not Destiny show that these eugenic fantasies will not succeed—such characteristics are highly complex, and to the extent that they have a genetic component, it is encoded by a large number of genes each of which has a very small effect, and which interact in unknown ways. We are not on the verge of the creation of a CRISPR master race.
Nevertheless, Doudna does accept that there is a danger that the new technology will “transcribe our societies’ financial inequality into our genetic code,” as the rich will be able to use it to enhance their offspring while the poor will not. Unfortunately, her only solution is to suggest that we should start planning for international guidelines governing germline gene editing, with researchers and lawmakers (the public are not mentioned) encouraged to find “the right balance between regulation and freedom.”
[ed. Ever get the feeling that there's some kind of morbid inevitability to technological progress? I'm not sure what'll kill us first: nuclear warheads, climate change, gene editing, artificial intelligence, biological weapons, or politicians.]
Some diseases are indeed entirely genetically determined—Huntington’s disease, Duchenne muscular dystrophy, and so on. If you have the faulty gene, you will eventually have the disease. Whether you want to be told by e-mail that you will develop a life-threatening disease is something you need to think hard about before doing the test. But for the vast majority of diseases, our future is not written in our genes, and the results of genetic tests can be misleading. (...)

But of even greater practical and moral significance is the second part of the revolution in genetics: our ability to modify or “edit” the DNA sequences of humans and other creatures. This technique, known as CRISPR (pronounced “crisper”), was first applied to human cells in 2013, and has already radically changed research in the life sciences. It works in pretty much every species in which it has been tried and is currently undergoing its first clinical trials. HIV, leukemia, and sickle-cell anemia will probably soon be treated using CRISPR.
In A Crack in Creation, one of the pioneers of this technique, the biochemist Jennifer Doudna of the University of California at Berkeley, together with her onetime student Samuel Sternberg, describes the science behind CRISPR and the history of its discovery. This guidebook to the CRISPR revolution gives equal weight to the science of CRISPR and the profound ethical questions it raises. The book is required reading for every concerned citizen—the material it covers should be discussed in schools, colleges, and universities throughout the country. Community and patient groups need to understand the implications of this technology and help decide how it should and should not be applied, while politicians must confront the dramatic challenges posed by gene editing.
The story of CRISPR is a case study in how scientific inquiry that is purely driven by curiosity can lead to major advances. Beginning in the 1980s, scientists noticed that parts of the genomes of microbes contained regular DNA sequences that were repeated and consisted of approximate palindromes. (In fact, in general only a few motifs are roughly repeated within each “palindrome.”) Eventually, these sequences were given the snappy acronym CRISPR—clustered regularly interspersed short palindromic repeats. A hint about their function emerged when it became clear that the bits of DNAfound in the spaces between the repeats—called spacer DNA—were not some random bacterial junk, but instead had come from viruses and had been integrated into the microbe’s genome.
These bits of DNA turned out to be very important in the life of the microbe. In 2002, scientists discovered that the CRISPR sequences activate a series of proteins—known as CRISPR-associated (or Cas) proteins—that can unravel and attack DNA. Then in 2007, it was shown that the CRISPR sequence and one particular protein (often referred to as CRISPR-Cas9) act together as a kind of immune system for microbes: if a particular virus’s DNA is incorporated into a microbe’s CRISPR sequences, the microbe can recognize an invasion by that virus and activate Cas proteins to snip it up.
This was a pretty big deal for microbiologists, but the excitement stems from the realization that the CRISPR-associated proteins could be used to alter any DNA to achieve a desired sequence. At the beginning in 2013, three groups of researchers, from the University of California at Berkeley (led by Jennifer Doudna), Harvard Medical School (led by George Church), and the Broad Institute of MIT and Harvard (led by Feng Zhang), independently showed that the CRISPR technique could be used to modify human cells. Gene editing was born.
The possibilities of CRISPR are immense. If you know a DNA sequence from a given organism, you can chop it up, delete it, and change it at will, much like what a word-processing program can do with texts. You can even use CRISPR to introduce additional control elements—for example to engineer a gene so that it is activated by light stimulation. In experimental organisms this can provide an extraordinary degree of control in studies of gene function, enabling scientists to explore the consequences of gene expression at a particular moment in the organism’s life or in a particular environment.
There appear to be few limits to how CRISPR might be used. One is technical: it can be difficult to deliver the specially constructed CRISPR DNA sequences to specific cells in order to change their genes. But a larger and more intractable concern is ethical: Where and when should this technology be used? In 2016, the power of gene editing and the relative ease of its application led James Clapper, President Obama’s director of national intelligence, to describe CRISPR as a weapon of mass destruction. Well-meaning biohackers are already selling kits over the Internet that enable anyone with high school biology to edit the genes of bacteria. The plotline of a techno-thriller may be writing itself in real time. (...)
The second half of A Crack in Creation deals with the profound ethical issues that are raised by gene editing. These pages are not dry or abstract—Doudna uses her own shifting positions on these questions as a way for the reader to explore different possibilities. However, she often offers no clear way forward, beyond the fairly obvious warning that we need to be careful. For example, Doudna was initially deeply opposed to any manipulation of the human genome that could be inherited by future generations—this is called germline manipulation, and is carried out on eggs or sperm, or on a single-cell embryo. (Genetic changes produced by all currently envisaged human uses of CRISPR, for example on blood cells, would not be passed to the patient’s children because these cells are not passed on.)
Although laws and guidelines differ among countries, for the moment implantation of genetically edited embryos is generally considered to be wrong, and in 2015 a nonbinding international moratorium on the manipulation of the human germline was reached at a meeting held in Washington by the National Academy of Sciences, the Institute of Medicine, the Royal Society of London, and the Chinese Academy of Sciences. Yet it seems inevitable that the world’s first CRISPR baby will be born sometime in the next decade, most likely as a result of a procedure that is intended to permanently remove genes that cause a particular disease. (...)
Like many scientists and the vast majority of the general public, Doudna remains hostile to changing the germline in an attempt to make humans smarter, more beautiful, or stronger, but she recognizes that it is extremely difficult to draw a line between remedial action and enhancement. Reassuringly, both A Crack in Creation and DNA Is Not Destiny show that these eugenic fantasies will not succeed—such characteristics are highly complex, and to the extent that they have a genetic component, it is encoded by a large number of genes, each of which has a very small effect, and which interact in unknown ways. We are not on the verge of the creation of a CRISPR master race.
Nevertheless, Doudna does accept that there is a danger that the new technology will “transcribe our societies’ financial inequality into our genetic code,” as the rich will be able to use it to enhance their offspring while the poor will not. Unfortunately, her only solution is to suggest that we should start planning for international guidelines governing germline gene editing, with researchers and lawmakers (the public are not mentioned) encouraged to find “the right balance between regulation and freedom.”
by Matthew Cobb, NY Review of Books | Read more:
Image: Anthony A. James/UC Irvine
[ed. Ever get the feeling that there's some kind of morbid inevitability to technological progress? I'm not sure what'll kill us first: nuclear warheads, climate change, gene editing, artificial intelligence, biological weapons, or politicians.]
Understanding Poetry

by Editors, Paris Review | Read more:
Image: Tamara Shopsin
I'm So Bored I Could Die
via: Sex and the City
Are You Ready To Consider That Capitalism Is The Real Problem?
In February, college sophomore Trevor Hill stood up during a televised town hall meeting in New York and posed a simple question to Nancy Pelosi, the leader of the Democrats in the House of Representatives. He cited a study by Harvard University showing that 51% of Americans between the ages of 18 and 29 no longer support the system of capitalism, and asked whether the Democrats could embrace this fast-changing reality and stake out a clearer contrast to right-wing economics.
Pelosi was visibly taken aback. “I thank you for your question,” she said, “but I’m sorry to say we’re capitalists, and that’s just the way it is.”
The footage went viral. It was powerful because of the clear contrast it set up. Trevor Hill is no hardened left-winger. He’s just your average millennial—bright, informed, curious about the world, and eager to imagine a better one. But Pelosi, a figurehead of establishment politics, refused (or was simply unable) to entertain his challenge to the status quo.
It’s not only young voters who feel this way. A YouGov poll in 2015 found that 64% of Britons believe that capitalism is unfair, that it makes inequality worse. Even in the U.S., it’s as high as 55%. In Germany, a solid 77% are skeptical of capitalism. Meanwhile, a full three-quarters of people in major capitalist economies believe that big businesses are basically corrupt.
Why do people feel this way? Probably not because they deny the abundant material benefits of modern life that many are able to enjoy. Or because they want to travel back in time and live in the U.S.S.R. It’s because they realize—either consciously or at some gut level—that there’s something fundamentally flawed about a system that has a prime directive to churn nature and humans into capital, and do it more and more each year, regardless of the costs to human well-being and to the environment we depend on.
Because let’s be clear: That’s what capitalism is, at its root. That is the sum total of the plan. We can see this embodied in the imperative to grow GDP, everywhere, year on year, at a compound rate, even though we know that GDP growth, on its own, does nothing to reduce poverty or to make people happier or healthier. Global GDP has grown 630% since 1980, and in that same time, by some measures, inequality, poverty, and hunger have all risen. (...)
It all proceeds from the same deep logic. It’s the same logic that sold lives for profit in the Atlantic slave trade, it’s the logic that gives us sweatshops and oil spills, and it’s the logic that is right now pushing us headlong toward ecological collapse and climate change.
Once we realize this, we can start connecting the dots between our different struggles. There are people in the U.S. fighting against the Keystone pipeline. There are people in Britain fighting against the privatization of the National Health Service. There are people in India fighting against corporate land grabs. There are people in Brazil fighting against the destruction of the Amazon rainforest. There are people in China fighting against poverty wages. These are all noble and important movements in their own right. But by focusing on all these symptoms we risk missing the underlying cause. And the cause is capitalism. It’s time to name the thing.
What’s so exciting about our present moment is that people are starting to do exactly that. And they are hungry for something different. For some, this means socialism. That YouGov poll showed that Americans under the age of 30 tend to have a more favorable view of socialism than they do of capitalism, which is surprising given the sheer scale of the propaganda out there designed to convince people that socialism is evil. But millennials aren’t bogged down by these dusty old binaries. For them the matter is simple: They can see that capitalism isn’t working for the majority of humanity, and they’re ready to invent something better.
What might a better world look like? There are a million ideas out there. We can start by changing how we understand and measure progress. As Robert Kennedy famously said, GDP “does not allow for the health of our children, the quality of their education, or the joy of their play . . . it measures everything, in short, except that which makes life worthwhile.”
We can change that. People want health care and education to be social goods, not market commodities, so we can choose to put public goods back in public hands. People want the fruits of production and the yields of our generous planet to benefit everyone, rather than being siphoned up by the super-rich, so we can change tax laws and introduce potentially transformative measures like a universal basic income. People want to live in balance with the environment on which we all depend for our survival, so we can adopt regenerative agricultural solutions and even choose, as Ecuador did in 2008, to recognize in law, at the level of the nation’s constitution, that nature has “the right to exist, persist, maintain, and regenerate its vital cycles.”
Measures like these could dethrone capitalism’s prime directive and replace it with a more balanced logic that recognizes the many factors required for a healthy and thriving civilization. If done systematically enough, they could consign one-dimensional capitalism to the dustbin of history.
by Jason Hickel and Martin Kirk, Fast Company | Read more:
Image: Kseniya_Milner/iStock
Wednesday, July 12, 2017
Paths of the Soul
[ed. I read about this practice for the first time last week (can't remember where). Sounds excruciating. See also: A Holy Quest in Tibet: Prostrate, and Miles to Go]
Tuesday, July 11, 2017
The Uninhabitable Earth
I. 'Doomsday'
Peering beyond scientific reticence.
It is, I promise, worse than you think. If your anxiety about global warming is dominated by fears of sea-level rise, you are barely scratching the surface of what terrors are possible, even within the lifetime of a teenager today. And yet the swelling seas — and the cities they will drown — have so dominated the picture of global warming, and so overwhelmed our capacity for climate panic, that they have occluded our perception of other threats, many much closer at hand. Rising oceans are bad, in fact very bad; but fleeing the coastline will not be enough.
Indeed, absent a significant adjustment to how billions of humans conduct their lives, parts of the Earth will likely become close to uninhabitable, and other parts horrifically inhospitable, as soon as the end of this century.
Even when we train our eyes on climate change, we are unable to comprehend its scope. This past winter, a string of days 60 and 70 degrees warmer than normal baked the North Pole, melting the permafrost that encased Norway’s Svalbard seed vault — a global food bank nicknamed “Doomsday,” designed to ensure that our agriculture survives any catastrophe, and which appeared to have been flooded by climate change less than ten years after being built.
The Doomsday vault is fine, for now: The structure has been secured and the seeds are safe. But treating the episode as a parable of impending flooding missed the more important news. Until recently, permafrost was not a major concern of climate scientists, because, as the name suggests, it was soil that stayed permanently frozen. But Arctic permafrost contains 1.8 trillion tons of carbon, more than twice as much as is currently suspended in the Earth’s atmosphere. When it thaws and is released, that carbon may evaporate as methane, which is 34 times as powerful a greenhouse-gas warming blanket as carbon dioxide when judged on the timescale of a century; when judged on the timescale of two decades, it is 86 times as powerful. In other words, we have, trapped in Arctic permafrost, twice as much carbon as is currently wrecking the atmosphere of the planet, all of it scheduled to be released at a date that keeps getting moved up, partially in the form of a gas that multiplies its warming power 86 times over.
Maybe you know that already — there are alarming stories every day, like last month’s satellite data showing the globe warming, since 1998, more than twice as fast as scientists had thought. Or the news from Antarctica this past May, when a crack in an ice shelf grew 11 miles in six days, then kept going; the break now has just three miles to go — by the time you read this, it may already have met the open water, where it will drop into the sea one of the biggest icebergs ever, a process known poetically as “calving.”
But no matter how well-informed you are, you are surely not alarmed enough. Over the past decades, our culture has gone apocalyptic with zombie movies and Mad Max dystopias, perhaps the collective result of displaced climate anxiety, and yet when it comes to contemplating real-world warming dangers, we suffer from an incredible failure of imagination. The reasons for that are many: the timid language of scientific probabilities, which the climatologist James Hansen once called “scientific reticence” in a paper chastising scientists for editing their own observations so conscientiously that they failed to communicate how dire the threat really was; the fact that the country is dominated by a group of technocrats who believe any problem can be solved and an opposing culture that doesn’t even see warming as a problem worth addressing; the way that climate denialism has made scientists even more cautious in offering speculative warnings; the simple speed of change and, also, its slowness, such that we are only seeing effects now of warming from decades past; our uncertainty about uncertainty, which the climate writer Naomi Oreskes in particular has suggested stops us from preparing as though anything worse than a median outcome were even possible; the way we assume climate change will hit hardest elsewhere, not everywhere; the smallness (two degrees) and largeness (1.8 trillion tons) and abstractness (400 parts per million) of the numbers; the discomfort of considering a problem that is very difficult, if not impossible, to solve; the altogether incomprehensible scale of that problem, which amounts to the prospect of our own annihilation; simple fear. But aversion arising from fear is a form of denial, too.
In between scientific reticence and science fiction is science itself. This article is the result of dozens of interviews and exchanges with climatologists and researchers in related fields and reflects hundreds of scientific papers on the subject of climate change. What follows is not a series of predictions of what will happen — that will be determined in large part by the much-less-certain science of human response. Instead, it is a portrait of our best understanding of where the planet is heading absent aggressive action. It is unlikely that all of these warming scenarios will be fully realized, largely because the devastation along the way will shake our complacency. But those scenarios, and not the present climate, are the baseline. In fact, they are our schedule. (...)
The Earth has experienced five mass extinctions before the one we are living through now, each so complete a slate-wiping of the evolutionary record it functioned as a resetting of the planetary clock, and many climate scientists will tell you they are the best analog for the ecological future we are diving headlong into. Unless you are a teenager, you probably read in your high-school textbooks that these extinctions were the result of asteroids. In fact, all but the one that killed the dinosaurs were caused by climate change produced by greenhouse gas. The most notorious was 252 million years ago; it began when carbon warmed the planet by five degrees, accelerated when that warming triggered the release of methane in the Arctic, and ended with 97 percent of all life on Earth dead. We are currently adding carbon to the atmosphere at a considerably faster rate; by most estimates, at least ten times faster. The rate is accelerating. This is what Stephen Hawking had in mind when he said, this spring, that the species needs to colonize other planets in the next century to survive, and what drove Elon Musk, last month, to unveil his plans to build a Mars habitat in 40 to 100 years. These are nonspecialists, of course, and probably as inclined to irrational panic as you or I. But the many sober-minded scientists I interviewed over the past several months — the most credentialed and tenured in the field, few of them inclined to alarmism and many advisers to the IPCC who nevertheless criticize its conservatism — have quietly reached an apocalyptic conclusion, too: No plausible program of emissions reductions alone can prevent climate disaster. (...)
by David Wallace-Wells, NY Magazine | Read more:
Image: Fossils by Heartless Machine
Praying For a Real Estate Crash
A year after getting married, Alex Taylor and Rachel Tuttle decided it was time to buy a home and start a family. The two Vancouver residents were in their late 30s, and each had stable, full-time jobs—Taylor served as an urban planner while Tuttle worked for a credit union. They were debt-free, and after years of hard work, frugal living, and the sale of a previous home Tuttle had owned in England, they had a down payment ready. The couple figured they could buy a fixer-upper in Vancouver’s historically low-income Downtown Eastside neighbourhood. But soon after starting the hunt in 2015, their hopes were dashed. Detached homes were averaging $1.2 million, and even though Taylor and Tuttle qualified for a mortgage, they would have faced steep monthly payments of $4,000. They adjusted their expectations and set their sights on a townhouse on the outskirts of the city. Still, the cost was too high. “It felt very risky to put that much of your savings into one investment,” says Tuttle.
That risk hasn’t stopped plenty of their peers from diving into the white-hot real estate market. Some got in before the bubble; others took the plunge more recently in a fit of panic as it seemed prices would never stop escalating. Taylor and Tuttle sensed an opportunity last year, when the province put in place measures, including a foreign buyer tax, to temper runaway house prices. Sales slumped, but prices are picking up again. “It’s discouraging,” says Tuttle. “We look at people who bought two years ago and they’ve now made 30 per cent on their purchase. You definitely feel like you’ve been left behind.” Taylor is tired of talking about the issue. “I know it’s mean to say and I know it would hurt those of our friends who completely over-extended themselves,” he says, “but honestly, we’re praying for a crash.”
He’s not the only one. As prices in Vancouver and Toronto have skyrocketed and affordability has eroded, scores of Canadians fear getting permanently shut out of the country’s two largest regions. Paying for all of the costs associated with a detached home in the Vancouver area requires 121 per cent of median household income; for a condo, it’s 46 per cent of income, making it Canada’s least affordable city, according to economists at the Royal Bank of Canada. Toronto isn’t far behind. Aggregate housing costs are 64.6 per cent of income, the worst level since 1990, when interest rates spiked.
Soaring prices have for months stoked resentment between the haves and have-nots of housing, as young, educated Canadians who in the past could be assured a shot at purchasing a home and achieving financial stability feel the opportunity slipping away. With every price spike, the antipathy has deepened, but the hostility has come into sharper relief after Ontario followed B.C.’s example this spring by introducing its own market-cooling measures. In May, sales dropped 20 per cent compared to the year before in the Greater Toronto Area while active listings surged 42.9 per cent from a record low. Those are the kinds of numbers that cause indebted homeowners to sweat, but serve as a balm for those on the sidelines in Toronto: like Taylor in B.C., many now openly cheer for the market to collapse.
Nowhere is the antagonism more evident than on social media. Facebook and Twitter are home to daily (even hourly) outrages, of course, but a recent Toronto Life article touched off a firestorm and revealed deep frustrations about the state of the housing market. The author, Catherine Jheon, recounted the “nightmare” renovation she and her husband undertook, sinking hundreds of thousands of dollars into a “crack house” purchased almost on impulse, with seemingly little to no concern for the low-income tenants who were evicted in the process. Many saw the couple as the worst kind of gentrifiers: privileged, callous and clueless. Jheon and her husband made numerous bad decisions during the renovation, but were still able to continue borrowing money (including from a wealthy relative) and were ultimately rewarded for their fecklessness with a palatial detached house in an up-and-coming neighbourhood. On social media, readers expressed intense loathing (“I hate these people so much”), threats of physical violence (“Dear god, I want to punch them in the face”), and a longing for karmic justice (“I’ve never wanted the entire real estate market to completely collapse until now”).
It’s not all jealousy, envy and social media griping. Missing out on home-ownership often means missing out on housing stability and security. Rental markets in Toronto and Vancouver are extremely tight, units suitable for raising families are highly coveted, and being subjected to the whims of a landlord can make for a precarious existence. Hundreds of tenants in Toronto’s Parkdale neighbourhood, for example, have been withholding payments for more than two months to protest steep rent hikes in apartments meant to be rent-controlled. Many believe the goal is to squeeze them out so the building’s property management firm can re-list the units at market rates, well above what the current tenants are paying. Last month, the firm’s CEO nearly ran over a tenant advocate with his truck. (...)
“A lot of people have these totally unsustainable lifestyles they’re only able to pull off because, by doing nothing but sit on their ass, their net worth goes up by a few grand every month,” says Toronto resident Phillip Mendonça-Vieira. “I don’t think there’s anyone who doesn’t own property who’s not secretly, like, ‘F–k you, guys. This is unsustainable.’” Mendonça-Vieira has taken a keen interest in housing. He co-founded a group called BetterTO to organize discussions on issues facing the city—the first, held in March, focused on housing. The 30-year-old has shared a rental for the last two years; the owners of the house took advantage of the city’s exorbitant prices and cashed out a few months ago. Mendonça-Vieira faces the prospect of moving again, at a time when he and his partner would like to settle down in preparation for having kids in the near future. Had Mendonça-Vieira, who runs a small startup, and his partner, a lawyer, been in this situation a year or two ago, they might have been able to purchase a home. But then the average home price soared more than 30 per cent since the start of 2016 alone. “Frankly, it’s kind of inconceivable to own a house,” he says. (...)
What makes matters more frustrating is homeowner opposition to rental and multi-unit developments. “We have a housing shortage, and a large group of people who don’t want more housing—often people who already have secure housing, and who get richer if there is a shortage,” says Daniel Oleksiuk, a member of Abundant Housing Vancouver, an organization that advocates for changing zoning practices to build more multi-unit housing. “There’s a class of landowners that passively grow wealthy, and another class that’s struggling to pay rent,” he says. “That’s not the Canada I learned about growing up.”
by Joe Castaldo and Catherine McIntyre, Macleans | Read more:
Image: uncredited
That risk hasn’t stopped plenty of their peers from diving into the white-hot real estate market. Some got in before the bubble; others took the plunge more recently in a fit of panic as it seemed prices would never stop escalating. Taylor and Tuttle sensed an opportunity last year, when the province put in place measures, including a foreign buyer tax, to temper runaway house prices. Sales slumped, but prices are picking up again. “It’s discouraging,” says Tuttle. “We look at people who bought two years ago and they’ve now made 30 per cent on their purchase. You definitely feel like you’ve been left behind.” Taylor is tired of talking about the issue. “I know it’s mean to say and I know it would hurt those of our friends who completely over-extended themselves,” he says, “but honestly, we’re praying for a crash.”

Soaring prices have for months stoked resentment between the haves and have-nots of housing, as young, educated Canadians who in the past could be assured a shot at purchasing a home and achieving financial stability feel the opportunity slipping away. With every price spike, the antipathy has deepened, but the hostility has come into sharper relief after Ontario followed B.C.’s example this spring by introducing its own market-cooling measures. In May, sales in the Greater Toronto Area dropped 20 per cent from a year earlier, while active listings surged 42.9 per cent from a record low. Those are the kinds of numbers that cause indebted homeowners to sweat, but serve as a balm for those on the sidelines in Toronto: like Taylor in B.C., many now openly cheer for the market to collapse.
Nowhere is the antagonism more evident than on social media. Facebook and Twitter are home to daily (even hourly) outrages, of course, but a recent Toronto Life article touched off a firestorm and revealed deep frustrations about the state of the housing market. The author, Catherine Jheon, recounted the “nightmare” renovation she and her husband undertook, sinking hundreds of thousands of dollars into a “crack house” purchased almost on impulse, with seemingly little to no concern for the low-income tenants who were evicted in the process. Many saw the couple as the worst kind of gentrifiers: privileged, callous and clueless. Jheon and her husband made numerous bad decisions during the renovation, but were still able to continue borrowing money (including from a wealthy relative) and were ultimately rewarded for their fecklessness with a palatial detached house in an up-and-coming neighbourhood. On social media, readers expressed intense loathing (“I hate these people so much”), threats of physical violence (“Dear god, I want to punch them in the face”) and a longing for karmic justice (“I’ve never wanted the entire real estate market to completely collapse until now”).
It’s not all jealousy, envy and social media griping. Missing out on home-ownership often means missing out on housing stability and security. Rental markets in Toronto and Vancouver are extremely tight, units suitable for raising families are highly coveted, and being subjected to the whims of a landlord can make for a precarious existence. Hundreds of tenants in Toronto’s Parkdale neighbourhood, for example, have been withholding payments for more than two months to protest steep rent hikes in apartments meant to be rent-controlled. Many believe the goal is to squeeze them out so the building’s property management firm can re-list the units at market rates, well above what the current tenants are paying. Last month, the firm’s CEO nearly ran over a tenant advocate with his truck. (...)
“A lot of people have these totally unsustainable lifestyles they’re only able to pull off because, by doing nothing but sit on their ass, their net worth goes up by a few grand every month,” says Toronto resident Phillip Mendonça-Vieira. “I don’t think there’s anyone who doesn’t own property who’s not secretly, like, ‘F–k you, guys. This is unsustainable.’” Mendonça-Vieira has taken a keen interest in housing. He co-founded a group called BetterTO to organize discussions on issues facing the city—the first, held in March, focused on housing. The 30-year-old has shared a rental for the last two years; the owners of the house took advantage of the city’s exorbitant prices and cashed out a few months ago. Mendonça-Vieira faces the prospect of moving again, at a time when he and his partner would like to settle down in preparation for having kids in the near future. Had Mendonça-Vieira, who runs a small startup, and his partner, a lawyer, been in this situation a year or two ago, they might have been able to purchase a home. But the average home price has soared more than 30 per cent since the start of 2016. “Frankly, it’s kind of inconceivable to own a house,” he says. (...)
What makes matters more frustrating is homeowner opposition to rental and multi-unit developments. “We have a housing shortage, and a large group of people who don’t want more housing—often people who already have secure housing, and who get richer if there is a shortage,” says Daniel Oleksiuk, a member of Abundant Housing Vancouver, an organization that advocates for changing zoning practices to build more multi-unit housing. “There’s a class of landowners that passively grow wealthy, and another class that’s struggling to pay rent,” he says. “That’s not the Canada I learned about growing up.”
by Joe Castaldo and Catherine McIntyre, Macleans | Read more:
Image: uncredited
[ed. It ain't just Canada.]
Capitalism the Apple Way vs. Capitalism the Google Way
Whichever company’s vision wins out will shape the future of the economy.
While lots of attention is directed toward identifying the next great start-up, the defining tech-industry story of the last decade has been the rise of Apple and Google. In terms of wealth creation, there is no comparison. Eight years ago, neither one of them was even in the top 10 most valuable companies in the world, and their combined market value was less than $300 billion. Now, Apple and Alphabet (Google’s parent company) have become the two most valuable companies, with a combined market capitalization of over $1.3 trillion. And increasingly, these two behemoths are starting to collide in various markets, from smartphones to home-audio devices to, according to speculation, automobiles.
But the greatest collision between Apple and Google is little noticed. The companies have taken completely different approaches to their shareholders and to the future, one willing to accede to the demands of investors and the other keeping power in the hands of founders and executives. These rival approaches are about something much bigger than just two of the most important companies in the world; they embody two alternative models of capitalism, and the one that wins out will shape the future of the economy.
In the spring of 2012, Toni Sacconaghi, a respected equity-research analyst, released a report that contemplated a radical move for Apple. He, along with other analysts, had repeatedly been pushing Apple’s CEO, Tim Cook, to consider returning some of Apple’s stockpile of cash, which approached $100 billion by the end of 2011, to shareholders. Cook, and Steve Jobs before him, had resisted similar calls so that the company could, in the words of Jobs, “keep their powder dry” and take advantage of “more strategic opportunities in the future.”
But there was another reason Apple wouldn’t so readily part with this cash: The majority of it was in Ireland because of the company’s fortuitous creation of Apple Operations International in Ireland in 1980. Since then, the vast majority of Apple’s non-U.S. profits had found their way to the country, and tapping into that cash would mean incurring significant U.S. taxes due upon repatriation to American soil. So Sacconaghi floated a bold idea: Apple should borrow the $100 billion in the U.S., and then pay it out to shareholders in the form of dividends and share buybacks. The unusual nature of the proposal attracted attention among financiers and served Sacconaghi’s presumed purpose, ratcheting up the pressure on Cook. A week later, Apple relented and announced plans to begin releasing cash via dividends.
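The arithmetic behind Sacconaghi’s proposal is easy to sketch. The $100 billion figure is from the article, but the 25 per cent repatriation tax and 3 per cent borrowing rate below are hypothetical stand-ins chosen only to illustrate the trade-off:

```python
def repatriation_cost(cash, tax_rate):
    """Tax owed immediately if overseas cash is brought back to the U.S."""
    return cash * tax_rate

def borrowing_cost(amount, interest_rate, years):
    """Simple (non-compounded) interest paid on domestically issued debt."""
    return amount * interest_rate * years

# $100B held overseas; assume a 25% repatriation tax vs. borrowing at 3% for 5 years
cash = 100_000_000_000
tax = repatriation_cost(cash, 0.25)       # $25B owed up front
interest = borrowing_cost(cash, 0.03, 5)  # roughly $15B spread over five years

print(tax > interest)  # -> True: borrowing is cheaper under these assumptions
```

Under any assumptions in this neighbourhood, borrowing against the overseas cash costs far less than repatriating it, which is why the proposal was taken seriously.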
The results of Sacconaghi’s report were not lost on Silicon Valley, and Google responded three weeks later. At the time, the share structure that the company put in place when it went public in 2004 was becoming fragile. This original arrangement allowed Google’s founders to maintain voting control over the company, even as their share of ownership shrunk as more shares were issued. The explicit premise was that this structure would “protect Google from outside pressures and the temptation to sacrifice future opportunities to meet short-term demands.”
But by the time of Apple’s announcement in March 2012, this bulwark against outside influence was eroding, as Google’s founders continued to sell stock and employees were issued shares in their compensation packages. A few weeks after Apple’s concession to shareholders, the founders of Google announced a new share structure that would defend against a similar situation: The structure gave the founders’ shares 10 times the voting power of regular shares, ensuring they’d dictate the company’s strategy long into the future and that Google was, in the words of the founders, “set up for success for decades to come.”
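The mechanics of that dual-class structure are worth making concrete. The 10x voting multiplier is from the article; the share counts below are hypothetical, chosen only to show how a minority economic stake can translate into majority voting control:

```python
def voting_share(founder_shares, regular_shares, multiplier=10):
    """Fraction of total votes held by founders under a dual-class structure
    where each founder share carries `multiplier` votes."""
    founder_votes = founder_shares * multiplier
    total_votes = founder_votes + regular_shares
    return founder_votes / total_votes

# Hypothetical: founders hold 15% of shares outstanding, yet...
print(round(voting_share(15, 85), 2))  # -> 0.64, a comfortable voting majority
```

Because the multiplier applies per share, founders can keep selling stock (and the company can keep issuing it) while their voting control erodes far more slowly than their ownership.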
What has happened to Google and Apple in the wake of these events is the defining story of early 21st-century capitalism. Apple’s decision in 2012 to begin paying dividends didn’t satiate shareholders—it sparked a wider revolt. Several hedge funds started asking for much larger payouts, with some of them filing suits against Apple and even proposing an “iPref”—a new type of share that would allow Apple to release much more cash in a way that didn’t incur as high a tax bill. In 2013 and 2014, Apple upped its commitments to distribute cash. From 2013 to March 2017, the company released $200 billion via dividends and buybacks—an amount that is equivalent to, using figures in S&P’s Capital IQ database, more than 72 percent of its operating cash flow (a common metric of performance that is about cash generation rather than profit accrual) during that period. And to help finance this, Apple took on $99 billion in debt. Sacconaghi’s vision had come true.
What has Google done in that same period? Google is, like Apple, making loads of money. From 2013 to March 2017, it generated $114 billion in operating cash flow. How much has the company distributed to shareholders? In contrast to Apple’s 72 percent payout rate, Google has only distributed 6 percent of that money to shareholders.
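The comparison reduces to a single ratio. A minimal sketch using the article’s figures—note that Apple’s operating cash flow and Google’s dollar distributions are backed out from the stated percentages, so treat them as approximations:

```python
def payout_ratio(distributed, operating_cash_flow):
    """Fraction of operating cash flow returned to shareholders
    via dividends and buybacks."""
    return distributed / operating_cash_flow

# Figures for 2013 to March 2017, in billions of dollars.
apple_ocf = 278    # implied by $200B distributed at a ~72% payout rate
google_ocf = 114   # stated directly in the article

print(round(payout_ratio(200, apple_ocf), 2))   # -> 0.72
print(round(payout_ratio(6.8, google_ocf), 2))  # -> 0.06 (~$6.8B implied)
```

The same metric, applied to two companies of comparable profitability, yields a twelvefold difference—which is the whole argument of the piece in one number.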
The paths taken by Apple and Google manifest alternative answers to one of the main questions facing capitalism today: What should public companies do with all of the money that they’re making? Even as corporations have brought in enormous profits, there has been a shortage of lucrative opportunities for investment and growth, creating surpluses of cash. This imbalance has resulted in the pileup of $2 trillion on corporate balance sheets. As companies continue to generate more profits than they need to fund their own growth, the question becomes: Who will decide what to do with all those profits—managers or investors? At Google, where the founders and executives reign supreme, insulated by their governance structure, the answer is the former. At Apple, where the investors are in charge because of the absence of one large manager-shareholder, it’s the latter. (To be clear, even though Apple’s previous efforts to stifle investors’ concerns were no longer tenable, the company can still afford to spend mightily on research and development.)
Why has each company taken the approach that it has? These two strategies reflect different reactions to an issue central to modern capitalism, the separation of ownership and control. In short, owners aren’t managers, as they once were when businesses existed on a smaller scale. And when owners have to outsource running the company to executives, this leads to what economists call “the principal-agent problem,” which refers to the issues that come up when one person, group, or company—an “agent”—can make decisions that significantly affect another—a “principal.”
Having investors dominate, as Apple does, is a good way of handling one principal-agent problem: getting managers to do right by their owners. Rather than spending money on failed products (remember Google Plus?), Apple has to face the disciplining force of large investors. In a way that individual shareholders would have trouble doing, larger investors can act swiftly to put a check on managers who might pursue goals that enrich themselves, such as wasteful mergers, excessive executive compensation, or lush perks. And, after all, a company’s profits theoretically belong to investors, so why shouldn’t they decide how they are put to use?
Proponents of the managerial model embodied by Google worry about a different principal-agent problem. Rather than being concerned about managers ignoring investors, they are concerned that investors won’t serve the people who would benefit from the long-term success of the company. Those professional investors are at once the principals for the CEOs and the agents of many other shareholders. The hedge funds that pressured Apple are the dreaded “short-term” investors who are interested only in quick wins and don’t serve their longer-term beneficiaries, such as the pension funds that allocate capital to them in the first place. As investors, hedge funds are impatient, and, the argument goes, ruining the economy by shortening time horizons.
Who’s right? Which principal-agent problem is more vexing? Stock-market returns are one, albeit imperfect, way of answering this question, and since the initial developments, Google has far outperformed Apple. But that pattern is flipped if the time frame is restricted to the past year. So it won’t be known for many years to come whether Apple or Google has the sharper financial strategy.
More importantly, though, how do these strategies impact the lives of everyday people?
by Mihir A. Desai, The Atlantic | Read more:
Image: Tim Clayton / Corbis / Getty