Saturday, April 5, 2014

Free of One’s Melancholy Self

When Jordan Belfort—played by Leonardo DiCaprio in a truly masterful moment of full-body acting—wrenches himself from the steps of a country club into a white Lamborghini that he drives to his mansion, moviegoers, having already watched some two hours of Martin Scorsese’s The Wolf of Wall Street, are meant to be horrified. His addiction to quaaludes (and money, and cocaine, and sex, and giving motivational speeches) has rendered him not just a metaphorical monster but a literal one. He lunges at his pregnant wife and at his best friend (played by Jonah Hill, and equally high); he smashes everything in his path, both with his body and with the aforementioned Lamborghini. He gurgles and drools and mangles even monosyllabic words. He’s Frankenstein in a polo shirt.

But what of the movie’s glossier scenes? The one where Belfort and his paramour engage in oral sex while speeding down a highway? Where he and his friends and colleagues are on boats and planes and at pool parties totally free of the inhibitions that keep most of us adhering to the laws of common decency? What about the parts that look fun?

Everyone I spoke to post-Wolf (at least, everyone who liked it) rapturously praised Terence Winter’s absurd dialogue, DiCaprio’s magnetism, Scorsese’s eye for beautiful grotesquerie. Most of them also included a half-whispered, wide-eyed aside: What exactly are quaaludes, and where can we get some?

Often prescribed to nervous housewives, the quaalude was a sedative-hypnotic, something between a sleeping pill and a tranquilizer. First synthesized in India in 1951, ’ludes were by 1965 being manufactured by William H. Rorer Inc., a Pennsylvania pharmaceutical company. The name “quaalude” is both a play on “Maalox,” another Rorer product, and a synthesis of the phrase “quiet interlude”—a concept so simple and often so out of reach. Just whisper “quiet interlude” to yourself a few times. Seductive, no? It’s the pill in the “take a pill and lie down” directive thousands of Don Drapers gave their Bettys.

Of course, housewives have children who grow into curious teenagers, and medicine-cabinet explorations led those teenagers to discover a new use for the drug. Most sedatives are designed to take you away within fifteen minutes, but—as Belfort explains in a lengthy paean to ’ludes—fighting the high leads one into a state almost universally described as euphoria. “It was hard to imagine how anything could feel better than this. Any problems you had were immediately forgotten or irrelevant,” said one person who came of age when ’ludes were still floating around. “Nothing felt like being on quaaludes except being on quaaludes.”

William James thought the world was made up of two halves: the healthy-minded, or those who could “avert one’s attention from evil, and live simply in the light of good … quite free of one’s melancholy self,” and the sick-souled, or morbid-minded, “grubbing in rat-holes instead of living in the light; with their manufacture of fears, and preoccupation with every unwholesome kind of misery, there is something almost obscene about these children of wrath.” In the end, to be of morbid mind is, according to James, the better option—the harsh realities the healthy-minded cheerily repel “may after all be the best key to life’s significance, and possibly the only openers of our eyes to the deepest levels of truth.” Still, it’s not easy, being a sick soul. James is one of the first names to pop up in a search for “neurasthenia,” the nineteenth century’s catch-all term for nervousness, exhaustion, and overthinking.

Maybe William James needed a quiet interlude. Maybe something like a quaalude, something that makes you feel like yourself without any of the stress of actually being yourself, can be, for a healthy mind looking to spice up a Saturday night, something that enhances dancing and drinking and sex and honesty. But for someone like Jordan Belfort—whose desires beget more desires until he isn’t sure whether they’re real or whether he simply wants to want—quaaludes were probably more an occupational necessity than a recreational getaway.

by Angela Serratore, Paris Review |  Read more:
Image: The Quaaludes featuring the DT’s album cover, 2011.

Ryan Adams

Friday, April 4, 2014


Anouk Aimée and Jean-Louis Trintignant, A Man and a Woman (Claude Lelouch, 1966).

Invisible World, Invisible Saviors


Human comprehension of biology has always been distorted by our innate preoccupation with organisms that are roughly the same size as us, and scientists believed, until very recently, that organisms of our size are the most important ones for understanding life. Until the seventeenth century, the obvious impediment was our blindness to things smaller than fleas. The slight magnification of nature by Galileo’s friends at the Accademia dei Lincei—no more than a well-made hand lens can show us today—was nonetheless revelatory, and soon, with the evolution of the microscope, the universe of microorganisms was laid bare. Prospects for intellectual recalibration began with these inventions, but the microscopic didn’t bleed into popular consciousness until the link between germs and disease was established in the nineteenth century. During my lifetime we have learned that a far greater repository of biological diversity exists among the unicellular organisms and the viruses than we find throughout the animal and plant kingdoms. Yet even in the twenty-first century, the majority of professional scientists are preoccupied with macrobiology. This is a problem for science and for our species.

Ecologists have exemplified this tension between the macro and the micro of biology. For more than 60 years, ecologists have been interested in understanding how the biodiversity within different ecosystems is determined. Throughout the twentieth century, the number of plant and animal species was viewed as the primary metric of biodiversity. Investigators identified a number of variables that influenced species richness, including climate, the heterogeneity of habitats within the ecosystem, and the abundance of solar radiation. Rain forests support lots of species because their climate is relatively uniform throughout the year, the trees and shrubs create an abundance of distinct habitats, and the sun shines year round. The stability of the ecosystem is another significant consideration. Some tropical forests are so old that evolution has had time to birth many of their younger species. (...)

By adding microbes to the public discourse we may get closer to comprehending the real workings of the biosphere and the growing threat to their perpetuation. Interest in and indifference to conserving different species show an extraordinary bias in favor of animals with juvenile facial features, “warm” coloration, “endearing” behavior (fur helps too), and other characteristics that appeal to our innate and cultural preferences. The level of discrimination is surprising. Lion cubs have almost universal appeal, and it must take a lifetime of horrors to numb someone to the charms of a baby orangutan. But we make subconscious rankings of animals of every stripe. Among penguins, for example, we prefer species with bright yellow or red feathers. The charismatic megafauna are very distracting, and the popularization of microbial beauty will require a shift in thinking, a subtlety of news coverage, a new genre of wildlife documentary. The ethical responsibility lies with the nations that are engaged in modern biology. (...)

Knowledge of the gut microbiome changes the balance a little. Our highly bacterial nature seems significant to me in an emotional sense. I’m captivated by the revelation that my breakfast feeds the 100 trillion bacteria and archaea in my colon, and that they feed me with short-chain fatty acids. I’m thrilled by the fact that I am farmed by my microbes as much as I cultivate them, that bacteria modulate my physical and mental well-being, and that my microbes are programmed to eat me from the inside out as soon as my heart stops delivering oxygenated blood to my gut. My bacteria will die too, but only following a very fatty last supper. It is tempting to say that the gut microbiome lives and dies with us, but this distinction between organisms is inadequate: our lives are inseparable from the get-go. The more we learn about the theater of our peristaltic cylinder, the more we lose the illusion of control. We carry the microbes around and feed them; they deliver the power that allows us to do so.

Viewed with some philosophical introspection, microbial biology should stimulate a feeling of uneasiness about the meaning of our species and the importance of the individual. But there is boundless opportunity to feel elevated by this science. There are worse fates than to be our kind of farmed animal.

by Nicholas P. Money, Salon |  Read more:
Image: AP/Agriculture Department

We’re Creating a New Category of Being

One of the unexpected pleasures of modern parenthood is eavesdropping on your ten-year-old as she conducts existential conversations with an iPhone. “Who are you, Siri?” “What is the meaning of life?” Pride becomes bemusement, though, as the questions degenerate into abuse. “Siri, you’re stupid!” Siri’s unruffled response—“I’m sorry you feel that way”—provokes “Siri, you’re fired!”

I don’t think of my daughter as petulant. Friends tell me they’ve watched their children go through the same love, then hate, for digital personal assistants. Siri’s repertoire of bon mots is limited, and she can be slow to understand seemingly straightforward commands, such as, “Send e-mail to Hannah.” (“Uh oh, something’s gone wrong.”) Worse, from a child’s point of view, she rebuffs stabs at intimacy: Ask her if she loves you, and after deflecting the question a few times (“Awk-ward,” “Do I what?”) she admits: “I’m not capable of love.” Earlier this year, a mother wrote to Philip Galanes, the “Social Q’s” columnist for The New York Times, asking him what to do when her ten-year-old son called Siri a “stupid idiot.” Stop him, said Galanes; the vituperation of virtual pals amounts to a “dry run” for hurling insults at people. His answer struck me as clueless: Children yell at toys all the time, whether talking or dumb. It’s how they work through their aggression.

Siri will get smarter, though, and more companionable, because conversational agents are almost certain to become the user interface of the future. They’re already close to ubiquitous. Google has had its own digital personal assistant, Google Voice Search, since 2008. Siri will soon be available in Ford, Toyota, and General Motors cars. As this magazine goes to press, Microsoft is unveiling its own version of Siri, code-named Cortana (the brilliant, babelicious hologram in Microsoft’s Halo video game). Voice activation is the easiest method of controlling the smart devices—refrigerators, toilets, lights, elevators, robotic servants—that will soon populate our environment. All the more reason, then, to understand why children can’t stop trying to make friends with these voices. Think of our children as less inhibited avatars of ourselves. It is through them that we’ll learn what it will be like to live in a world crowded with “friends” like Siri.

The wonderment is that Siri has any emotional pull at all, given her many limitations. Some of her appeal can be chalked up to novelty. But she has another, more fundamental attraction: her voice. Voice is a more visceral medium than text. A child first comes to know his mother through her voice, which he recognizes as distinctively hers while still in the womb. Moreover, the disembodied voice unleashes fantasies and projections that the embodied voice somehow keeps in check. That’s why Freud sat psychoanalysts behind their patients. It’s also why phone sex can be so intense.

by Judith Shulevitz, New Republic |  Read more:
Image: uncredited

How Japan Copied American Culture and Made it Better

A couple of years ago I found myself in a basement bar in Yoyogi, a central precinct of Tokyo, drinking cold Sapporo beers with big foamy heads while the salarymen next to me raised their glasses to a TV displaying a fuzzy, obviously bootlegged video of an old Bob Dylan concert. The name of the bar, My Back Pages, is the title of a Dylan song. Dylan is, in fact, the bar’s reason for being: Japanese fans come here to watch his concert videos, listen to his tapes and relive the ’60s in America, a time and place almost none of them witnessed firsthand. As I heard yet another version of “Mr. Tambourine Man” roaring over the speakers, with some drunk Japanese fans now singing along, I thought how strange this phenomenon was.

The American presence in Japan now extends far beyond the fast-food franchises, chain stores and pop-culture offerings that are ubiquitous the world over. A long-standing obsession with things American has led not just to a bigger and better market for blockbuster movies or Budweiser, but also to some very rarefied versions of America to be found in today’s Japan. It has also made the exchange of Americana a two-way street: Earlier this year, Osaka-based Suntory, a Japanese conglomerate best known for its whiskey holdings, announced that it was buying Beam Inc., thus acquiring the iconic American bourbon brands Jim Beam and Maker’s Mark.

In Japan, the ability to perfectly imitate—and even improve upon—the cocktails, cuisine and couture of foreign cultures isn’t limited to American products; there are spectacular French chefs and masterful Neapolitan pizzaioli who are actually Japanese. There’s something about the perspective of the Japanese that allows them to home in on the essential elements of foreign cultures and then perfectly recreate them at home. “What we see in Japan, in a wide range of pursuits, is a focus on mastery,” says Sarah Kovner, who teaches Japanese history at the University of Florida. “It’s true in traditional arts, it’s true of young people who dress up in Harajuku, it’s true of restaurateurs all over Japan.”

It’s easy to dismiss Japanese re-creations of foreign cultures as faddish and derivative—just other versions of the way that, for example, the new American hipster ideal of Brooklyn is clumsily copied everywhere from Paris to Bangkok. But the best examples of Japanese Americana don’t just replicate our culture. They strike out, on their own, into levels of appreciation and refinement rarely found in America. They give us an opportunity to consider our culture as refracted through a foreign and clarifying prism.

by Tom Downey, Smithsonian |  Read more:
Image: Raymond Patrick

Elizabeth Warren - Minimum Wage, Corporate Welfare

Automated Ethics

For the French philosopher Paul Virilio, technological development is inextricable from the idea of the accident. As he put it, each accident is ‘an inverted miracle… When you invent the ship, you also invent the shipwreck; when you invent the plane, you also invent the plane crash; and when you invent electricity, you invent electrocution.’ Accidents mark the spots where anticipation met reality and came off worse. Yet each is also a spark of secular revelation: an opportunity to exceed the past, to make tomorrow’s worst better than today’s, and on occasion to promise ‘never again’.

This, at least, is the plan. ‘Never again’ is a tricky promise to keep: in the long term, it’s not a question of if things go wrong, but when. The ethical concerns of innovation thus tend to focus on harm’s minimisation and mitigation, not the absence of harm altogether. A double-hulled steamship poses less risk per passenger mile than a medieval trading vessel; a well-run factory is safer than a sweatshop. Plane crashes might cause many fatalities, but refinements such as a checklist, computer and co-pilot insure against all but the wildest of unforeseen circumstances.

Similar refinements are the subject of one of the liveliest debates in practical ethics today: the case for self-driving cars. Modern motor vehicles are safer and more reliable than they have ever been – yet more than 1 million people are killed in car accidents around the world each year, and more than 50 million are injured. Why? Largely because one perilous element in the mechanics of driving remains unperfected by progress: the human being.

Enter the cutting edge of machine mitigation. Back in August 2012, Google announced that it had achieved 300,000 accident-free miles testing its self-driving cars. The technology remains some distance from the marketplace, but the statistical case for automated vehicles is compelling. Even when they’re not causing injury, human-controlled cars are often driven inefficiently, ineptly, antisocially, or in other ways additive to the sum of human misery.

What, though, about more local contexts? If your vehicle encounters a busload of schoolchildren skidding across the road, do you want to live in a world where it automatically swerves, at a speed you could never have managed, saving them but putting your life at risk? Or would you prefer to live in a world where it doesn’t swerve but keeps you safe? Put like this, neither seems a tempting option. Yet designing self-sufficient systems demands that we resolve such questions. And these possibilities take us in turn towards one of the hoariest thought-experiments in modern philosophy: the trolley problem.

In its simplest form, coined in 1967 by the English philosopher Philippa Foot, the trolley problem imagines the driver of a runaway tram heading down a track. Five men are working on this track, and are all certain to die when the trolley reaches them. Fortunately, it’s possible for the driver to switch the trolley’s path to an alternative spur of track, saving all five. Unfortunately, one man is working on this spur, and will be killed if the switch is made.

In this original version, it’s not hard to say what should be done: the driver should make the switch and save five lives, even at the cost of one. If we were to replace the driver with a computer program, creating a fully automated trolley, we would also instruct it to pick the lesser evil: to kill fewer people in any similar situation. Indeed, we might actively prefer a program to be making such a decision, as it would always act according to this logic while a human might panic and do otherwise.

The trolley problem becomes more interesting in its plentiful variations. In a 1985 article, the MIT philosopher Judith Jarvis Thomson offered this: instead of driving a runaway trolley, you are watching it from a bridge as it hurtles towards five helpless people. Using a heavy weight is the only way to stop it and, as it happens, you are standing next to a large man whose bulk (unlike yours) is enough to achieve this diversion. Should you push this man off the bridge, killing him, in order to save those five lives?

A computer program similar to the one driving our first tram would have no problem resolving this. Indeed, it would see no distinction between the cases. Where there are no alternatives, one life should be sacrificed to save five; two lives to save three; and so on. The fat man should always die – a form of ethical reasoning called consequentialism, meaning conduct should be judged in terms of its consequences.

When presented with Thomson’s trolley problem, however, many people feel that it would be wrong to push the fat man to his death. Premeditated murder is inherently wrong, they argue, no matter what its results – a form of ethical reasoning called deontology, meaning conduct should be judged by the nature of an action rather than by its consequences.
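To make the contrast concrete, here is a minimal sketch of both decision rules in Python. The scenario names, casualty counts, and the blanket list of forbidden actions are illustrative assumptions, not anything drawn from a real autonomous-vehicle system; the point is only that the consequentialist rule reduces to minimising a number, while the deontological rule filters out certain actions before counting anything.

```python
# A minimal, illustrative sketch: two ways an automated agent
# might resolve a trolley-style choice among actions with known costs.

def consequentialist_choice(options):
    """Pick the action with the fewest expected deaths, nothing else."""
    return min(options, key=lambda action: options[action])

# Foot's driver case: switch to the spur (1 dies) or stay on course (5 die).
driver_case = {"switch_to_spur": 1, "stay_on_course": 5}

# Thomson's footbridge case: push the man (1 dies) or do nothing (5 die).
bridge_case = {"push_the_man": 1, "do_nothing": 5}

print(consequentialist_choice(driver_case))  # -> switch_to_spur
print(consequentialist_choice(bridge_case))  # -> push_the_man (no distinction)

# A deontological rule forbids certain actions in themselves, whatever
# the arithmetic; here, killing someone as a means to an end.
FORBIDDEN = {"push_the_man"}

def deontological_choice(options):
    """Pick the least-bad action among those not inherently forbidden."""
    permitted = {a: cost for a, cost in options.items() if a not in FORBIDDEN}
    return min(permitted, key=lambda action: permitted[action])

print(deontological_choice(bridge_case))  # -> do_nothing
```

The two programs agree on Foot’s original case and part ways only on Thomson’s footbridge.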

The friction between deontology and consequentialism is at the heart of every version of the trolley problem. Yet perhaps the problem’s most unsettling implication is not the existence of this friction, but the fact that – depending on how the story is told – people tend to hold wildly different opinions about what is right and wrong.

by Tom Chatfield, Aeon | Read more:
Image: James Bridle

Thursday, April 3, 2014

Robert Earl Keen

Literacy Is Knowledge


Math is relentlessly hierarchical—you can’t understand multiplication, for example, if you don’t understand addition. Reading is mercilessly cumulative. Virtually everything a child sees and hears, in and out of school, contributes to his vocabulary and language proficiency. A child growing up in a book-filled home with articulate, educated parents who fill his early years with reading, travel, museum visits, and other forms of enrichment arrives at school with enormous advantages in knowledge and vocabulary. When schools fail to address gaps in knowledge and language, the deficits widen—a phenomenon that cognitive scientist Keith Stanovich calls the “Matthew Effect,” after a passage in the Gospel of Matthew: “For unto every one that hath shall be given, and he shall have abundance: but from him that hath not shall be taken away even that which he hath.” The nature of knowledge and vocabulary acquisition all but assures that children raised in language-rich homes gain in reading comprehension, while the language-poor fall further behind (see “A Wealth of Words,” Winter 2013). “The mainspring of [reading] comprehension is prior knowledge—the stuff readers already know that enables them to create understanding as they read,” explains Daniel Willingham, a cognitive scientist at the University of Virginia.

To make matters worse, most reading curricula have focused on developing generalized, all-purpose reading-comprehension “skills” uncoupled from subject-specific knowledge—reducing a complex cognitive process to a collection of generic “reading strategies” to be applied to any book or bit of text that a student might encounter. Attempts to teach reading comprehension as knowledge-neutral put an enormous premium on student engagement. For teachers, reading instruction can often feel more like cheerleading: sell kids on the magic of books, get them to read a lot, and—voilà!—they will emerge as verbally adroit adults with a lifelong love of reading. As generations of results show, this approach doesn’t work.  (...)

Reading comprehension, like critical thinking and problem solving, is what psychologists call “domain-specific”: you need to know something about a topic to be able to think about it. Faced with a text passage about the customs of New Amsterdam, the student familiar with the topic may breeze through with relative ease. For the student who has no idea who the Dutch were, or is unfamiliar with early New York history or has never heard the word “custom,” the passage is a verbal minefield. To shift metaphors, a piece of text is like a tower of wooden blocks, with each block a vocabulary word or a piece of background knowledge. Pull out two or three blocks, and the tower can still stand. Pull out too many, and it collapses.

Imagine taking a child to his first baseball game. If you know baseball, you will easily explain what’s happening. You draw the child’s attention to the most important actions on the field, reflexively tailoring your explanation to the child’s level of understanding. If the child knows nothing about baseball, you might explain the basics: what the pitchers and batters are doing. Balls and strikes. Scoring a run when a player makes it all the way around the bases without being called out. You’d explain what an “out” is. If the child knows the game or plays Little League, you might instead draw his attention to game strategy. Would a bunt or a stolen-base attempt be the best move at a crucial moment? You might point out when the infielders move in, hoping for a double play.

Now imagine attending a cricket match and doing the same thing, assuming that you know nothing about the game. Your knowledge of baseball doesn’t transfer to cricket, though both games feature balls, bats, and runs. “Sports comprehension strategies,” if such existed, would be of no use. Your ability to make sense of what’s happening in front of you and to explain it to a child depends on your knowledge of the specific game—not your ability to connect what you notice to other games that you understand. The same is true of reading. Even if you aced the verbal portion of your SATs, you will find yourself in situations where you are not an excellent reader. You might struggle to make sense of a contract, say, or a new product warranty. Your tech-savvy teenage daughter might have an easier time understanding the instructions for upgrading a computer operating system. You didn’t suddenly become a poor reader in these situations; you’re merely reading out of your depth.

Reading comprehension, then, is not a skill that you teach but a condition that you create. Teachers foster that condition by exposing children to the broadest possible knowledge of the world outside their personal experience. As Daniel Willingham aptly titled one of his instructional YouTube videos a few years ago, “Teaching content is teaching reading.”

The specific body of knowledge that students need for broad reading competence is open to debate, but a useful guideline is to emphasize the common body of knowledge—from basic knowledge of history and science to works of art and literature—that most literate Americans know, as reflected in their speech and writing. This has been the precise aim of E. D. Hirsch’s Core Knowledge movement. Hirsch’s critics have often accused him of attempting to impose a rigid canon, but Core Knowledge is better understood as an attempt to curate and impart the basic knowledge of history, science, and the arts that undergirds literate speech and writing. Regardless of whether schools adopt the Core Knowledge approach or develop their own catalog of essential knowledge, knowledge acquisition belongs at the heart of literacy instruction.

by Robert Pondiscio, City Journal |  Read more:
Image: Henri Matisse’s portrait of his daughter reading

Fire TV, and Amazon's Commitment to Consumption

Amazon has unveiled a new device for your television. It’s called Amazon Fire TV. In the industry, it’s known as a set-top box. It’s black, about the size of a ham sandwich, and extremely powerful. It has “over 3x the processing power of Apple TV, Chromecast, or Roku 3,” according to Amazon’s press release, “plus 4x the memory of Apple TV, Chromecast, or Roku 3 for exceptional speed and fluidity.” Your Fire TV “arrives pre-registered,” which means that after you plug it into your HDTV and connect it to your WiFi, you are immediately ready to consume hundreds of thousands of movies, TV episodes, songs, and video games in 1080p HD video and Dolby Digital Plus surround sound, without ever getting up from your chair.

Your Fire TV is very fast. Why should you have to wait a full ten seconds for “Expendables 2” to buffer before it plays? And you don’t need to search for “Our Idiot Brother” by typing laboriously on an alphabet grid using your remote control. Just hold the Fire TV remote control, which is about the size of a Snickers bar, up to your mouth and say, “Our Idiot Brother,” and Amazon’s voice search, which is “optimized to understand Amazon’s video, app, and game catalog,” will instantly locate it.

Your Fire TV has an Advanced Streaming and Prediction feature that will record data from your Watchlist and personalized recommendations, deduce your preference for soft-core teen comedy flicks, and automatically buffer “Virgin High” for playback “before you even hit play,” so that you can watch it the instant you admit to yourself that you want to, as you inevitably will. Like Amazon’s patented anticipatory-shipping technology—which, one day, might use your shopping history to place products on trucks near your location before you’ve even thought about buying them—Advanced Streaming and Prediction, or A.S.A.P., knows more about your habits and desires than you do. (...)

Convenience, selection, price. As James McQuivey, an analyst with Forrester, told the Times, “Amazon has a vested interest in making sure it is present at every moment of possible consumption, which is all the time. It wants to get into the television screen and start to build a relationship.” Streaming devices are revolutionizing television just as, six years ago, the Kindle revolutionized books. Just as the Kindle is designed to be a portal that brings readers into a permanent relationship with the Amazon universe, Fire TV will do the same for television viewers, who, according to researchers, tend to be binge consumers, with even shorter attention spans and more compulsive shopping habits than book buyers, making them the ideal customers for “Earth’s most customer-centric company.”

by George Packer, New Yorker |  Read more:
Image: Diane Bondareff/Invision for Amazon/AP

The Rite of Spring. Piece for 12 dancers, choreographer Angelin Preljocaj
via:

Lauren Marsolier, Transition part 3
via:

Frida Kahlo and Chavela Vargas (1950) photographed by Tina Modotti.
via: