Wednesday, November 28, 2012
Moral Machines
That moment will be significant not just because it will signal the end of one more human niche, but because it will signal the beginning of another: the era in which it will no longer be optional for machines to have ethical systems. Your car is speeding along a bridge at fifty miles per hour when an errant school bus carrying forty innocent children crosses its path. Should your car swerve, possibly risking the life of its owner (you), in order to save the children, or keep going, putting all forty kids at risk? If the decision must be made in milliseconds, the computer will have to make the call. (...)
With or without robotic soldiers, what we really need is a sound way to teach our machines to be ethical. The trouble is that we have almost no idea how to do that. Many discussions start with three famous laws from Isaac Asimov:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given to it by human beings, except where such orders would conflict with the first law.
A robot must protect its own existence as long as such protection does not conflict with the first or second laws.
The trouble with these seemingly sound laws is threefold. The first is technical: at least for now, we couldn’t program a machine with Asimov’s laws if we tried. As yet, we haven’t figured out how to build a machine that fully comprehends the concept of “dinner”, much less something as abstract as “harm” or “protection.” Likewise, we are a long way from constructing a robot that can fully anticipate the consequences of any of its actions (or inactions). For now, a robot is lucky if it can predict what would happen if it dropped a glass of water. A.I. has a long way to go before laws as abstract as Asimov’s could realistically be encoded in software.
Second, even if we could figure out how to do the programming, the rules might be too restrictive. The first and second laws, for example, preclude robots from ever harming other humans, but most people would make exceptions for robots that could eliminate potential human targets that were a clear and present danger to others. Only a true ideologue would want to stop a robotic sniper from taking down a hostage-taker or Columbine killer.
Meanwhile, Asimov’s laws themselves might not be fair—to robots. As the computer scientist Kevin Korb has pointed out, Asimov’s laws effectively treat robots like slaves. Perhaps that is acceptable for now, but it could become morally questionable (and more difficult to enforce) as machines become smarter and possibly more self-aware.
The laws of Asimov are hardly the only approach to machine ethics, but many others are equally fraught. An all-powerful computer that was programmed to maximize human pleasure, for example, might consign us all to an intravenous dopamine drip; an automated car that aimed to minimize harm would never leave the driveway. Almost any easy solution that one might imagine leads to some variation or another on the Sorcerer’s Apprentice, a genie that’s given us what we’ve asked for, rather than what we truly desire. A tiny cadre of brave-hearted souls at Oxford, Yale, and the Berkeley, California, Singularity Institute are working on these problems, but the annual amount of money being spent on developing machine morality is tiny.
by Gary Marcus, New Yorker | Read more:
Photograph by Justin Sullivan/Getty.
How Does Nitroglycerin Stop Heart Attacks?
Dear Cecil:
Recently a friend of the family had a heart attack. While he was in the hospital, they gave him nitroglycerin pills to stop the attack and ease his chest pains! I consider myself as having a rational mind, but the ingestion of explosives (no matter how small the amount) does not on the surface seem to be a great way to promote cardiovascular health! In fact, it would seem that nitro might have caused a few heart attacks (especially around the Fourth of July). How does nitroglycerin stop heart attacks?
— Steve S., Salt Lake City
People nowadays are such wimps. If you're looking for strong medicine, how can you do better than a high explosive? The nitroglycerin in the pills, patches, and sprays that heart patients use for angina (chest pain) is in fact the same stuff you find in dynamite--the residue the drug leaves on patients' skin and clothing is often enough to set off airport bomb-sniffing machines. The medicinal dose is tiny and diluted with inert material, so it's completely nonexplosive; even so, nitroglycerin is one medicine I'd hesitate to shake before use.
I'm kidding, of course. Still, straight nitroglycerin (an oily yellow liquid) isn't something you'd want to take a swig of--even if we ignore the fact that it's poisonous, the merest jolt will detonate it. The man who discovered it in 1846, Italian chemist Ascanio Sobrero, had his face scarred by a laboratory explosion. The Swedish inventor of dynamite, Alfred Nobel, made his pile after figuring out in the 1860s that mixing nitro with diatomaceous earth would produce a relatively stable explosive paste that was much safer to use.
Laborers in Nobel's factories were the first to feel nitroglycerin's therapeutic effects. When they arrived at work each morning, those with heart problems found that their chest pains subsided (though almost everybody on the job noticed that sometimes their heads hurt like hell). Turned out the nitroglycerin vapor in the factory air was acting as a vasodilator, increasing blood flow both to the heart (which needed it, at least in the case of the angina sufferers) and to the head (which didn't).
Nitroglycerin pills have been a standard treatment for angina and heart attack symptoms since 1879--doctors prescribed them for Nobel himself not long before his death in 1896 (he refused to take them--couldn't brook the headaches). But more than a century passed before scientists understood how they worked. In the 1970s, researchers established that the body converts nitroglycerin into nitric oxide, and in the '80s they demonstrated that nitric oxide is a messenger molecule that tells the smooth muscles surrounding blood vessels to relax. (A heart attack basically means that not enough blood is reaching your cardiac muscles.) In 1998 three scientists who'd been instrumental in unlocking the mystery of nitroglycerin were collectively awarded--I'm telling you, this story has irony out the wazoo--the Nobel Prize in medicine.
End the Charade: Let Athletes Major in Sports
That collegiate sports is tainted by chicanery and a host of moral dilemmas is nothing new. Rarely does a week pass without some embarrassing deviance being uncovered and scrutinized by the ever-vigilant news media. A steady stream of scandalous disclosures depicting illicit communications and relationships among athletes, agents, and coaches, not to mention forced resignations, expulsions, and sanctions, reveals deep dysfunction in college athletics.
Countless published articles, letters to the editor, and essays have railed against those ethical violations for decades in well-intentioned efforts to provide solutions. A recent investigation by The Chronicle's Brad Wolverton revealed a host of quick, cheap, and easy academic credits available to athletes in danger of losing their eligibility to play.
What bothers me as a retired academic with decades of service—and as an avid college-sports fan to boot—is an issue that may be integral to a good portion of such travesties.
Why do we impose upon young, talented, and serious-minded high-school seniors the imperative of selecting an academic major that is, more often than not, completely irrelevant to, or at least inconsistent with, their heartfelt desires and true career objectives: to be professional athletes?
Acquisition of athletic skills is what significant numbers of NCAA Division I student athletes want to pursue. And this is undeniably why they've gone to their campus of choice. Their confessions about their primary interest are readily proclaimed and by no means denied or repressed. These athletes are as honest in recognizing and divulging their aspiration as is the student who declares a goal of performing some day at the Metropolitan Opera or on the Broadway stage. Student athletes wish to be professional entertainers. This is their heart's desire.
Their family members, friends, and high-school coaches acknowledge and support that goal, so why not let them step out of the closet and declare their true aspiration—to study football, basketball, or baseball? Why not legitimize such an academic specialty in the same manner that other professional performance careers, such as dance, voice, theater, and music, are recognized and supported? Why treat preparation for professional sports careers differently? Why not establish a well-planned, defensible, educationally sound curriculum that correlates with a career at the elite level of sports?
by David Pargman, Chronicle of Higher Education | Read more:
Illustration: Tim Foley
Can a Jellyfish Unlock the Secret of Immortality?
Sommer was conducting research on hydrozoans, small invertebrates that, depending on their stage in the life cycle, resemble either a jellyfish or a soft coral. Every morning, Sommer went snorkeling in the turquoise water off the cliffs of Portofino. He scanned the ocean floor for hydrozoans, gathering them with plankton nets. Among the hundreds of organisms he collected was a tiny, relatively obscure species known to biologists as Turritopsis dohrnii. Today it is more commonly known as the immortal jellyfish.
Sommer kept his hydrozoans in petri dishes and observed their reproduction habits. After several days he noticed that his Turritopsis dohrnii was behaving in a very peculiar manner, for which he could hypothesize no earthly explanation. Plainly speaking, it refused to die. It appeared to age in reverse, growing younger and younger until it reached its earliest stage of development, at which point it began its life cycle anew.
Sommer was baffled by this development but didn’t immediately grasp its significance. (It was nearly a decade before the word “immortal” was first used to describe the species.) But several biologists in Genoa, fascinated by Sommer’s finding, continued to study the species, and in 1996 they published a paper called “Reversing the Life Cycle.” The scientists described how the species — at any stage of its development — could transform itself back to a polyp, the organism’s earliest stage of life, “thus escaping death and achieving potential immortality.” This finding appeared to debunk the most fundamental law of the natural world — you are born, and then you die. (...)
Yet the publication of “Reversing the Life Cycle” barely registered outside the academic world. You might expect that, having learned of the existence of immortal life, man would dedicate colossal resources to learning how the immortal jellyfish performs its trick. You might expect that biotech multinationals would vie to copyright its genome; that a vast coalition of research scientists would seek to determine the mechanisms by which its cells aged in reverse; that pharmaceutical firms would try to appropriate its lessons for the purposes of human medicine; that governments would broker international accords to govern the future use of rejuvenating technology. But none of this happened. (...)
In fact there is just one scientist who has been culturing Turritopsis polyps in his lab consistently. He works alone, without major financing or a staff, in a cramped office in Shirahama, a sleepy beach town in Wakayama Prefecture, Japan, four hours south of Kyoto. The scientist’s name is Shin Kubota, and he is, for the time being, our best chance for understanding this unique strand of biological immortality.
Many marine biologists are reluctant to make such grand claims about Turritopsis’ promise for human medicine. “That’s a question for journalists,” Boero said (to a journalist) in 2009. “I prefer to focus on a slightly more rational form of science.”
Kubota, however, has no such compunction. “Turritopsis application for human beings is the most wonderful dream of mankind,” he told me the first time I called him. “Once we determine how the jellyfish rejuvenates itself, we should achieve very great things. My opinion is that we will evolve and become immortal ourselves.”
I decided I better book a ticket to Japan.
by Nathaniel Rich, NY Times | Read more:
Photo: Takashi Murai for The New York Times
Tuesday, November 27, 2012
End of the Line in the ICU
Last year I graduated from nursing school and began working in a specialized intensive care unit in a large academic hospital. During an orientation class a nurse who has worked on the unit for six years gave a presentation on the various kinds of strokes. Noting the difference between supratentorial and infratentorial strokes—the former being more survivable and the latter having a more severe effect on the body’s basic functions such as breathing—she said that if she were going to have a stroke, she knew which type she would prefer: “I would want to have an infratentorial stroke. Because I don’t even want to make it to the hospital.”
She wasn’t kidding, and after a couple months of work, I understood why. I also understood the nurses who voice their advocacy of natural death—and their fear of ending up like some of our patients—in regular discussions of plans for DNR tattoos. For example: “I am going to tattoo DO NOT RESUSCITATE across my chest. No, across my face, because they won’t take my gown off. I am going to tattoo DO NOT INTUBATE above my lip.”
Another nurse says that instead of DNR, she’s going to be DNA, Do Not Admit.
We know that such plainly stated wishes would never be honored. Medical personnel are bound by legal documents and orders, and the DNR tattoo is mostly a very dark joke. But the oldest nurse on my unit has instructed her children never to call 911 for her, and readily discusses her suicide pact with her husband.
You will not find a group less in favor of automatically aggressive, invasive medical care than intensive care nurses, because we see the pointless suffering it often causes in patients and families. Intensive care is at best a temporary detour during which a patient’s instability is monitored, analyzed, and corrected, but it is at worst a high tech torture chamber, a taste of hell during a person’s last days on earth.
I cared for a woman in her 90s whose family had considered making her a DNR, but decided against it. After a relatively minor stroke that left her awake but not lucid, Helen* went into kidney failure and started on continuous hemodialysis. Because she kept pulling out her IV lines and the feeding tube we had dropped into her nose and down to her stomach, we put boxing glove-like pillow mitts on her hands. When I approached with her medicine, Helen batted at me with her boxing gloves, saying, “NO. STOP.” She frowned, shook her head and then her fist at me. Her wishes were pretty clear, but technically she was “confused,” because when asked her name, the date, and her location, she failed to answer.
During the next shift, Helen’s heart stopped beating. But despite talking with the doctors about her advanced age and the poor state of her health, her family had nonetheless decided that we should “do everything we can” for her, and so Helen died in a frenzy of nurses pumping her with vasopressors and doing chest compressions, probably cracking several ribs.
That was a situation in which a patient’s family made a decision that probably caused Helen to suffer and did not help her. But there are circumstances where it is the healthcare team that chooses to push on with intensive interventions. And there are circumstances where bureaucracy, miscommunication, and the relatively low priority, among very busy physicians, of making decisions about how far to pursue medical care cause patients to linger in the ICU weeks past the point when any medical professional thought meaningful recovery was possible.
by Kristen McConnell, The Health Care Blog | Read more:
Land of the Seven Moles
When it comes to eating, I’m not wildly adventurous. Sometimes I think that I’m too cautious. Looking back at those moments when I wasn’t setting the sort of example a parent should set, I can hear Abigail saying, while the two of us were perusing the menu at a restaurant in Cuzco, Peru, “I guess you’re going to wimp out on the guinea pig.” I haven’t felt inspired by those who talk about having downed a great variety of gruesome foodstuffs. Eating, say, iguana spleen strikes me as sort of like bungee jumping: the point is not to do it but to have done it. When I’m asked about my willingness to eat the ostensibly inedible, I usually tell the story of finding on the menu of a restaurant in Hong Kong an item listed as double-boiled deer penis. “I thought about ordering it,” I always say, “but I was afraid when they brought it to the table I’d take one look at it and say, ‘Maybe you could take it back and have him boil it one more time.’ ”
On the other hand, I usually like to try the local specialty. In Ecuador, I eventually did eat guinea pig. Given my experience with nutria in Louisiana some years before, in fact, I suppose that, if I hadn’t been raised to prize modesty, I could describe myself as a man with relatively broad experience in rodent consumption. As I studied the mounds of various sizes of grasshoppers in the markets, though, I found myself with a question similar to the one that goes through my mind when I see someone in Chinatown reach into a barrel of live frogs and pull one out for inspection: What, exactly, does one look for in a grasshopper? I thought I might ease into grasshopper-eating, following the general rule that anything is edible if it’s chopped up finely enough. That’s apparently the route my granddaughters had taken. Both of the girls had sampled grasshopper, although neither of them seemed keen on making a habit of it. Given Rebecca’s reputation as someone with an almost limitless appetite for corn tortillas—a woman across the road from the house Abigail and Brian had rented makes three hundred a day on a traditional earthenware griddle called a comal, and it’s clear that, left unchecked, Rebecca could put a considerable dent in a day’s inventory—Isabelle, who’s ten, had a simple explanation for how her little sister, who’s only seven, happened to consume grasshoppers, mixed with some other things: “Rebecca will eat anything that’s wrapped in a tortilla.”
by Calvin Trillin, New Yorker | Read more:
Photograph by TrujilloPaumier
Monday, November 26, 2012
Positive Thinking is for Suckers!
The man who claims that he is about to tell me the secret of human happiness is eighty-three years old, with an alarming orange tan that does nothing to enhance his credibility. It is just after eight o’clock on a December morning, in a darkened basketball stadium on the outskirts of San Antonio, and — according to the orange man — I am about to learn “the one thing that will change your life forever.” I’m skeptical, but not as much as I might normally be, because I am only one of more than fifteen thousand people at Get Motivated!, America’s “most popular business motivational seminar,” and the enthusiasm of my fellow audience members is starting to become infectious.
“So you wanna know?” asks the octogenarian, who is Dr. Robert H. Schuller, veteran self-help guru, author of more than thirty-five books on the power of positive thinking, and, in his other job, the founding pastor of the largest church in the United States constructed entirely out of glass. The crowd roars its assent. Easily embarrassed British people like me do not, generally speaking, roar our assent at motivational seminars in Texas basketball stadiums, but the atmosphere partially overpowers my reticence. I roar quietly.
“Here it is, then,” Dr. Schuller declares, stiffly pacing the stage, which is decorated with two enormous banners reading “MOTIVATE!” and “SUCCEED!,” seventeen American flags, and a large number of potted plants. “Here’s the thing that will change your life forever.” Then he barks a single syllable — “Cut!” — and leaves a dramatic pause before completing his sentence: ‘… the word ‘impossible’ out of your life! Cut it out! Cut it out forever!”
The audience combusts. I can’t help feeling underwhelmed, but then I probably shouldn’t have expected anything different from Get Motivated!, an event at which the sheer power of positivity counts for everything. “You are the master of your destiny!” Schuller goes on. “Think big, and dream bigger! Resurrect your abandoned hope! … Positive thinking works in every area of life!”
The logic of Schuller’s philosophy, which is the doctrine of positive thinking at its most distilled, isn’t exactly complex: decide to think happy and successful thoughts — banish the spectres of sadness and failure — and happiness and success will follow. It could be argued that not every speaker listed in the glossy brochure for today’s seminar provides uncontroversial evidence in support of this outlook: the keynote speech is to be delivered, in a few hours’ time, by George W. Bush, a president far from universally viewed as successful. But if you voiced this objection to Dr. Schuller, he would probably dismiss it as “negativity thinking.” To criticize the power of positivity is to demonstrate that you haven’t really grasped it at all. If you had, you would stop grumbling about such things, and indeed about anything else.
The organisers of Get Motivated! describe it as a motivational seminar, but that phrase — with its suggestion of minor-league life coaches giving speeches in dingy hotel ballrooms — hardly captures the scale and grandiosity of the thing. Staged roughly once a month, in cities across North America, it sits at the summit of the global industry of positive thinking, and boasts an impressive roster of celebrity speakers: Mikhail Gorbachev and Rudy Giuliani are among the regulars, as are General Colin Powell and, somewhat incongruously, William Shatner. Should it ever occur to you that a formerly prominent figure in world politics (or William Shatner) has been keeping an inexplicably low profile in recent months, there’s a good chance you’ll find him or her at Get Motivated!, preaching the gospel of optimism.
As befits such celebrity, there’s nothing dingy about the staging, either, which features banks of swooping spotlights, sound systems pumping out rock anthems, and expensive pyrotechnics; each speaker is welcomed to the stage amid showers of sparks and puffs of smoke. These special effects help propel the audience to ever higher altitudes of excitement, though it also doesn’t hurt that for many of them, a trip to Get Motivated! means an extra day off work: many employers classify it as job training. Even the United States military, where “training” usually means something more rigorous, endorses this view; in San Antonio, scores of the stadium’s seats are occupied by uniformed soldiers from the local Army base.
Technically, I am here undercover. Tamara Lowe, the self-described “world’s No. 1 female motivational speaker,” who along with her husband runs the company behind Get Motivated!, has been accused of denying access to reporters, a tribe notoriously prone to negativity thinking. Lowe denies the charge, but out of caution, I’ve been describing myself as a “self-employed businessman” — a tactic, I’m realizing too late, that only makes me sound shifty. I needn’t have bothered with subterfuge anyway, it turns out, since I’m much too far away from the stage for the security staff to be able to see me scribbling in my notebook. My seat is described on my ticket as “premier seating,” but this turns out to be another case of positivity run amok: at Get Motivated!, there is only “premier seating,” “executive seating,” and “VIP seating.”
In reality, mine is up in the nosebleed section; it is a hard plastic perch, painful on the buttocks. But I am grateful for it, because it means that by chance I’m seated next to a man who, as far as I can make out, is one of the few cynics in the arena — an amiable, large-limbed park ranger named Jim, who sporadically leaps to his feet to shout “I’m so motivated!” in tones laden with sarcasm.
He explains that he was required to attend by his employer, the United States National Park Service, though when I ask why that organization might wish its rangers to use paid work time in this fashion, he cheerily concedes that he has “no fucking clue.” Dr. Schuller’s sermon, meanwhile, is gathering pace. “When I was a child, it was impossible for a man ever to walk on the moon, impossible to cut out a human heart and put it in another man’s chest … the word ‘impossible’ has proven to be a very stupid word!” He does not spend much time marshaling further evidence for his assertion that failure is optional: it’s clear that Schuller, the author of “Move Ahead with Possibility Thinking” and “Tough Times Never Last, but Tough People Do!,” vastly prefers inspiration to argument. But in any case, he is really only a warm-up man for the day’s main speakers, and within fifteen minutes he is striding away, to adulation and fireworks, fists clenched victoriously up at the audience, the picture of positive-thinking success.
It is only months later, back at my home in New York, reading the headlines over morning coffee, that I learn the news that the largest church in the United States constructed entirely from glass has filed for bankruptcy, a word Dr. Schuller had apparently neglected to eliminate from his vocabulary.
For a civilization so fixated on achieving happiness, we seem remarkably incompetent at the task. One of the best-known general findings of the “science of happiness” has been the discovery that the countless advantages of modern life have done so little to lift our collective mood. The awkward truth seems to be that increased economic growth does not necessarily make for happier societies, just as increased personal income, above a certain basic level, doesn’t make for happier people. Nor does better education, at least according to some studies. Nor does an increased choice of consumer products. Nor do bigger and fancier homes, which instead seem mainly to provide the privilege of more space in which to feel gloomy.
Perhaps you don’t need telling that self-help books, the modern-day apotheosis of the quest for happiness, are among the things that fail to make us happy. But, for the record, research strongly suggests that they are rarely much help. This is why, among themselves, some self-help publishers refer to the “eighteen-month rule,” which states that the person most likely to purchase any given self-help book is someone who, within the previous eighteen months, purchased a self-help book — one that evidently didn’t solve all their problems. When you look at the self-help shelves with a coldly impartial eye, this isn’t especially surprising. That we yearn for neat, book-sized solutions to the problem of being human is understandable, but strip away the packaging, and you’ll find that the messages of such works are frequently banal. The “Seven Habits of Highly Effective People” essentially tells you to decide what matters most to you in life, and then do it; “How to Win Friends and Influence People” advises its readers to be pleasant rather than obnoxious, and to use people’s first names a lot. One of the most successful management manuals of the last few years, “Fish!,” which is intended to help foster happiness and productivity in the workplace, suggests handing out small toy fish to your hardest-working employees.
As we’ll see, when the messages get more specific than that, self-help gurus tend to make claims that simply aren’t supported by more reputable research. The evidence suggests, for example, that venting your anger doesn’t get rid of it, while visualising your goals doesn’t seem to make you more likely to achieve them. And whatever you make of the country-by-country surveys of national happiness that are now published with some regularity, it’s striking that the “happiest” countries are never those where self-help books sell the most, nor indeed where professional psychotherapists are most widely consulted. The existence of a thriving “happiness industry” clearly isn’t sufficient to engender national happiness, and it’s not unreasonable to suspect that it might make matters worse.
Yet the ineffectiveness of modern strategies for happiness is really just a small part of the problem. There are good reasons to believe that the whole notion of “seeking happiness” is flawed to begin with. For one thing, who says happiness is a valid goal in the first place? Religions have never placed much explicit emphasis on it, at least as far as this world is concerned; philosophers have certainly not been unanimous in endorsing it, either. And any evolutionary psychologist will tell you that evolution has little interest in your being happy, beyond trying to make sure that you’re not so listless or miserable that you lose the will to reproduce.
Even assuming happiness to be a worthy target, though, a worse pitfall awaits, which is that aiming for it seems to reduce your chances of ever attaining it. “Ask yourself whether you are happy,” observed the philosopher John Stuart Mill, “and you cease to be so.” At best, it would appear, happiness can only be glimpsed out of the corner of an eye, not stared at directly. (We tend to remember having been happy in the past much more frequently than we are conscious of being happy in the present.) Making matters worse still, what happiness actually is feels impossible to define in words; even supposing you could do so, you’d presumably end up with as many different definitions as there are people on the planet. All of which means it’s tempting to conclude that “How can we be happy?” is simply the wrong question — that we might as well resign ourselves to never finding the answer, and get on with something more productive instead.
But could there be a third possibility, besides the futile effort to pursue solutions that never seem to work, on the one hand, and just giving up, on the other? After several years reporting on the field of psychology as a journalist, I finally realized that there might be. I began to think that something united all those psychologists and philosophers — and even the occasional self-help guru — whose ideas seemed actually to hold water. The startling conclusion at which they had all arrived, in different ways, was this: that the effort to try to feel happy is often precisely the thing that makes us miserable. And that it is our constant efforts to eliminate the negative — insecurity, uncertainty, failure, or sadness — that is what causes us to feel so insecure, anxious, uncertain, or unhappy. They didn’t see this conclusion as depressing, though. Instead, they argued that it pointed to an alternative approach, a “negative path” to happiness, that entailed taking a radically different stance towards those things that most of us spend our lives trying hard to avoid. It involved learning to enjoy uncertainty, embracing insecurity, stopping trying to think positively, becoming familiar with failure, even learning to value death. In short, all these people seemed to agree that in order to be truly happy, we might actually need to be willing to experience more negative emotions — or, at the very least, to learn to stop running quite so hard from them. Which is a bewildering thought, and one that calls into question not just our methods for achieving happiness, but also our assumptions about what “happiness” really means.
by Oliver Burkeman, Salon | Read more:
Photo: Pete Souza
Toques From Underground
For the past two years, in a loft apartment in downtown Los Angeles, Craig Thornton has been conducting an experiment in the conventions of high-end American dining. Several nights a week, a group of sixteen strangers gather around his dining-room table to eat delicacies he has handpicked and prepared for them, from a meticulously considered menu over which they have no say. It is the toughest reservation in the city: when he announces a dinner, hundreds of people typically respond. The group is selected with an eye toward occupational balance—all lawyers, a party foul that was recently avoided thanks to Google, would have been too monochrome—and, when possible, democracy. Your dinner companion might be a former U.F.C. heavyweight champion; the chef Ludo Lefebvre; a Food Network obsessive for whom any meal is an opportunity to talk about a different meal; or a kid who saved his money and drove four hours from Fresno to be there. At the end, you place a “donation”—whatever you think the meal was worth—in a desiccated crocodile head that sits in the middle of the table. Most people pay around ninety dollars; after buying the ingredients and paying a small crew, Thornton usually breaks even. The experiment is called Wolvesmouth, the loft Wolvesden; Thornton is the Wolf. “I grew up in a survival atmosphere,” he says. “I like that aggressiveness. And I like that it’s a shy animal that avoids confrontation.”
Thornton is thirty and skinny, five feet nine, with a lean, carved face and the playful, semi-wild bearing of a stray animal that half-remembers life at the hearth. People of an older generation adopt him. Three women consider themselves to be his mother; two men—neither one his father—call him son. Lost boys flock to him; at any given time, there are a couple of them camping on his floor, in tents and on bedrolls.
Thornton doesn’t drink, smoke, or often sleep, and he once lost fifteen pounds driving across the country because he couldn’t bring himself to eat road food. (At the end of the trip, he weighed a hundred and eighteen.) It is hard for him to eat while working—which sometimes means fasting for days—and in any case he always leaves food on the plate. “I like the idea of discipline and restraint,” he says. “You have to have that edge.” He dresses in moody blacks and grays, with the occasional Iron Maiden T-shirt, and likes his jeans girl-tight. His hair hangs to his waist, but he keeps it tucked up in a newsboy cap with cutouts over the ears. I once saw him take it down and shake it for a second, to the delight of a couple of female diners, then, sheepish, return it to hiding. One of his great fears is to be known as the Axl Rose of cooking.
For a confluence of reasons—global recession, social media, foodie-ism—restaurants have been dislodged from their traditional fixed spots and are loose on the land. Established chefs, between gigs, squat in vacant commercial kitchens: pop-ups. Young, undercapitalized cooks with catchy ideas go in search of drunken undergraduates: gourmet food trucks. Around the world, cooks, both trained and not, are hosting sporadic, legally questionable supper clubs and dinner parties in unofficial spaces. There are enough of them—five hundred or so—that two former Air B-n-B employees founded a site, Gusta.com, to help chefs manage their secret events. The movement is marked by ambition, some of it out of proportion to talent. “You’ve got a lot of people trying to be Thomas Keller in their shitty walkup,” one veteran of the scene told me. If you’re serving the food next to the litter box, how else are you going to get people to pay up?
At Wolvesmouth, Thornton has accomplished something rare: above-ground legitimacy, with underground preëminence. In February, Zagat put Thornton on its first “30 Under 30” list for Los Angeles. “Top Chef” has repeatedly tried to get him on the show, and investors have approached him with plans for making Wolvesmouth into a household name. But he has been reluctant to leave the safety of the den, where he exerts complete control. “I don’t want a business partner who’s like, ‘You know, my mom used to make a great meat loaf—I think we should do something with that,’ ” he told me. “I don’t necessarily need seventeen restaurants serving the kind of food I do. When someone gets a seat at Wolvesmouth, they know I’m going to be behind the stove cooking.” His stubbornness is attractive, particularly to an audience defined by its pursuit of singular food experiences. “He is obsessed with obscurity, which is why I love him,” James Skotchdopole, one of Quentin Tarantino’s producers and a frequent guest, says. Still, there is the problem of the neighbors, who let Thornton hold Wolvesmouth dinners only on weekends, when they are out of town. (He hosts smaller, private events, which pay the rent, throughout the week.) And there are the authorities, who have occasionally shut such operations down.
Getting busted is not always a calamity for the underground restaurateur, however. In 2009, Nguyen Tran and his wife, Thi, who had lost her job in advertising, started serving tofu balls and Vietnamese-style tacos out of their home, and within a few months their apartment was ranked the No. 1 Asian fusion restaurant in Los Angeles on Yelp. (Providence, a fantastically expensive restaurant with two Michelin Stars, was No. 2.) When the health department confronted Nguyen with his Twitter feed touting specials and warned him to stop, Thi was unnerved, but Nguyen insisted that the intervention was a blessing. They moved the restaurant, which they called Starry Kitchen, into a legitimate space, and burnished their creation myth. “It increased our audience,” he told me. “We were seedy, and being caught validated that we really were underground.”
The Truce On Drugs
Three weeks ago, voters in Colorado and Washington chose to legalize marijuana for recreational use in both states—to make the drug legal to sell, legal to smoke, and legal to carry, so long as you are over 21 and you don’t drive while high. No doctor’s note is necessary. Marijuana will no longer be mostly regulated by the police, as if it were cocaine, but instead by the state liquor board (in Washington) and the Department of Revenue (in Colorado), as if it were whiskey. Colorado’s law has an extra provision that permits anyone to grow up to six marijuana plants at home and give away an ounce to friends.
It seems very unlikely that the momentum for legalization will stop on its own. About 50 percent of voters around the country now favor legalizing the drug for recreational use (the number only passed 30 percent in 2000 and 40 percent in 2009), and the younger you are, the more likely you are to favor legal pot. Legalization campaigns have the backing of a few committed billionaires, notably George Soros and Peter Lewis, and the polls suggest that the support for legalization won’t simply be confined to progressive coalitions: More than a third of conservatives are for full legalization, and there is a gender gap, with more men in favor than women. Perhaps most striking of all, an organized opposition seems to have vanished completely. In Washington State, the two registered groups opposing the referendum had combined by early fall to raise a grand total of $16,000. “We have a marriage-equality initiative on the ballot here, and it is all over television, the radio, the newspapers,” Christine Gregoire, the Democratic governor of Washington, told me just before the election. When it comes to marijuana, “it’s really interesting. You don’t hear it discussed at all.” A decade ago, legalization advocates were struggling to corral pledges of support for medicinal pot from very liberal politicians. Now, the old fearful talk about a gateway drug has disappeared entirely, and voters in two states have chosen a marijuana regime more liberal than Amsterdam’s.
These votes suggest what may be a spreading, geographic Humboldt of the mind, in which the liberties of pot in far-northern California, and the unusually ambiguous legal regime there, metastasize around the country. If you live in Seattle and sell licensed marijuana, your operation could be perfectly legal from the perspective of the state government and committing a federal crime at the same time. It is hard to detect much political enthusiasm for a federal pot crackdown, but the complexities that come with these new laws may be hard for Washington to simply ignore. What happens, for instance, when a New York dealer secures a license and a storefront in Denver, and then illegally ships the weed back home? Economists who have studied these questions thoroughly say that they can’t rule out a scenario in which little changes in the consumption of pot—the same people will smoke who always have. But they also can’t rule out a scenario in which consumption doubles, or more than doubles, and pot is not so much less prevalent than alcohol.
And yet the prohibition on marijuana is something more than just a fading relic of the culture wars. It has also been part of the ad hoc assemblage of laws, treaties, and policies that together we call the “war on drugs,” and it is in this context that the votes on Election Day may have their furthest reach. When activists in California tried to fully legalize marijuana there in 2010, the most deeply felt opposition came from the president of Mexico, who called the initiative “absurd,” telling reporters that an America that legalized marijuana had “very little moral authority to condemn a Mexican farmer who for hunger is planting marijuana to sustain the insatiable North American market for drugs.” This year, the reaction from the chief strategist for the incoming Mexican president was even broader and more pointed. The votes in Colorado and Washington, he said, “change somewhat the rules of the game … we have to carry out a review of our joint policies in regard to drug trafficking and security in general.” The suggestion from south of the border wasn’t that cocaine should be subject to the same regime as marijuana. It was: If we are going to rewrite the rules on drug policy to make them more sensible, why stop at only one drug? Why go partway?
Something unexpected has happened in the past five years. The condemnations of the war on drugs—of the mechanized imprisonment of much of our inner cities, of the brutal wars sustained in Latin America at our behest, of the sheer cost of prohibition, now likely past a trillion dollars—have migrated out from the left-wing cul-de-sacs that they have long inhabited and into the political Establishment. “The war on drugs, though well-intentioned, has been a failure,” New Jersey governor Chris Christie said this summer. A global blue-ribbon panel that included both the former Reagan secretary of State George Shultz and Kofi Annan had reached the same conclusion the previous June: “The global war on drugs has failed, with devastating consequences for individuals and societies.” The pressures from south of the border have grown far more urgent: The presidents of Colombia, Guatemala, Mexico, Honduras, Belize, and Costa Rica have all called for a broad reconsideration of the drug war in the past year, and the Organization of American States is now trying to work out what realistic alternatives there might be.
The war on drugs has always depended upon a morbid equilibrium, in which the cost of our efforts to keep narcotics from users is balanced against the consequences—in illness and death—of more widely spread use. But thanks in part to enforcement, addiction has receded in America, meaning, ironically, that the benefits of continuing prohibition have diminished. Meanwhile, the wars in Mexico and elsewhere have escalated the costs, killing nearly 60,000 people in six years. Together those developments have shifted the ethical equation. “There’s now no question,” says Mark Kleiman of UCLA, an influential drug-policy scholar, “that the costs of the drug war itself exceed the costs of drug use. It’s not even close.”
In many ways, what is happening right now is a collection of efforts, some liberating and some scary, to reset that moral calibration, to find a new equilibrium. The prohibition on drugs did not begin as neatly as the prohibition on alcohol once did, with a constitutional amendment, and it is unlikely to end neatly, with an act of a legislature or a new international treaty. Nor is the war on drugs likely to end with something that looks exactly like a victory. What is happening instead is more complicated and human: Without really acknowledging it, we are beginning to experiment with a negotiated surrender.
by Benjamin Wallace-Wells, New York Magazine | Read more:
Photo: Kenji Aoki
Sunday, November 25, 2012
Rolling Stones
[ed. The entire documentary (92 min.), including Altamont.]