Thursday, May 31, 2012
Self-Portrait in a Sheet Mirror: On Vivian Maier
Imagine being the kind of person who finds everything provocative. All you have to do is set out on a walk through city streets, a Rolleiflex hanging from a strap around your neck, and your heart starts pounding in anticipation. In a world that never fails to startle, it is up to you to find the perfect angle of vision and make use of the available light to illuminate thrilling juxtapositions. You have the power to create extraordinary images out of ordinary scenes, such as two women crossing the street, minks hanging listlessly down the backs of their matching black jackets; or a white man dropping a coin in a black man’s cup while a white dog on a leash looks away, as if in embarrassment; or a stout old woman braced in protest, gripping the hands of a policeman; or three women waiting at a bus stop, lips set in grim response to the affront represented by your camera, their expressions saying “go away” despite the sign behind them announcing, “Welcome to Chicago.”
Welcome to this crowded stage of a city, where everyone is an actor—the poor, the rich, the policemen and street vendors, the nuns and nannies. Even a leaf, a balloon, a puddle, the corpse of a cat or horse can play a starring role. And you are there, too, as involved in the action of this vibrant theater as anyone else, caught in passing at just the right time, your self-portraits turned to vaporous mirages in store windows, outlined in the silhouettes of shadows and reflected in mirrors that you find in unexpected places. You have to be quick if you’re going to get the image you want. You are quick—so quick that you can snap the picture before the doorman has a chance to come outside and tell you to move on.
There is so much drama worth capturing on film; you don’t have the time or resources to turn all of your many thousands of negatives into prints. Anyway, prints aren’t the point of these adventures. It’s enough to delight in your own ingenuity over and over again, with each click of the shutter. You’ll leave the distribution of your art to someone else.
On a winter’s day in 2007, a young realtor named John Maloof paid $400 for a box full of negatives that was being sold by an auction house in Chicago. The box had been repossessed from a storage locker gone into arrears, and Maloof was hoping it contained images he could use to illustrate a book he was co-writing about the Chicago neighborhood of Portage Park. As it turned out, he had stumbled upon a much more valuable treasure: the work of a photographer who looks destined to take her place as one of the pre-eminent street photographers of the twentieth century.
Like all good stories, this one is full of false leads and startling surprises. Maloof was unimpressed initially by the negatives and disappointed that he hadn’t found any materials for his book on Portage Park. As he told a reporter from the Guardian, “Nothing was pertinent for the book so I thought: ‘Well, this sucks, but we can probably sell them on eBay or whatever.’” He created a blog and posted scans of the negatives, but after the blog received no visitors for months, he posted the scans on Flickr. People began to take notice, and their responses helped Maloof appreciate the importance of his purchase.
His growing excitement led him to take a crash course in photography, buy a Rolleiflex—the same kind of camera that had been used to capture the images on the negatives—and even build a darkroom in his attic. He tracked down other buyers who had been at the auction and persuaded them to sell him their boxes, ultimately accumulating a collection of more than 100,000 negatives and 3,000 prints, hundreds of rolls of film, home movies and audiotapes, as well as personal items like clothes, letters and books on photography. A second Chicago collector, Jeffrey Goldstein, held on to materials he acquired from one of the initial bidders. But Maloof estimates that he succeeded in gathering 90 percent of the photographer’s archive.
At some point between 2007 and 2009, Maloof set out to identify the person who had taken the photographs, though this portion of the story remains murky. According to the Chicago Sun-Times, Maloof was “sifting through the negatives in 2009 when he found” a name, that of Vivian Maier, “on an envelope and Googled it. What he found was an obit.” But in a discussion on Flickr, Maloof indicated that he had found Maier’s name earlier. He reported that he came across her name on a photo-label envelope a year after he’d purchased the materials from the auction house. He considered trying to meet Maier but was told by the auction house that she was ill. “I didn’t want to bother her,” he said. “Soooo many questions would have been answered if I had. It eats at me from time to time.” In April 2009 he Googled Maier’s name and found her obituary, which had been placed the previous day. “How weird?” Maloof commented on Flickr. (...)
In an interview with Chicago Magazine, Lane Gensburg described his former nanny as having “an amazing ability to relate to children.” Gensburg indicated that he wanted nothing unflattering said about Maier, not foreseeing how an offhand epithet would, for some, become the basis of her legacy: “She was like Mary Poppins,” he reportedly said, introducing a loving comparison that has been repeated less lovingly in subsequent accounts of Maier’s life. Maier may have left behind a huge archive of fascinating visual material that is inviting the world’s attention. But it’s not easy for Mary Poppins to be taken seriously as an artist.
by Joanna Scott, The Nation | Read more:
Photo: Vivian Maier, Self Portrait
Meet 'Flame,' The Massive Spy Malware Infiltrating Iranian Computers
A massive, highly sophisticated piece of malware has been newly found infecting systems in Iran and elsewhere and is believed to be part of a well-coordinated, ongoing, state-run cyberespionage operation. (...)
Early analysis of Flame by the Lab indicates that it’s designed primarily to spy on the users of infected computers and steal data from them, including documents, recorded conversations and keystrokes. It also opens a backdoor to infected systems to allow the attackers to tweak the toolkit and add new functionality.
The malware, which is 20 megabytes when all of its modules are installed, contains multiple libraries, SQLite3 databases, various levels of encryption — some strong, some weak — and 20 plug-ins that can be swapped in and out to provide various functionality for the attackers. It even contains some code that is written in the Lua programming language — an uncommon choice for malware. (...)
“It’s a very big chunk of code. Because of that, it’s quite interesting that it stayed undetected for at least two years,” Gostev said. He noted that there are clues that the malware may actually date back to as early as 2007, around the same time period when Stuxnet and DuQu are believed to have been created.
Gostev says that because of its size and complexity, complete analysis of the code may take years.
“It took us half a year to analyze Stuxnet,” he said. “This is 20 times more complicated. It will take us 10 years to fully understand everything.”
Among Flame’s many modules is one that turns on the internal microphone of an infected machine to secretly record conversations that occur either over Skype or in the computer’s near vicinity; a module that turns Bluetooth-enabled computers into a Bluetooth beacon, which scans for other Bluetooth-enabled devices in the vicinity to siphon names and phone numbers from their contacts folder; and a module that grabs and stores frequent screenshots of activity on the machine, such as instant-messaging and e-mail communications, and sends them via a covert SSL channel to the attackers’ command-and-control servers.
The malware also has a sniffer component that can scan all of the traffic on an infected machine’s local network and collect usernames and password hashes that are transmitted across the network. The attackers appear to use this component to hijack administrative accounts and gain high-level privileges to other machines and parts of the network. (...)
Because Flame is so big, it gets loaded to a system in pieces. The machine first gets hit with a 6-megabyte component, which contains about half a dozen other compressed modules inside. The main component extracts, decompresses and decrypts these modules and writes them to various locations on disk. The number of modules in an infection depends on what the attackers want to do on a particular machine.
Once the modules are unpacked and loaded, the malware connects to one of about 80 command-and-control domains to deliver information about the infected machine to the attackers and await further instruction from them. The malware contains a hardcoded list of about five domains, but also has an updatable list, to which the attackers can add new domains if these others have been taken down or abandoned.
While the malware awaits further instruction, the various modules in it might take screenshots and sniff the network. The screenshot module grabs desktop images every 15 seconds when a high-value communication application is being used, such as instant messaging or Outlook, and once every 60 seconds when other applications are being used.
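To make that cadence concrete, here is a minimal sketch of the interval rule the article describes. It is illustrative only: the application names are assumptions for the example, not Flame's actual target list, and nothing here is code from the malware itself.

```python
# Illustrative only: the app names below are assumptions, not Flame's real list.
HIGH_VALUE_APPS = {"outlook", "skype", "instant messenger"}

def screenshot_interval(active_app: str) -> int:
    """Return the capture interval, in seconds, described in the article:
    every 15 seconds while a high-value communication app is in use,
    every 60 seconds otherwise."""
    return 15 if active_app.lower() in HIGH_VALUE_APPS else 60

assert screenshot_interval("Outlook") == 15
assert screenshot_interval("Notepad") == 60
```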
Although the Flame toolkit does not appear to have been written by the same programmers who wrote Stuxnet and DuQu, it does share a few interesting things with Stuxnet.
Stuxnet is believed to have been written through a partnership between Israel and the United States, and was first launched in June 2009. It is widely believed to have been designed to sabotage centrifuges used in Iran’s uranium enrichment program. DuQu was an espionage tool discovered on machines in Iran, Sudan, and elsewhere in 2011 that was designed to steal documents and other data from machines. Stuxnet and DuQu appeared to have been built on the same framework, using identical parts and using similar techniques. But Flame doesn’t resemble either of these in framework, design or functionality.
by Kim Zetter, Wired | Read more:
Image: Courtesy of Kaspersky
Booktography is fast becoming a viral fad all over the web. The best examples are those that seamlessly integrate the book’s cover with a live person. A dead person could also serve the purposes of this meme, but that would be rather macabre. Much like photobombs and jumping-in-the-air photos, the originator of the concept is unknown, but the creative idea behind it will go on to spawn many more memes.
More here:
Freaks, Geeks and Microsoft
When the Kinect was introduced in November 2010 as a $150 motion-control add-on to Microsoft’s Xbox consoles, it drew attention from more than just video-gamers. A slim, black, oblong 11½-inch wedge perched on a base, it allowed a gamer to use his or her body to throw virtual footballs or kick virtual opponents without a controller, but it was also seen as an important step forward in controlling technology with natural gestures.
In fact, as the company likes to note, the Kinect set “a Guinness World Record for the fastest-selling consumer device ever.” And at least some of the early adopters of the Kinect were not content just to play games with it. “Kinect hackers” were drawn to the fact that the object affordably synthesizes an arsenal of sophisticated components — notably, a fancy video camera, a “depth sensor” to capture visual data in three dimensions and a multiarray microphone capable of a similar trick with audio.
Combined with a powerful microchip and software, these capabilities could be put to uses unrelated to the Xbox. Like: enabling a small drone to “see” its surroundings and avoid obstacles; rigging up a 3-D scanner to create small reproductions of most any object (or person); directing the music of a computerized orchestra with conductorlike gestures; remotely controlling a robot to brush a cat’s fur. It has been used to make animation, to add striking visual effects to videos, to create an “interactive theme park” in South Korea and to control a P.C. by the movement of your hands (or, in a variation developed by some Japanese researchers, your tongue).
At the International Consumer Electronics Show earlier this year, Steve Ballmer, Microsoft’s chief executive, used his keynote presentation to announce that the company would release a version specifically meant for use outside the Xbox context and to indicate that the company would lay down formal rules permitting commercial uses for the device. A result has been a fresh wave of Kinect-centric experiments aimed squarely at the marketplace: helping Bloomingdale’s shoppers find the right size of clothing; enabling a “smart” shopping cart to scan Whole Foods customers’ purchases in real time; making you better at parallel parking.
An object that spawns its own commercial ecosystem is a thing to take seriously. Think of what Apple’s app store did for the iPhone, or for that matter how software continuously expanded the possibilities of the personal computer. Patent-watching sites report that in recent months, Sony, Apple and Google have all registered plans for gesture-control technologies like the Kinect. But there is disagreement about exactly how the Kinect evolved into an object with such potential. Did Microsoft intentionally create a versatile platform analogous to the app store? Or did outsider tech-artists and hobbyists take what the company thought of as a gaming device and redefine its potential?
This clash of theories illustrates a larger debate about the nature of innovation in the 21st century, and the even larger question of who, exactly, decides what any given object is really for. Does progress flow from a corporate entity’s offering a whiz-bang breakthrough embraced by the masses? Or does techno-thing success now depend on the company’s acquiescing to the crowd’s input? Which vision of an object’s meaning wins? The Kinect does not neatly conform to either theory. But in this instance, maybe it’s not about whose vision wins; maybe it’s about the contest.
by Rob Walker, NY Times | Read more:
Illustration by Robbie Porter
Internet to Grow Fourfold in Four Years
Cisco Systems (NASDAQ: CSCO) put out its annual Visual Networking Index (VNI) forecast for 2011 to 2016. The huge router company projects that the Internet will be four times as large in four years as it is this year. The “wired” world, which has changed human interaction and expanded access to information, will explode, if Cisco is correct.
It is hard to find an analogue to this expansion in recent business and social history. Perhaps the growth of the number of TV sets or cable use. Or, maybe the growth of car ownership at the beginning of the last century. At any rate, the growth cannot be matched by anything that has happened in recent memory. The Cisco forecast means that billions of people will be tethered to the Internet. Cisco does not believe its job is to say what the impact of this will be, but there are some reasonable guesses.
It is to Cisco’s advantage to make telecom, cable and wireless providers believe these numbers, because the increased use of the company’s routers will be needed to carry the burgeoning load. But, based on recent history, it is not hard to believe that Cisco is right — at least directionally.
The path to the fourfold increase includes these things:
- By 2016, the forecast projects there will be nearly 18.9 billion network connections — almost 2.5 connections for each person on earth — compared with 10.3 billion in 2011.
- By 2016, there are expected to be 3.4 billion Internet users — about 45% of the world’s projected population, according to UN estimates.
- The average fixed broadband speed is expected to increase nearly fourfold, from 9 megabits per second (Mbps) in 2011 to 34 Mbps in 2016.
- By 2016, 1.2 million video minutes — the equivalent of 833 days (or more than two years) — will travel the Internet every second.
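A quick back-of-the-envelope check shows how these figures hang together. This is a sketch only; the 2016 world-population figure of roughly 7.5 billion is an assumption, since the article does not state one.

```python
# Rough sanity check of the forecast figures quoted above.
connections_2016 = 18.9e9
connections_2011 = 10.3e9
world_pop_2016 = 7.5e9                        # assumption, not from the article

print(connections_2016 / world_pop_2016)      # ~2.5 connections per person
print(connections_2016 / connections_2011)    # ~1.8x more connections than 2011

speed_2011_mbps, speed_2016_mbps = 9, 34
print(speed_2016_mbps / speed_2011_mbps)      # ~3.8, i.e. "nearly fourfold"

video_minutes_per_second = 1.2e6
print(video_minutes_per_second / (60 * 24))   # ~833 days of video every second
```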
The weight of video use is likely to be the greatest burden on Internet systems. While news is probably a large part of this, entertainment is likely to be larger. Businesses modeled on companies like Netflix (NASDAQ: NFLX) and Google’s (NASDAQ: GOOG) YouTube will expand not just in America and Europe. Similar companies will be established in the most populous nations, with the largest probably coming from China, Russia and much of South America. No one knows yet from where the content for these new businesses, whether or not they are legitimate, will come. If the past is any indication, a great deal will originate from U.S. studios. It will either be a revenue windfall for them or part of the growing trouble with piracy.
by Douglas A. McIntyre, 24/7 Wall Street | Read more:
Wednesday, May 30, 2012
Josef Sudek (Czech, 1896-1976). Advertising photograph for Ladislav Sutnar porcelain set (with black rim), 1932. Gelatin silver print. 23.2 x 17.1 cm.
The Art Institute of Chicago, Laura T. Magnuson Acquisition Fund.
via:
The Antidepressant Wars
I began to think of suicide at sixteen. An anxious and driven child, I entered in my mid-teens a clinical depression that would last for 40 years. I participated in psychotropic drug therapy for almost 30 of those, and now, owing in part, but only in part, to the drug Cymbalta, I have respite from the grievous suffering that is mental illness.
As a health policy scholar, I understand the machinations of the pharmaceutical industry. My students learn about “me-too” drugs, which barely improve on existing medications, and about “pay-for-delay,” whereby pharmaceutical companies cut deals with manufacturers of generic drugs to keep less expensive products off the market. I study policymakers’ widespread use of effectiveness research and their belief that effectiveness will contain costs while improving quality. I appreciate that randomized controlled trials are the gold standard for determining what works. Specifically, I know that antidepressant medication is vigorously promoted, that the diagnostic criteria for depression are muddled and limited, and that recent research attributes medicated patients’ positive outcomes to the placebo effect. In my own research and advocacy work, I take a political, rather than a medical, approach to recovery from mental illness.
Cymbalta in particular epitomizes pharmaceutical imperialism. Approved by the FDA in August 2004 for the treatment of major depressive disorder, it has since gotten the go-ahead for treating generalized anxiety disorder, fibromyalgia, and chronic musculoskeletal pain, including osteoarthritis and lower back pain. It remains under patent to Eli Lilly.
I would not have been surprised if Cymbalta had not worked for me or had not bested the myriad drugs and drug combinations that came before. My path through clinical depression is strewn with discarded remedies. “Who are these people?” I wondered about patients who were said to achieve happiness with the first pill and therefore to violate societal notions of identity and independence. I was just trying to get out of bed, and although my first antidepressant, at age 26, had a strong positive result, it also had incommodious side effects, and relief was tentative and partial. Decades of new and evolving treatment regimens followed. I have been treated with every class of antidepressant medication, often in combination with other psychotropic drugs. Some drugs worked better than others, some did not work at all, and some had unendurable side effects. But Cymbalta did not disappoint, and now I have become a teller of two tales, one about health policy, the other about health.
Like many depressed people, I resisted the idea of psychotropic medication. I was deeply hurt when my psychotherapist suggested I see a psychiatrist about antidepressant drugs. How could she think I was that crazy or that weak? But she said she was concerned for my survival, and I eventually did as she asked. I became an outpatient at a venerable psychiatric hospital, where I found a kind stranger who knew my deepest secrets and wanted to end my suffering. He wrote a prescription, and thus began my 30-year trek.
Depression is sometimes confused with sadness. Many depressed people are very sad, as I was, but the essence of my depression was feeling dead among the living. Everything was just so hard. William Styron describes depression as “a storm of murk.” Andrew Solomon’s atlas of depression is titled Noonday Demon. I too found depression to be fierce, wrapping me in a heavy woolen blanket and mocking my attempts to cast it off. The self-loathing was palpable; it felt like I was chewing glass. I sensed that other people were seeing things I did not, and apparently they were, because when I began my first course of antidepressants, it was as if someone had turned on the lights. It did not make me happy or even content. The world simply looked different—brighter, deeper—and I was a part of it. I saw something other than the impassable flatness and enervating dullness, and I was amazed.
My progress came at a cost. In the late 1970s, before Prozac, antidepressant medication was seldom spoken of. The people I told about my treatment echoed my first reaction and sang throaty choruses of why-don’t-you-just-cheer-up and won’t-this-make-you-a-drug-addict. I was also drowsy after I ate, my mouth was always dry, and when a second medication was added, I began to lose control of my limbs and fall down. I insisted to my psychiatrist that it was the second drug that was causing me to fall. A champion of that one, he instructed me to discontinue the first. I responded in the way only privileged patients can: I went around him, using personal connections to wrest an informal second opinion from a resident in the lab run by my psychiatrist’s mentor. My doctor was convinced, and a little embarrassed, and we both learned something about therapeutic alliances. (...)
In the years that followed, we just kept trying. I would remain on a regimen until my psychiatrist proposed another, and, looking back, I was remarkably game. I was treated with monoamine oxidase inhibitors, which can be fatal in combination with some foods, and a famous psychiatrist in Manhattan prescribed a drug sold only in Canada. When a medication produced double vision, my psychiatrist suggested I drive with one eye closed. Drug cocktails deteriorated into over-medication. I tried to enroll in a clinical trial that would implant electrodes in my brain, but it was already full. There was only one remedy I rejected outright: electroconvulsive therapy. I was told by other patients about their memory loss, and I needed a good memory to do my job. (...)
Medications that affect the mind seem to discomfit us deeply, culturally, viscerally. And so do the people who need them: psychiatric patients have gone, in this discourse, from covetous of an unfair advantage to oblivious to a colossal con. I am not sure which characterization I prefer, but I know my heart will break when a friend in the grip of depression forgoes medication—not because it is not right for her, but because it is only for cheaters or fools.
Most parties to the debate agree that antidepressants can be effective for severely depressed patients such as me, but selfishly I fear the rhetoric of antidepressant uselessness will influence the pharmacy policies of my health plan. At present I am charged an inflated copayment for Cymbalta because my health plan claims it is no more effective than generic antidepressants. I am not privy to the basis for this determination; I do not know if it is based on average treatment effects, the preferences of plan professionals, or an overriding concern for cost. I do know that it does not include my experience, and when I queried the plan about an appeal, I was told I could appeal but should not bother: there are no successful appeals. The plan representative was unmoved by my savings on psychiatry, rheumatology, and hospitalization. She intimated that it is just too hard to satisfy individuals and that the plan has enough to do managing costs.
by Sandra J. Tanenbaum, Boston Review | Read more:
Photo: Jordan Olels
Marjorie and the Birds (Fiction)
After her husband died, Marjorie took up hobbies, lots of them, just to see what stuck. She went on a cruise for widows and widowers, which was awful for everyone except the people who hadn’t really loved their spouses to begin with. She took up knitting, which made her fingers hurt, and modern dance for seniors, which made the rest of her body hurt, too. Most of all, Marjorie enjoyed birding, which didn’t seem like a hobby at all, but like agreeing to be more observant. She’d always been good at paying attention.
She signed up for an introductory course at the Museum of Natural History, sending her check in the mail with a slip of paper wrapped around it. It was the sort of thing that her children made fun of her for, but Marjorie had her ways. The class met twice a week at seven in the morning, always gathering on the Naturalist’s Bridge just past the entrance to the park at 77th Street. Marjorie liked that, the consistency. Even on days when she was late—all year, it had only happened twice, and she’d been mortified both times—Marjorie knew just where to find the group, as they always wound around the park on the same path, moving at a snail’s pace, a birder’s pace, their eyes up in the trees and their hands loosely holding onto the binoculars around their necks.
Dr. Lawrence was in charge. He was a small man, smaller than Marjorie, who stood five foot seven in her walking shoes. His hair was thin but not gone, pale but not white. To Marjorie, he seemed a youthful spirit, though he must have been in his late fifties. Dr. Lawrence had another job at the museum, unrelated to birds. Marjorie could never remember exactly what it was. He arranged bones, or pinned butterfly wings, or dusted off the dinosaurs with a toothbrush. She was too embarrassed to keep asking. But the birds were his real love, that was clear. Marjorie loved listening to Dr. Lawrence describe what he saw in the trees. Warbling in the fir tree, behind the maple, eleven o’clock. Upper branches, just below the moon. Do you hear them calling to each other? Don’t you hear them? Sometimes Marjorie would close her eyes, even though she knew that wasn’t the point. But the park sounded so beautiful to her, like it and she had been asleep together and were only now waking up, were only now beginning to understand what was possible on a daily basis.
Marjorie’s husband, Steve, had had a big personality and the kind of booming voice that often made people turn around in restaurants. In the end, it was his heart that stopped working, as they had long suspected it would be. There had been too many decades of three-hour dinners, too much butter, too much fun. Steve had resisted all the diets his doctors suggested on principle—if that was living, what was the point? He’d known that it would happen this way, that he would go down swinging, or swigging as the case may have been. Marjorie understood. It was the children who argued.
Their daughter, Kate, was the eldest, and already had two children of her own. She would send articles over email, knowing that neither of her parents would read them. Lowering his salt, lowering his sugar, lowering his alcohol intake. Simple exercises that could be done while sitting in a chair—Kate had tried them, they were easy. Marjorie knew how to press delete.
by Emma Straub, Fifty-Two Stories | Read more:
Pictures and Vision
Okay, I’m going to argue that the futures of Facebook and Google are pretty much totally embedded in these two images:
The first one you know. What you might not know is just how completely central photos are to Facebook’s product, and by extension its whole business. The company’s S1 filing reports that, in the last three months of 2011, users uploaded around 250 million photos every day. For context, around 480 million people used the service on any given day in that span. That’s like… quite a ratio. A whole lot of people sign up for Facebook because they want to see a friend or family member’s photos, and a whole lot of people return to the site to see new ones. (And I mean, really: does the core Facebook behavior of stalking provide any satisfaction without photos? No, it does not.)
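For what it's worth, the two figures quoted above work out to roughly half a photo uploaded per daily user per day (a quick check using only the numbers in that paragraph):

```python
photos_per_day = 250e6   # daily uploads, per Facebook's S-1 (late 2011)
daily_users = 480e6      # daily active users in the same span

print(photos_per_day / daily_users)  # ~0.52 photos per daily user, every day
```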
Really, Facebook is the world’s largest photo sharing site—that also happens to be a social network and a login system. In this context, the Instagram acquisition and the new Facebook Camera app make perfect sense; this is Facebook trebling down on photos. The day another service steals the photo throne is the day that Facebook’s trajectory starts to bend.
(As an aside, I’d love to know how many photo views happen daily on Facebook. My guess is that the number utterly dwarfs every other metric in the system—other than pageviews, of which it is obviously a subset.)
You might not recognize the second image up above. It was posted on Sebastian Thrun’s Google+ page, and it was taken with a working version of Project Glass out in the wild, or at least in Thrun’s backyard. It’s a POV shot taken hands-free: Thrun’s son Jasper, just as Thrun saw him.
Thrun also demonstrated Glass on Charlie Rose and it’s worth watching the first five minutes there just to see (a) exactly how weird the glasses look, and (b) exactly how wonderful the interaction seems. This isn’t about sharing pictures. This is about sharing your vision.
Now, Google’s big pitch video for Glass is all about utility, with just a dollop of delight at the end, but don’t let that fool you. There is serious delight waiting here. Imagine actors and athletes doing what they do today on Twitter—sharing their adventures from a first-person POV—except doing it with Glass. It’s pretty exciting, actually, and if the glasses look criminally dorky, well, we didn’t expect to find ourselves walking the world staring down into skinny little black boxes, either.
So the titanic showdown between Facebook and Google might not be the News Feed vs. Google+ after all. It might be Facebook Camera vs. Project Glass.
It might, in fact, be pictures vs. vision.
by Robin Sloan | Read more:
Designer Bugs
The prospect of artificial life is so outlandish that we rarely even mean the words. Most of the time we mean clever androids or computers that talk. Even the pages of science fiction typically stop short: in the popular dystopian narrative, robots are always taking over, erecting armies, firing death rays and sometimes even learning to love, but underneath their replicant skin, they tend to be made of iron ore. From the Terminator to the Matrix to the awakening of HAL, what preoccupies the modern imagination is the sentient evolution of machines, not artificial life itself.
But inside the laboratories of biotechnology, a more literal possibility is taking hold: What if machines really were alive? To some extent, this is already happening. Brewers and bakers have long relied on the diligence of yeast to make beer and bread, and in medical manufacturing, it has become routine to harness organisms like Penicillium to generate drugs. At DuPont, engineers are using modified E. coli to produce polyester for carpet, and the pharmaceutical giant Sanofi is using yeast injected with strips of synthetic DNA to manufacture medicine. But the possibility of designing a new organism, entirely from synthetic DNA, to produce whatever compounds we want, would mark a radical leap forward in biotechnology and a paradigm shift in manufacturing.
The appeal of biological machinery is manifold. For one thing, because organisms reproduce, they can generate not only their target product but also more factories to do the same. Then too, microbes use novel fuel. Chances are, unless you’ve slipped off the grid, virtually every machine you own, from your iPhone to your toaster oven, depends on burning fossil fuels to work. Even if you have slipped off the grid, manufacturing those devices required massive carbon emissions. This is not necessarily the case for biomachinery. A custom organism could produce the same plastic or metal as an industrial plant while feeding on the compounds in pollution or the energy of the sun.
Then there is the matter of yield. Over the last 60 years, agricultural production has boomed in large part through plant modification, chemical additives and irrigation. But as the world population continues to soar, adding nearly a billion people over the past decade, major aquifers are giving out, and agriculture may not be able to keep pace with the world’s needs. If a strain of algae could secrete high yields of protein, using less land and water than traditional crops, it may represent the best hope to feed a booming planet.
Finally, the rise of biomachinery could usher in an era of spot production. “Biology is the ultimate distributed manufacturing platform,” Drew Endy, an assistant professor at Stanford University, told me recently. Endy is trained as an engineer but has become a leading proponent of synthetic biology. He sketched a picture of what “distributed manufacturing” by microbes might look like: say a perfume company could design a bacterium to produce an appealing aroma; “rather than running this in a large-scale fermenter, they would upload the DNA sequences onto the future equivalent of iTunes,” he said. “People all over the world could then pay a fee to download the information.” Then, Endy explained, customers could simply synthesize the bugs at home and grow them on their skin. “They could transform epidermal ecosystems to have living production of scents and fragrances,” he said. “Living perfume!”
Whether all this could really happen — or should — depends on whom you ask. The challenge of building a synthetic bacterium from raw DNA is as byzantine as it probably sounds. It means taking four bottles of chemicals — the adenine, thymine, cytosine and guanine that make up DNA — and linking them into a daisy chain at least half a million units long, then inserting that molecule into a host cell and hoping it will spring to life as an organism that not only grows and reproduces but also manufactures exactly what its designer intended. (A line about hubris, Icarus and Frankenstein typically follows here.) Since the late 1990s, laboratories around the world have been experimenting with synthetic biology, but many scientists believe that it will take decades to see major change. “We’re still really early,” Endy said. “Or to say it differently, we’re still really bad.”
Venter disagrees. The future, he says, may be sooner than we think. Much of the groundwork is already done. In 2003, Venter’s lab used a new method to piece together a strip of DNA that was identical to a natural virus, then watched it spring to action and attack a cell. In 2008, they built a longer genome, replicating the DNA of a whole bacterium, and in 2010 they announced that they brought a bacterium with synthetic DNA to life. That organism was still mostly a copy of one in nature, but as a flourish, Venter and his team wrote their names into its DNA, along with quotes from James Joyce and J. Robert Oppenheimer and even secret messages. As the bacteria reproduced, the quotes and messages and names remained in the colony’s DNA.
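As an aside on what "writing" names and quotes into DNA means in practice: each of the four bases can carry two bits, so any text can be mapped to a nucleotide sequence. The toy encoding below is purely illustrative; it is an assumption for the example and not the watermarking scheme Venter's team actually used.

```python
# Toy 2-bits-per-base text encoding; NOT the Venter lab's actual watermark scheme.
BASES = "ACGT"

def text_to_dna(text: str) -> str:
    """Map each byte of the text to four bases, two bits per base."""
    out = []
    for byte in text.encode("utf-8"):
        for shift in (6, 4, 2, 0):
            out.append(BASES[(byte >> shift) & 0b11])
    return "".join(out)

def dna_to_text(seq: str) -> str:
    """Invert text_to_dna."""
    data = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        data.append(byte)
    return data.decode("utf-8")

message = "designer life"
assert dna_to_text(text_to_dna(message)) == message
```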
In theory, this leaves just one step between Venter and a custom species. If he can write something more useful than his name into the synthetic DNA of an organism, changing its genetic function in some deliberate way, he will have crossed the threshold to designer life.
Unless he already has.
by Wil S. Hylton, NY Times | Read more:
Photo: Brad Swonetz
Tuesday, May 29, 2012
The Evolution of the American Dream
In the New York Times earlier this year, Paul Krugman wrote of an economic effect called "The Great Gatsby curve," a graph that measures fiscal inequality against social mobility and shows that America's marked economic inequality means it has correlatively low social mobility. In one sense this hardly seems newsworthy, but it is telling that even economists think that F Scott Fitzgerald's masterpiece offers the most resonant (and economical) shorthand for the problems of social mobility, economic inequality and class antagonism that we face today. Nietzsche – whose Genealogy of Morals Fitzgerald greatly admired – called the transformation of class resentment into a moral system "ressentiment"; in America, it is increasingly called the failure of the American dream, a failure now mapped by the "Gatsby curve".
Fitzgerald had much to say about the failure of this dream, and the fraudulences that sustain it – but his insights are not all contained within the economical pages of his greatest novel. Indeed, when Fitzgerald published The Great Gatsby in April 1925, the phrase "American dream" as we know it did not exist. Many now assume the phrase stretches back to the nation's founding, but "the American dream" was never used to describe a shared national value system until a popular 1917 novel called Susan Lenox: Her Fall and Rise, which remarked that "the fashion and home magazines … have prepared thousands of Americans … for the possible rise of fortune that is the universal American dream and hope." The OED lists this as the first recorded instance of the American dream, although it's not yet the catchphrase as we know it. That meaning is clearly emerging – but only as "possible" rise of fortune; a dream, not a promise. And as of 1917, at least some Americans were evidently beginning to recognise that consumerism and mass marketing were teaching them what to want, and that rises of fortune would be measured by the acquisition of status symbols. The phrase next appeared in print in a 1923 Vanity Fair article by Walter Lippmann, "Education and the White-Collar Class" (which Fitzgerald probably read); it warned that widening access to education was creating untenable economic pressure, as young people graduated with degrees only to find that insufficient white-collar jobs awaited. Instead of limiting access to education in order to keep such jobs the exclusive domain of the upper classes (a practice America had recently begun to justify by means of a controversial new idea called "intelligence tests"), Lippmann argued that Americans must decide that skilled labour was a proper vocation for educated people. There simply weren't enough white-collar jobs to go around, but "if education could be regarded not as a step ladder to a few special vocations, but as the key to the treasure house of life, we should not even have to consider the fatal proposal that higher education be confined to a small and selected class," a decision that would mark the "failure of the American dream" of universal education.
These two incipient instances of the phrase are both, in their different ways, uncannily prophetic; but as a catchphrase, the American dream did not explode into popular culture until the 1931 publication of a book called The Epic of America by James Truslow Adams, which spoke of "the American dream of a better, richer and happier life for all our citizens of every rank, which is the greatest contribution we have made to the thought and welfare of the world. That dream or hope has been present from the start. Ever since we became an independent nation, each generation has seen an uprising of ordinary Americans to save that dream from the forces that appear to be overwhelming it."
In the early years of the great depression Adams's book sparked a great national debate about the promise of America as a place that fosters "the genuine worth of each man or woman", whose efforts should be restricted by "no barriers beyond their own natures". Two years later, a New York Times article noted: "Get-rich-quick and gambling was the bane of our life before the smash"; they were also what caused the "smash" itself in 1929. By 1933, Adams was writing in the New York Times of the way the American dream had been hijacked: "Throughout our history, the pure gold of this vision has been heavily alloyed with the dross of materialistic aims. Not only did the wage scales and our standard of living seem to promise riches to the poor immigrant, but the extent and natural wealth of the continent awaiting exploitation offered to Americans of the older stocks such opportunities for rapid fortunes that the making of money and the enjoying of what money could buy too often became our ideal of a full and satisfying life. The struggle of each against all for the dazzling prizes destroyed in some measure both our private ideals and our sense of social obligation." As the Depression deepened, books such as Who Owns America? A New Declaration of Independence were arguing that "monopoly capitalism is morally ugly as well as economically unsound," that in America "the large majority should be able – in accordance with the tenets of the 'American dream' … to count on living in an atmosphere of equality, in a world which puts relatively few barriers between man and man." Part of the problem, however, was that the dream itself was being destroyed by "the friends of big business, who dishonour the dream by saying that it has been realised" already.
The phrase the American dream was first invented, in other words, to describe a failure, not a promise: or rather, a broken promise, a dream that was continually faltering beneath the rampant monopoly capitalism that set each struggling against all; and it is no coincidence that it was first popularised during the early years of the great depression. The impending failure had been clear to Fitzgerald by the time he finished Gatsby – and the fact that in 1925 most Americans were still recklessly chasing the dream had a great deal to do with the initial commercial and critical failure of The Great Gatsby, which would not be hailed as a masterpiece until the 50s, once hindsight had revealed its prophetic truth.
by Sarah Churchwell, The Guardian | Read more:
Photograph: Courtesy Everett Collection/Rex Features
Why We Lie
Not too long ago, one of my students, named Peter, told me a story that captures rather nicely our society's misguided efforts to deal with dishonesty. One day, Peter locked himself out of his house. After a spell, the locksmith pulled up in his truck and picked the lock in about a minute.
"I was amazed at how quickly and easily this guy was able to open the door," Peter said. The locksmith told him that locks are on doors only to keep honest people honest. One percent of people will always be honest and never steal. Another 1% will always be dishonest and always try to pick your lock and steal your television; locks won't do much to protect you from the hardened thieves, who can get into your house if they really want to. The purpose of locks, the locksmith said, is to protect you from the 98% of mostly honest people who might be tempted to try your door if it had no lock.
We tend to think that people are either honest or dishonest. In the age of Bernie Madoff and Mark McGwire, James Frey and John Edwards, we like to believe that most people are virtuous, but a few bad apples spoil the bunch. If this were true, society might easily remedy its problems with cheating and dishonesty. Human-resources departments could screen for cheaters when hiring. Dishonest financial advisers or building contractors could be flagged quickly and shunned. Cheaters in sports and other arenas would be easy to spot before they rose to the tops of their professions.
But that is not how dishonesty works. Over the past decade or so, my colleagues and I have taken a close look at why people cheat, using a variety of experiments and looking at a panoply of unique data sets—from insurance claims to employment histories to the treatment records of doctors and dentists. What we have found, in a nutshell: Everybody has the capacity to be dishonest, and almost everybody cheats—just by a little. Except for a few outliers at the top and bottom, the behavior of almost everyone is driven by two opposing motivations. On the one hand, we want to benefit from cheating and get as much money and glory as possible; on the other hand, we want to view ourselves as honest, honorable people. Sadly, it is this kind of small-scale mass cheating, not the high-profile cases, that is most corrosive to society. (...)
The results of these experiments should leave you wondering about the ways that we currently try to keep people honest. Does the prospect of heavy fines or increased enforcement really make someone less likely to cheat on their taxes, to fill out a fraudulent insurance claim, to recommend a bum investment or to steal from his or her company? It may have a small effect on our behavior, but it is probably going to be of little consequence when it comes up against the brute psychological force of "I'm only fudging a little" or "Everyone does it" or "It's for a greater good."
What, then—if anything—pushes people toward greater honesty?
There's a joke about a man who loses his bike outside his synagogue and goes to his rabbi for advice. "Next week come to services, sit in the front row," the rabbi tells the man, "and when we recite the Ten Commandments, turn around and look at the people behind you. When we get to 'Thou shalt not steal,' see who can't look you in the eyes. That's your guy." After the next service, the rabbi is curious to learn whether his advice panned out. "So, did it work?" he asks the man. "Like a charm," the man answers. "The moment we got to 'Thou shalt not commit adultery,' I remembered where I left my bike."
What this little joke suggests is that simply being reminded of moral codes has a significant effect on how we view our own behavior.
by Dan Ariely, WSJ | Read more:
"I was amazed at how quickly and easily this guy was able to open the door," Peter said. The locksmith told him that locks are on doors only to keep honest people honest. One percent of people will always be honest and never steal. Another 1% will always be dishonest and always try to pick your lock and steal your television; locks won't do much to protect you from the hardened thieves, who can get into your house if they really want to. The purpose of locks, the locksmith said, is to protect you from the 98% of mostly honest people who might be tempted to try your door if it had no lock.
We tend to think that people are either honest or dishonest. In the age of Bernie Madoff and Mark McGwire, James Frey and John Edwards, we like to believe that most people are virtuous, but a few bad apples spoil the bunch. If this were true, society might easily remedy its problems with cheating and dishonesty. Human-resources departments could screen for cheaters when hiring. Dishonest financial advisers or building contractors could be flagged quickly and shunned. Cheaters in sports and other arenas would be easy to spot before they rose to the tops of their professions.
But that is not how dishonesty works. Over the past decade or so, my colleagues and I have taken a close look at why people cheat, using a variety of experiments and looking at a panoply of unique data sets—from insurance claims to employment histories to the treatment records of doctors and dentists. What we have found, in a nutshell: Everybody has the capacity to be dishonest, and almost everybody cheats—just by a little. Except for a few outliers at the top and bottom, the behavior of almost everyone is driven by two opposing motivations. On the one hand, we want to benefit from cheating and get as much money and glory as possible; on the other hand, we want to view ourselves as honest, honorable people. Sadly, it is this kind of small-scale mass cheating, not the high-profile cases, that is most corrosive to society. (...)
The results of these experiments should leave you wondering about the ways that we currently try to keep people honest. Does the prospect of heavy fines or increased enforcement really make someone less likely to cheat on their taxes, to fill out a fraudulent insurance claim, to recommend a bum investment or to steal from his or her company? It may have a small effect on our behavior, but it is probably going to be of little consequence when it comes up against the brute psychological force of "I'm only fudging a little" or "Everyone does it" or "It's for a greater good."
What, then—if anything—pushes people toward greater honesty?
There's a joke about a man who loses his bike outside his synagogue and goes to his rabbi for advice. "Next week come to services, sit in the front row," the rabbi tells the man, "and when we recite the Ten Commandments, turn around and look at the people behind you. When we get to 'Thou shalt not steal,' see who can't look you in the eyes. That's your guy." After the next service, the rabbi is curious to learn whether his advice panned out. "So, did it work?" he asks the man. "Like a charm," the man answers. "The moment we got to 'Thou shalt not commit adultery,' I remembered where I left my bike."
What this little joke suggests is that simply being reminded of moral codes has a significant effect on how we view our own behavior.
by Dan Ariely, WSJ | Read more:
Johnny Tapia (Feb. 1967 - May 2012)
[ed. I don't follow boxing so didn't know of Mr. Tapia, but man, what a life.]
Johnny Tapia, a prizefighter who won world titles in three weight classes in a chaotic life that included jail, struggles with mental illness, suicide attempts and five times being declared clinically dead as a result of drug overdoses, was found dead at his home in Albuquerque on Sunday. He was 45.
The Albuquerque police said an autopsy would be done in the next few days. Foul play is not suspected.
Tapia, who was 5 feet 6 inches, said the raw fury he displayed in winning his world titles came from the horrific memory of seeing his mother being kidnapped and murdered when he was 8. He said he saw every opponent as his mother’s killer.
Less than a year after his mother’s death, he recounted, his uncles were making him fight older boys in matches they bet on. If he lost, they beat him, he said.
Tapia’s father had vanished before he was born, and Tapia had thought he was dead until he turned up in 2010 after being released from a federal penitentiary and DNA tests confirmed his paternity. The son slipped into a lifelong pattern of binging on cocaine and alcohol, struggling with bipolar disorder, and cycling in and out of jail and drug rehabilitation programs.
“Mi vida loca,” or my crazy life, were the words tattooed on his belly. He had made that his motto after he thought he had outgrown his first, “baby-faced assassin.”
by Douglas Martin, NY Times | Read more:
Photo: Jake Schoellkopf/Associated Press
Waking Up to Major Colonoscopy Bills
[ed. I think the take-away here is that medical billing is simply a starting point for negotiations between insurance companies, medical facilities and medical practitioners. The final payment will likely be significantly different than the original bill. Of course, along the way the patient gets caught in the middle - subject to exorbitant initial co-pays, bill collectors and other unpleasant surprises - and is the funding source of both first and last resort. What a system.]
Patients who undergo colonoscopy usually receive anesthesia of some sort in order to “sleep” through the procedure. But as one Long Island couple discovered recently, it can be a very expensive nap.
Both husband and wife selected gastroenterologists who participated in their insurance plan to perform their cancer screenings. But in both cases, the gastroenterologists chose full anesthesia with Propofol, a powerful drug that must be administered by an anesthesiologist, instead of moderate, or “conscious,” sedation, which gastroenterologists can often administer themselves.
And in both cases, the gastroenterologists were assisted in the procedure by anesthesiologists who were not covered by the couple’s insurance. They billed the couple’s insurance at rates far higher than any plan would reimburse — two to four times as high, experts say.
Now the couple, Lawrence LaRose and Susan LaMontagne, of Sag Harbor, N.Y., are fending off lawyers and a debt collection agency, and facing thousands of dollars in unresolved charges. All this for a cancer screening test that public health officials say every American should have by age 50, and repeat every 10 years, to save lives — and money.
“Doctors adopt practices that cost more, insurers pay less, and patients get stuck with a tab that in many cases is inflated and arbitrary,” said Ms. LaMontagne, whose communications firm, Public Interest Media Group, is focused on health care. “I work on health care access issues every day, so if I’m having a hard time sorting this out, what does that say for other consumers?”
by Roni Caryn Rabin, NY Times | Read more:
Illustration: Scott Menchin
Monday, May 28, 2012
Crazy for Crispy
At any run-of-the-mill Japanese restaurant in North America, the menu features such traditional items as tempura, tonkatsu, and kara-age chicken. This crispy trio has long had an important place in Japanese cuisine. But it is surprising to find out that all three are cultural borrowings, some dating back to time periods when Japan went to great lengths to isolate itself from foreign influences. The batter-frying tempura technique (used typically for vegetables and shrimp) was borrowed from Spanish and Portuguese missionaries and traders in the 15th and 16th centuries. Tonkatsu is a breaded pork cutlet, a version of the schnitzel from Germany and Central Europe, which was added to Japanese cuisine probably no later than the early part of the 20th century. Kara-age originally meant "Chinese frying" and refers to deep-frying foods that have been coated with corn starch.
In The Babbo Cookbook, the celebrity chef and restaurateur Mario Batali wrote, "The single word 'crispy' sells more food than a barrage of adjectives. ... There is something innately appealing about crispy food." If crispy food really is innately appealing, that might help explain why Japanese cuisine was so receptive to these particular "outside" foods. In turn, it is quite possible that crispy dishes such as tempura and tonkatsu were gateway foods for the worldwide acceptance of squishier Japanese delicacies, such as sushi. Tortilla chips, potato chips, French fries, fried chicken, and other crispy items may serve as the advance guard in the internationalization of eating throughout the developed (and developing) world. Crispy conquers cultural boundaries.
The hypothesis that crispy foods are innately appealing is a fascinating one. As an anthropologist interested in the evolution of cognition and the human diet, I think that maybe our attraction to crispy foods could give us insights into how people have evolved to think about the food that they eat.
Eating has been as critical to human survival as sociality, language, and sex and gender roles have, but it has not received much interest from evolutionary psychologists and other scientists interested in behavioral evolution. What we eat is, of course, shaped by culture, which influences the range of foods that are deemed edible and inedible in any given environment. But eating and food choices have also been shaped by millions of years of evolution, giving us a preference for certain tastes and textures, as well as a desire to eat more than we should when some foods are readily available.
by John S. Allen, The Chronicle Review | Read more:
Photo: iStock
The Things That Carried Him
The seven soldiers stood in a stiff line and fired three volleys each. This is a part of the ritual they practice again and again. The seven weapons should sound like one. When the shots are scattered — "popcorn," the soldiers call it — they've failed, and they will be mad at themselves for a long time after. On this day, with news cameras and hundreds of sets of sad eyes trained on them, they were perfect. After the final volley, Huber bent down and picked up his three polished shells from the grass.
Leatherbee wet his lips before he raised his trumpet. That was the first indication that he was a genuine bugler. There is such a shortage of buglers now — ushered in by a confluence of death, including waves of World War II and Korea veterans, the first ranks of aging Vietnam veterans, and the nearly four thousand men and women killed in Iraq — that the military has been forced to employ bands of make-believe musicians for the graveside playing of taps. They are usually ordinary soldiers who carry an electronic bugle; with the press of a button, a rendition of taps is broadcast out across fields and through trees. Taps is played without valve work, so only the small red light that shines out of the bell gives them away.
Now Leatherbee, using his lungs and his lips to control the pitch, played the first of twenty-four notes: G, G, C, G, C, E... Taps is not fast or technically difficult, and even if it were, most true Army buglers, like Leatherbee, are trained at the university level, possessing what the military calls a "civilian-acquired skill." They have each spent an additional six months in Norfolk, Virginia, for advanced work in calls. But there are still subtle differences that survive the efforts at regimentation — in embouchure, volume, and vibrato, and in how they taper the notes — and there is always the risk of a cracked note, whether due to cold or heat or the tightness that every bugler feels in his chest.
"You always run into the question," Leatherbee said later, "do I close my eyes, so that emotion won't be involved, or do I leave them open, so that more emotion will be in the sound? In my opinion, you can't close your eyes. There's a person in a casket in front of you. You want to give them as much as you can."
After Leatherbee lowered the trumpet from his lips, the six men who carried the casket to the burial vault returned to fold the flag. For some soldiers, that can be the hardest part. "Because you're right there," said one of the riflemen, Sergeant Chris Bastille. "You're maybe two feet from the family. And the younger the soldier is, the younger the family is."
"He had a few kids," Huber said.
First, the soldiers folded the flag twice lengthwise, with a slight offset at the top to ensure that the red and white would disappear within the blue. "Their hands were shaking," Dawson would remember later. "I could see that they were feeling it."
Then they made the first of thirteen triangular folds. Before the second fold, Huber took the three gleaming shells out of his pocket and pushed them inside the flag. No one would ever see them again — a flag well folded takes effort to pull apart — but he took pride in having polished them.
by Chris Jones, Esquire (May, 2008) | Read more:
The Beach Boys’ Crazy Summer
Brian Wilson, the lumbering savant who wrote, produced and sang an outlandish number of immortal pop songs back in the 1960s with his band, the Beach Boys, is swiveling in a chair, belly out, arms dangling, next to his faux-grand piano at the cavernous Burbank, Calif. studio where he and the rest of the group’s surviving members are rehearsing for their much-ballyhooed 50th Anniversary reunion tour, which is set to start in three days. At 24, Wilson shelved what would have been his most avant-garde album, Smile, and retreated for decades into a dusky haze of drug abuse and mental illness; now, 45 years later, he has reemerged, stable but still somewhat screwy, to give the whole sun-and-surf thing a final go.
Before that can happen, though, the reconstituted Beach Boys must learn how to sing “That’s Why God Made the Radio,” the first new A-side that Wilson has written for the band since 1980. They are not entirely happy about this. Earlier, I heard keyboardist Bruce Johnston, who replaced Wilson on the road in 1965, talking to the group’s tour manager about an upcoming satellite-radio gig. “Just so you know,” the manager said, “Sirius wants you to perform ‘That’s Why God Made the Radio’ tomorrow night.”
“Oh really?” Johnston responded. “And how are we going to do that when we don’t know it?”
And so the band has gathered, once again, around Wilson’s piano. I’d like to imagine that this is how it was when they first accustomed their vocal cords to, say, “California Girls.” Except it’s not, exactly: back then, in 1965, Wilson was the maestro, conducting each singer as his falsetto floated skyward and his fingers pecked out the accompaniment. Now he stares at a teleprompter and sings when he’s told to sing, ceding his bench to one member of the 10-man backing band that will buffer the Beach Boys in concert and looking on while another orchestrates the harmonies and handles the loftier notes. At first, the blend is rough: Wilson strains to hit the high point of the hook; frontman Mike Love and guitarist Al Jardine miss their cues. But after eight or nine passes the stray voices begin to mesh. They begin to sound like the Beach Boys. Close your eyes, shutting out Wilson’s swoosh of silver hair and Love’s four golden rings, and 1965 isn’t such a stretch.
Or it isn't until someone's iPhone rings. Jardine's. He turns away from the piano and presses the device to his ear. "I'm going to have to call you back, because--wait, what?" He hangs up, shaking his head. "Dick Clark just passed away," he says. The room begins to murmur; the makeup lady covers her mouth with her hand.
Over the next few minutes, I watch as each Beach Boy absorbs the news. Love makes light of it, pretending to strangle Jardine behind his back. “You’re next, Al,” he purrs. Johnston, a former A&R man at Columbia, pitches Clark’s death as an angle for my story. “It’s kind of ironic to have our television hero in music pass away while we’re doing this next big move,” he explains.
And then there’s Wilson—always the conduit, the live wire, the pulsing limbic system of the Beach Boys. As his biographer David Leaf once put it, “Brian Wilson's special magic in the early and mid-1960s was that he was at one with his audience ... Brian had a teenage heart, until it was broken.” At first, Wilson says nothing. Then I overhear him talking to Jardine.
“We're 70 fucking years old,” he says. “You'll be 70 in September. I'll be 70 in June. I'm worried about being 70.”
“It’s still a few months off,” Jardine says.
“That's true,” Wilson mutters. He pauses for a few seconds, looking away from his bandmate. “I want to know how did we get here?” he finally says. “How did we ever fucking get here? That's what I want to know.”
by Andrew Romano, The Daily Beast | Read more:
Photo: courtesy of Capitol Records Archive
Sunday, May 27, 2012
Jonathan Franzen: the path to Freedom
[ed. Fascinating glimpse into the life of an acclaimed writer, and the process of writing a great novel.]
I'm going to begin by addressing four unpleasant questions that novelists often get asked. These questions are apparently the price we have to pay for the pleasure of appearing in public. They're maddening not just because we hear them so often but also because, with one exception, they're difficult to answer and, therefore, very much worth asking.
The first of these perennial questions is: Who are your influences?
Sometimes the person asking this question merely wants some book recommendations, but all too often the question seems to be intended seriously. And part of what annoys me about it is that it's always asked in the present tense: who are my influences? The fact is, at this point in my life, I'm mostly influenced by my own past writing. If I were still labouring in the shadow of, say, EM Forster, I would certainly be at pains to pretend that I wasn't. According to Harold Bloom, whose clever theory of literary influence helped him make a career of distinguishing "weak" writers from "strong" writers, I wouldn't even be conscious of the degree to which I was still labouring in EM Forster's shadow. Only Harold Bloom would be fully conscious of that.
Direct influence makes sense only with very young writers, who, in the course of figuring out how to write, first try copying the styles and attitudes and methods of their favourite authors. I personally was very influenced, at the age of 21, by CS Lewis, Isaac Asimov, Louise Fitzhugh, Herbert Marcuse, PG Wodehouse, Karl Kraus, my then-fiancée, and The Dialectic of Enlightenment by Max Horkheimer and Theodor Adorno. For a while, in my early 20s, I put a lot of effort into copying the sentence rhythms and comic dialogue of Don DeLillo; I was also very taken with the strenuously vivid and all-knowing prose of Robert Coover and Thomas Pynchon. But to me these various "influences" seem not much more meaningful than the fact that, when I was 15, my favourite music group was the Moody Blues. A writer has to begin somewhere, but where exactly he or she begins is almost random.
It would be somewhat more meaningful to say that I was influenced by Franz Kafka. By this I mean that it was Kafka's novel The Trial, as taught by the best literature professor I ever had, that opened my eyes to the greatness of what literature can do, and made me want to try to create some myself. Kafka's brilliantly ambiguous rendering of Josef K, who is at once a sympathetic and unjustly persecuted Everyman and a self-pitying and guilt-denying criminal, was my portal to the possibilities of fiction as a vehicle of self-investigation: as a method of engagement with the difficulties and paradoxes of my own life. Kafka teaches us how to love ourselves even as we're being merciless toward ourselves; how to remain humane in the face of the most awful truths about ourselves. The stories that recognise people as they really are – the books whose characters are at once sympathetic subjects and dubious objects – are the ones capable of reaching across cultures and generations. This is why we still read Kafka.
The bigger problem with the question about influences, however, is that it seems to presuppose that young writers are lumps of soft clay on which certain great writers, dead or living, have indelibly left their mark. And what maddens the writer trying to answer the question honestly is that almost everything a writer has ever read leaves some kind of mark. To list every writer I've learned something from would take me hours, and it still wouldn't account for why some books matter to me so much more than other books: why, even now, when I'm working, I often think about The Brothers Karamazov and The Man Who Loved Children and never about Ulysses or To the Lighthouse. How did it happen that I did not learn anything from Joyce or Woolf, even though they're both obviously "strong" writers?
The common understanding of influence, whether Harold Bloomian or more conventional, is far too linear and one-directional. When I write, I don't feel like a craftsman influenced by earlier craftsmen who were themselves influenced by earlier craftsmen. I feel like a member of a single, large virtual community in which I have dynamic relationships with other members of the community, most of whom are no longer living. By means of what I write and how I write, I fight for my friends and I fight against my enemies. I want more readers to appreciate the glory of the 19th-century Russians; I'm indifferent to whether readers love James Joyce; and my work represents an active campaign against the values I dislike: sentimentality, weak narrative, overly lyrical prose, solipsism, self-indulgence, misogyny and other parochialisms, sterile game-playing, overt didacticism, moral simplicity, unnecessary difficulty, informational fetishes, and so on. Indeed, much of what might be called actual "influence" is negative: I don't want to be like this writer or that writer. (...)
The second perennial question is: What time of day do you work, and what do you write on?
by Jonathan Franzen, The Guardian | Read more:
The Self Illusion: An Interview With Bruce Hood
[ed. Jonah Lehrer interviews Bruce Hood, author of The Self Illusion, on the nature of self and what it means when we use that term.]
LEHRER: The title of The Self Illusion is literal. You argue that the self – this entity at the center of our personal universe – is actually just a story, a “constructed narrative.” Could you explain what you mean?
HOOD: The best stories make sense. They follow a logical path where one thing leads to another and provide the most relevant details and signposts along the way so that you get a sense of continuity and cohesion. This is what writers refer to as the narrative arc – a beginning, middle and an end. If a sequence of events does not follow a narrative, then it is incoherent and fragmented so does not have meaning. Our brains think in stories. The same is true for the self and I use a distinction that William James drew between the self as “I” and “me.” Our consciousness of the self in the here and now is the “I” and most of the time, we experience this as being an integrated and coherent individual – a bit like the character in the story. The self which we tell others about is autobiographical, or the “me,” which again is a coherent account of who we think we are based on past experiences, current events and aspirations for the future. (...)
LEHRER: If the self is an illusion, then why does it exist? Why do we bother telling a story about ourselves?
HOOD: For the same reason that our brains create a highly abstracted version of the world around us. It is bad enough that our brain is metabolically hogging most of our energy requirements, but it does this to reduce the workload to act. That’s the original reason why the brain evolved in the first place – to plan and control movements and keep track of the environment. It’s why living creatures that do not act or navigate around their environments do not have brains. So the brain generates maps and models on which to base current and future behaviors. Now the value of a map or a model is the extent to which it provides the most relevant useful information without overburdening you with too much detail.
The same can be said for the self. Whether it is the “I” of consciousness or the “me” of personal identity, both are summaries of the complex information that feeds into our consciousness. The self is an efficient way of having experience and interacting with the world. For example, imagine you ask me whether I would prefer vanilla or chocolate ice cream? I know I would like chocolate ice cream. Don’t ask me why, I just know. When I answer with chocolate, I have the seemingly obvious experience that my self made the decision. However, when you think about it, my decision covers a vast multitude of hidden processes, past experiences and cultural influences that would take too long to consider individually. Each one of them fed into that decision.
LEHRER: Let’s say the self is just a narrative. Who, then, is the narrator? Which part of me is writing the story that becomes me?
HOOD: This is the most interesting question and also the most difficult to answer because we are entering into the realms of consciousness. For example, only this morning as I was waking up, I was aware that I was gathering my thoughts together and I suddenly became fixated by this phrase, “gathering my thoughts.” I felt I could focus on my thoughts, turn them over in my mind and consider how I was able to do this. Who was doing the gathering and who was focusing? This was a compelling experience of the conscious self.
I would argue that while I had the very strong impression that I was gathering my thoughts together, you do have to question how did the thought to start this investigation begin? Certainly, most of us never bother to think about this, so I must have had an unconscious agenda that this would be an interesting exercise. Maybe it was your question that I read a few days ago or maybe this is a problem that has been ticking over in my brain for some time. It seemed like a story that I was playing out in my head to try and answer a question about how I was thinking. But unless you believe in a ghost in the machine, it is impossible to interrogate your own mind independently. In other words, the narrator and the audience are one and the same.
by Jonah Lehrer, Wired | Read more: