Wednesday, May 30, 2012

Designer Bugs

The prospect of artificial life is so outlandish that we rarely even mean the words. Most of the time we mean clever androids or computers that talk. Even the pages of science fiction typically stop short: in the popular dystopian narrative, robots are always taking over, erecting armies, firing death rays and sometimes even learning to love, but underneath their replicant skin, they tend to be made of iron ore. From the Terminator to the Matrix to the awakening of HAL, what preoccupies the modern imagination is the sentient evolution of machines, not artificial life itself.

But inside the laboratories of biotechnology, a more literal possibility is taking hold: What if machines really were alive? To some extent, this is already happening. Brewers and bakers have long relied on the diligence of yeast to make beer and bread, and in medical manufacturing, it has become routine to harness organisms like Penicillium to generate drugs. At DuPont, engineers are using modified E. coli to produce polyester for carpet, and the pharmaceutical giant Sanofi is using yeast injected with strips of synthetic DNA to manufacture medicine. But the possibility of designing a new organism, entirely from synthetic DNA, to produce whatever compounds we want, would mark a radical leap forward in biotechnology and a paradigm shift in manufacturing.

The appeal of biological machinery is manifold. For one thing, because organisms reproduce, they can generate not only their target product but also more factories to do the same. Then too, microbes use novel fuel. Chances are, unless you’ve slipped off the grid, virtually every machine you own, from your iPhone to your toaster oven, depends on burning fossil fuels to work. Even if you have slipped off the grid, manufacturing those devices required massive carbon emissions. This is not necessarily the case for biomachinery. A custom organism could produce the same plastic or metal as an industrial plant while feeding on the compounds in pollution or the energy of the sun.

Then there is the matter of yield. Over the last 60 years, agricultural production has boomed in large part through plant modification, chemical additives and irrigation. But as the world population continues to soar, adding nearly a billion people over the past decade, major aquifers are giving out, and agriculture may not be able to keep pace with the world’s needs. If a strain of algae could secrete high yields of protein, using less land and water than traditional crops, it may represent the best hope to feed a booming planet.

Finally, the rise of biomachinery could usher in an era of spot production. “Biology is the ultimate distributed manufacturing platform,” Drew Endy, an assistant professor at Stanford University, told me recently. Endy is trained as an engineer but has become a leading proponent of synthetic biology. He sketched a picture of what “distributed manufacturing” by microbes might look like: say a perfume company could design a bacterium to produce an appealing aroma; “rather than running this in a large-scale fermenter, they would upload the DNA sequences onto the future equivalent of iTunes,” he said. “People all over the world could then pay a fee to download the information.” Then, Endy explained, customers could simply synthesize the bugs at home and grow them on their skin. “They could transform epidermal ecosystems to have living production of scents and fragrances,” he said. “Living perfume!”

Whether all this could really happen — or should — depends on whom you ask. The challenge of building a synthetic bacterium from raw DNA is as byzantine as it probably sounds. It means taking four bottles of chemicals — the adenine, thymine, cytosine and guanine that make up DNA — and linking them into a daisy chain at least half a million units long, then inserting that molecule into a host cell and hoping it will spring to life as an organism that not only grows and reproduces but also manufactures exactly what its designer intended. (A line about hubris, Icarus and Frankenstein typically follows here.) Since the late 1990s, laboratories around the world have been experimenting with synthetic biology, but many scientists believe that it will take decades to see major change. “We’re still really early,” Endy said. “Or to say it differently, we’re still really bad.”

The geneticist J. Craig Venter disagrees. The future, he says, may be sooner than we think. Much of the groundwork is already done. In 2003, Venter’s lab used a new method to piece together a strip of DNA that was identical to a natural virus, then watched it spring to action and attack a cell. In 2008, they built a longer genome, replicating the DNA of a whole bacterium, and in 2010 they announced that they had brought a bacterium with synthetic DNA to life. That organism was still mostly a copy of one in nature, but as a flourish, Venter and his team wrote their names into its DNA, along with quotes from James Joyce and J. Robert Oppenheimer and even secret messages. As the bacteria reproduced, the quotes and messages and names remained in the colony’s DNA.

In theory, this leaves just one step between Venter and a custom species. If he can write something more useful than his name into the synthetic DNA of an organism, changing its genetic function in some deliberate way, he will have crossed the threshold to designer life.

Unless he already has.

by Wil S. Hylton, NY Times |  Read more: 
Photo: Brad Swonetz

Tuesday, May 29, 2012

Doc Watson (March 1923 - May 2012)

The Evolution of the American Dream

In the New York Times earlier this year, Paul Krugman wrote of an economic effect called "The Great Gatsby curve," a graph that measures income inequality against social mobility and shows that America's marked economic inequality means it has correspondingly low social mobility. In one sense this hardly seems newsworthy, but it is telling that even economists think that F Scott Fitzgerald's masterpiece offers the most resonant (and economical) shorthand for the problems of social mobility, economic inequality and class antagonism that we face today. Nietzsche – whose Genealogy of Morals Fitzgerald greatly admired – called the transformation of class resentment into a moral system "ressentiment"; in America, it is increasingly called the failure of the American dream, a failure now mapped by the "Gatsby curve".

Fitzgerald had much to say about the failure of this dream, and the fraudulences that sustain it – but his insights are not all contained within the economical pages of his greatest novel. Indeed, when Fitzgerald published The Great Gatsby in April 1925, the phrase "American dream" as we know it did not exist. Many now assume the phrase stretches back to the nation's founding, but "the American dream" was never used to describe a shared national value system until a popular 1917 novel called Susan Lenox: Her Fall and Rise, which remarked that "the fashion and home magazines … have prepared thousands of Americans … for the possible rise of fortune that is the universal American dream and hope." The OED lists this as the first recorded instance of the American dream, although it's not yet the catchphrase as we know it. That meaning is clearly emerging – but only as "possible" rise of fortune; a dream, not a promise. And as of 1917, at least some Americans were evidently beginning to recognise that consumerism and mass marketing were teaching them what to want, and that rises of fortune would be measured by the acquisition of status symbols. The phrase next appeared in print in a 1923 Vanity Fair article by Walter Lippmann, "Education and the White-Collar Class" (which Fitzgerald probably read); it warned that widening access to education was creating untenable economic pressure, as young people graduated with degrees only to find that insufficient white-collar jobs awaited. Instead of limiting access to education in order to keep such jobs the exclusive domain of the upper classes (a practice America had recently begun to justify by means of a controversial new idea called "intelligence tests"), Lippmann argued that Americans must decide that skilled labour was a proper vocation for educated people. There simply weren't enough white-collar jobs to go around, but "if education could be regarded not as a step ladder to a few special vocations, but as the key to the treasure house of life, we should not even have to consider the fatal proposal that higher education be confined to a small and selected class," a decision that would mark the "failure of the American dream" of universal education.

These two incipient instances of the phrase are both, in their different ways, uncannily prophetic; but as a catchphrase, the American dream did not explode into popular culture until the 1931 publication of a book called The Epic of America by James Truslow Adams, which spoke of "the American dream of a better, richer and happier life for all our citizens of every rank, which is the greatest contribution we have made to the thought and welfare of the world. That dream or hope has been present from the start. Ever since we became an independent nation, each generation has seen an uprising of ordinary Americans to save that dream from the forces that appear to be overwhelming it."

In the early years of the great depression Adams's book sparked a great national debate about the promise of America as a place that fosters "the genuine worth of each man or woman", whose efforts should be restricted by "no barriers beyond their own natures". Two years later, a New York Times article noted: "Get-rich-quick and gambling was the bane of our life before the smash"; they were also what caused the "smash" itself in 1929. By 1933, Adams was writing in the New York Times of the way the American dream had been hijacked: "Throughout our history, the pure gold of this vision has been heavily alloyed with the dross of materialistic aims. Not only did the wage scales and our standard of living seem to promise riches to the poor immigrant, but the extent and natural wealth of the continent awaiting exploitation offered to Americans of the older stocks such opportunities for rapid fortunes that the making of money and the enjoying of what money could buy too often became our ideal of a full and satisfying life. The struggle of each against all for the dazzling prizes destroyed in some measure both our private ideals and our sense of social obligation." As the Depression deepened, books such as Who Owns America? A New Declaration of Independence were arguing that "monopoly capitalism is morally ugly as well as economically unsound," that in America "the large majority should be able – in accordance with the tenets of the 'American dream' … to count on living in an atmosphere of equality, in a world which puts relatively few barriers between man and man." Part of the problem, however, was that the dream itself was being destroyed by "the friends of big business, who dishonour the dream by saying that it has been realised" already.

The phrase the American dream was first invented, in other words, to describe a failure, not a promise: or rather, a broken promise, a dream that was continually faltering beneath the rampant monopoly capitalism that set each struggling against all; and it is no coincidence that it was first popularised during the early years of the great depression. The impending failure had been clear to Fitzgerald by the time he finished Gatsby – and the fact that in 1925 most Americans were still recklessly chasing the dream had a great deal to do with the initial commercial and critical failure of The Great Gatsby, which would not be hailed as a masterpiece until the 50s, once hindsight had revealed its prophetic truth.

by Sarah Churchwell, The Guardian |  Read more:
Photograph: Courtesy Everett Collection/Rex Features

Why We Lie

Not too long ago, one of my students, named Peter, told me a story that captures rather nicely our society's misguided efforts to deal with dishonesty. One day, Peter locked himself out of his house. After a spell, the locksmith pulled up in his truck and picked the lock in about a minute.

"I was amazed at how quickly and easily this guy was able to open the door," Peter said. The locksmith told him that locks are on doors only to keep honest people honest. One percent of people will always be honest and never steal. Another 1% will always be dishonest and always try to pick your lock and steal your television; locks won't do much to protect you from the hardened thieves, who can get into your house if they really want to. The purpose of locks, the locksmith said, is to protect you from the 98% of mostly honest people who might be tempted to try your door if it had no lock.

We tend to think that people are either honest or dishonest. In the age of Bernie Madoff and Mark McGwire, James Frey and John Edwards, we like to believe that most people are virtuous, but a few bad apples spoil the bunch. If this were true, society might easily remedy its problems with cheating and dishonesty. Human-resources departments could screen for cheaters when hiring. Dishonest financial advisers or building contractors could be flagged quickly and shunned. Cheaters in sports and other arenas would be easy to spot before they rose to the tops of their professions.

But that is not how dishonesty works. Over the past decade or so, my colleagues and I have taken a close look at why people cheat, using a variety of experiments and looking at a panoply of unique data sets—from insurance claims to employment histories to the treatment records of doctors and dentists. What we have found, in a nutshell: Everybody has the capacity to be dishonest, and almost everybody cheats—just by a little. Except for a few outliers at the top and bottom, the behavior of almost everyone is driven by two opposing motivations. On the one hand, we want to benefit from cheating and get as much money and glory as possible; on the other hand, we want to view ourselves as honest, honorable people. Sadly, it is this kind of small-scale mass cheating, not the high-profile cases, that is most corrosive to society. (...)

The results of these experiments should leave you wondering about the ways that we currently try to keep people honest. Does the prospect of heavy fines or increased enforcement really make someone less likely to cheat on their taxes, to fill out a fraudulent insurance claim, to recommend a bum investment or to steal from his or her company? It may have a small effect on our behavior, but it is probably going to be of little consequence when it comes up against the brute psychological force of "I'm only fudging a little" or "Everyone does it" or "It's for a greater good."

What, then—if anything—pushes people toward greater honesty?

There's a joke about a man who loses his bike outside his synagogue and goes to his rabbi for advice. "Next week come to services, sit in the front row," the rabbi tells the man, "and when we recite the Ten Commandments, turn around and look at the people behind you. When we get to 'Thou shalt not steal,' see who can't look you in the eyes. That's your guy." After the next service, the rabbi is curious to learn whether his advice panned out. "So, did it work?" he asks the man. "Like a charm," the man answers. "The moment we got to 'Thou shalt not commit adultery,' I remembered where I left my bike."

What this little joke suggests is that simply being reminded of moral codes has a significant effect on how we view our own behavior.

 by Dan Ariely, WSJ |  Read more:

David Hockney, Afternoon Swimming, 1980
via:

Johnny Tapia (Feb. 1967 - May 2012)


[ed. I don't follow boxing so didn't know of Mr. Tapia, but man, what a life.]

Johnny Tapia, a prizefighter who won world titles in three weight classes in a chaotic life that included jail, struggles with mental illness, suicide attempts and five times being declared clinically dead as a result of drug overdoses, was found dead at his home in Albuquerque on Sunday. He was 45.

The Albuquerque police said an autopsy would be done in the next few days. Foul play is not suspected.

Tapia, who was 5 feet 6 inches, said the raw fury he displayed in winning his world titles came from the horrific memory of seeing his mother being kidnapped and murdered when he was 8. He said he saw every opponent as his mother’s killer.

Less than a year after his mother’s death, he recounted, his uncles were making him fight older boys in matches they bet on. If he lost, they beat him, he said.

Tapia’s father had vanished before he was born, and Tapia had thought he was dead until he turned up in 2010 after being released from a federal penitentiary and DNA tests confirmed his paternity. The son slipped into a lifelong pattern of binging on cocaine and alcohol, struggling with bipolar disorder, and cycling in and out of jail and drug rehabilitation programs.

“Mi vida loca,” or my crazy life, were the words tattooed on his belly. He had made that his motto after he thought he had outgrown his first, “baby-faced assassin.”

by Douglas Martin, NY Times |  Read more:
Photo: Jake Schoellkopf/Associated Press

Waking Up to Major Colonoscopy Bills


[ed. I think the take-away here is that medical billing is simply a starting point for negotiations between insurance companies, medical facilities and medical practitioners. The final payment will likely be significantly different than the original bill. Of course, along the way the patient gets caught in the middle - subject to exorbitant initial co-pays, bill collectors and other unpleasant surprises - and is the funding source of both first and last resort. What a system.]

Patients who undergo colonoscopy usually receive anesthesia of some sort in order to “sleep” through the procedure. But as one Long Island couple discovered recently, it can be a very expensive nap.

Both husband and wife selected gastroenterologists who participated in their insurance plan to perform their cancer screenings. But in both cases, the gastroenterologists chose full anesthesia with Propofol, a powerful drug that must be administered by an anesthesiologist, instead of moderate, or “conscious,” sedation that gastroenterologists can often administer themselves.

And in both cases, the gastroenterologists were assisted in the procedure by anesthesiologists who were not covered by the couple’s insurance. The anesthesiologists billed the couple’s insurance at rates far higher than any plan would reimburse — two to four times as high, experts say.

Now the couple, Lawrence LaRose and Susan LaMontagne, of Sag Harbor, N.Y., are fending off lawyers and a debt collection agency, and facing thousands of dollars in unresolved charges. All this for a cancer screening test that public health officials say every American should have by age 50, and repeat every 10 years, to save lives — and money.

“Doctors adopt practices that cost more, insurers pay less, and patients get stuck with a tab that in many cases is inflated and arbitrary,” said Ms. LaMontagne, whose communications firm, Public Interest Media Group, is focused on health care. “I work on health care access issues every day, so if I’m having a hard time sorting this out, what does that say for other consumers?”

by Roni Caryn Rabin, NY Times |  Read more:
Illustration: Scott Menchin

Monday, May 28, 2012


Steven Yazzi. Coyote Series.
via:

Crazy for Crispy

At any run-of-the-mill Japanese restaurant in North America, the menu features such traditional items as tempura, tonkatsu, and kara-age chicken. This crispy trio has long had an important place in Japanese cuisine. But it is surprising to find out that all three are cultural borrowings, some dating back to time periods when Japan went to great lengths to isolate itself from foreign influences. The batter-frying tempura technique (used typically for vegetables and shrimp) was borrowed from Spanish and Portuguese missionaries and traders in the 15th and 16th centuries. Tonkatsu is a breaded pork cutlet, a version of the schnitzel from Germany and Central Europe, which was added to Japanese cuisine probably no later than the early part of the 20th century. Kara-age originally meant "Chinese frying" and refers to deep-frying foods that have been coated with corn starch.

In The Babbo Cookbook, the celebrity chef and restaurateur Mario Batali wrote, "The single word 'crispy' sells more food than a barrage of adjectives. ... There is something innately appealing about crispy food." If crispy food really is innately appealing, that might help explain why Japanese cuisine was so receptive to these particular "outside" foods. In turn, it is quite possible that crispy dishes such as tempura and tonkatsu were gateway foods for the worldwide acceptance of squishier Japanese delicacies, such as sushi. Tortilla chips, potato chips, French fries, fried chicken, and other crispy items may serve as the advance guard in the internationalization of eating throughout the developed (and developing) world. Crispy conquers cultural boundaries.

The hypothesis that crispy foods are innately appealing is a fascinating one. As an anthropologist interested in the evolution of cognition and the human diet, I think that maybe our attraction to crispy foods could give us insights into how people have evolved to think about the food that they eat.

Eating has been as critical to human survival as sociality, language, and sex and gender roles have, but it has not received much interest from evolutionary psychologists and other scientists interested in behavioral evolution. What we eat is, of course, shaped by culture, which influences the range of foods that are deemed edible and inedible in any given environment. But eating and food choices have also been shaped by millions of years of evolution, giving us a preference for certain tastes and textures, as well as a desire to eat more than we should when some foods are readily available.

by John S. Allen, The Chronicle Review |  Read more:
Photo: iStock

The Things That Carried Him

The seven soldiers stood in a stiff line and fired three volleys each. This is a part of the ritual they practice again and again. The seven weapons should sound like one. When the shots are scattered — "popcorn," the soldiers call it — they've failed, and they will be mad at themselves for a long time after. On this day, with news cameras and hundreds of sets of sad eyes trained on them, they were perfect. After the final volley, Huber bent down and picked up his three polished shells from the grass.

Leatherbee wet his lips before he raised his trumpet. That was the first indication that he was a genuine bugler. There is such a shortage of buglers now — ushered in by a confluence of death, including waves of World War II and Korea veterans, the first ranks of aging Vietnam veterans, and the nearly four thousand men and women killed in Iraq — that the military has been forced to employ bands of make-believe musicians for the graveside playing of taps. They are usually ordinary soldiers who carry an electronic bugle; with the press of a button, a rendition of taps is broadcast out across fields and through trees. Taps is played without valve work, so only the small red light that shines out of the bell gives them away.

Now Leatherbee, using his lungs and his lips to control the pitch, played the first of twenty-four notes: G, G, C, G, C, E... Taps is not fast or technically difficult, and even if it were, most true Army buglers, like Leatherbee, are trained at the university level, possessing what the military calls a "civilian-acquired skill." They have each spent an additional six months in Norfolk, Virginia, for advanced work in calls. But there are still subtle differences that survive the efforts at regimentation — in embouchure, volume, and vibrato, and in how they taper the notes — and there is always the risk of a cracked note, whether due to cold or heat or the tightness that every bugler feels in his chest.

"You always run into the question," Leatherbee said later, "do I close my eyes, so that emotion won't be involved, or do I leave them open, so that more emotion will be in the sound? In my opinion, you can't close your eyes. There's a person in a casket in front of you. You want to give them as much as you can."

After Leatherbee lowered the trumpet from his lips, the six men who carried the casket to the burial vault returned to fold the flag. For some soldiers, that can be the hardest part. "Because you're right there," said one of the riflemen, Sergeant Chris Bastille. "You're maybe two feet from the family. And the younger the soldier is, the younger the family is."

"He had a few kids," Huber said.

First, the soldiers folded the flag twice lengthwise, with a slight offset at the top to ensure that the red and white would disappear within the blue. "Their hands were shaking," Dawson would remember later. "I could see that they were feeling it."

Then they made the first of thirteen triangular folds. Before the second fold, Huber took the three gleaming shells out of his pocket and pushed them inside the flag. No one would ever see them again — a flag well folded takes effort to pull apart — but he took pride in having polished them.

by Chris Jones, Esquire (May, 2008) |  Read more:

The Beach Boys’ Crazy Summer


Brian Wilson, the lumbering savant who wrote, produced and sang an outlandish number of immortal pop songs back in the 1960s with his band, the Beach Boys, is swiveling in a chair, belly out, arms dangling, next to his faux-grand piano at the cavernous Burbank, Calif. studio where he and the rest of the group’s surviving members are rehearsing for their much-ballyhooed 50th Anniversary reunion tour, which is set to start in three days. At 24, Wilson shelved what would have been his most avant-garde album, Smile, and retreated for decades into a dusky haze of drug abuse and mental illness; now, 45 years later, he has reemerged, stable but still somewhat screwy, to give the whole sun-and-surf thing a final go.

Before that can happen, though, the reconstituted Beach Boys must learn how to sing “That’s Why God Made the Radio,” the first new A-side that Wilson has written for the band since 1980. They are not entirely happy about this. Earlier, I heard keyboardist Bruce Johnston, who replaced Wilson on the road in 1965, talking to the group’s tour manager about an upcoming satellite-radio gig. “Just so you know,” the manager said, “Sirius wants you to perform ‘That’s Why God Made the Radio’ tomorrow night.”

“Oh really?” Johnston responded. “And how are we going to do that when we don’t know it?”

And so the band has gathered, once again, around Wilson’s piano. I’d like to imagine that this is how it was when they first accustomed their vocal cords to, say, “California Girls.” Except it’s not, exactly: back then, in 1965, Wilson was the maestro, conducting each singer as his falsetto floated skyward and his fingers pecked out the accompaniment. Now he stares at a teleprompter and sings when he’s told to sing, ceding his bench to one member of the 10-man backing band that will buffer the Beach Boys in concert and looking on while another orchestrates the harmonies and handles the loftier notes. At first, the blend is rough: Wilson strains to hit the high point of the hook; frontman Mike Love and guitarist Al Jardine miss their cues. But after eight or nine passes the stray voices begin to mesh. They begin to sound like the Beach Boys. Close your eyes, shutting out Wilson’s swoosh of silver hair and Love’s four golden rings, and 1965 isn’t such a stretch.

Or it isn't until someone's iPhone rings. Jardine's. He turns away from the piano and presses the device to his ear. "I'm going to have to call you back, because--wait, what?" He hangs up, shaking his head. "Dick Clark just passed away," he says. The room begins to murmur; the makeup lady covers her mouth with her hand.

Over the next few minutes, I watch as each Beach Boy absorbs the news. Love makes light of it, pretending to strangle Jardine behind his back. “You’re next, Al,” he purrs. Johnston, a former A&R man at Columbia, pitches Clark’s death as an angle for my story. “It’s kind of ironic to have our television hero in music pass away while we’re doing this next big move,” he explains.

And then there’s Wilson—always the conduit, the live wire, the pulsing limbic system of the Beach Boys. As his biographer David Leaf once put it, “Brian Wilson's special magic in the early and mid-1960s was that he was at one with his audience ... Brian had a teenage heart, until it was broken.” At first, Wilson says nothing. Then I overhear him talking to Jardine.

“We're 70 fucking years old,” he says. “You'll be 70 in September. I'll be 70 in June. I'm worried about being 70.”

“It’s still a few months off,” Jardine says.

“That's true,” Wilson mutters. He pauses for a few seconds, looking away from his bandmate. “I want to know how did we get here?” he finally says. “How did we ever fucking get here? That's what I want to know.”

by Andrew Romano, The Daily Beast |  Read more:
Photo: courtesy of Capitol Records Archive

Sunday, May 27, 2012


[ed. A special day. Congratulations Hil and Phil!]

Henri Rousseau, Le Chat Tigre. Oil on canvas, undated.
via:

Jonathan Franzen: the path to Freedom

[ed. Fascinating glimpse into the life of an acclaimed writer, and the process of writing a great novel.]

I'm going to begin by addressing four unpleasant questions that novelists often get asked. These questions are apparently the price we have to pay for the pleasure of appearing in public. They're maddening not just because we hear them so often but also because, with one exception, they're difficult to answer and, therefore, very much worth asking.

The first of these perennial questions is: Who are your influences?

Sometimes the person asking this question merely wants some book recommendations, but all too often the question seems to be intended seriously. And part of what annoys me about it is that it's always asked in the present tense: who are my influences? The fact is, at this point in my life, I'm mostly influenced by my own past writing. If I were still labouring in the shadow of, say, EM Forster, I would certainly be at pains to pretend that I wasn't. According to Harold Bloom, whose clever theory of literary influence helped him make a career of distinguishing "weak" writers from "strong" writers, I wouldn't even be conscious of the degree to which I was still labouring in EM Forster's shadow. Only Harold Bloom would be fully conscious of that.

Direct influence makes sense only with very young writers, who, in the course of figuring out how to write, first try copying the styles and attitudes and methods of their favourite authors. I personally was very influenced, at the age of 21, by CS Lewis, Isaac Asimov, Louise Fitzhugh, Herbert Marcuse, PG Wodehouse, Karl Kraus, my then-fiancée, and The Dialectic of Enlightenment by Max Horkheimer and Theodor Adorno. For a while, in my early 20s, I put a lot of effort into copying the sentence rhythms and comic dialogue of Don DeLillo; I was also very taken with the strenuously vivid and all-knowing prose of Robert Coover and Thomas Pynchon. But to me these various "influences" seem not much more meaningful than the fact that, when I was 15, my favourite music group was the Moody Blues. A writer has to begin somewhere, but where exactly he or she begins is almost random.

It would be somewhat more meaningful to say that I was influenced by Franz Kafka. By this I mean that it was Kafka's novel The Trial, as taught by the best literature professor I ever had, that opened my eyes to the greatness of what literature can do, and made me want to try to create some myself. Kafka's brilliantly ambiguous rendering of Josef K, who is at once a sympathetic and unjustly persecuted Everyman and a self-pitying and guilt-denying criminal, was my portal to the possibilities of fiction as a vehicle of self-investigation: as a method of engagement with the difficulties and paradoxes of my own life. Kafka teaches us how to love ourselves even as we're being merciless toward ourselves; how to remain humane in the face of the most awful truths about ourselves. The stories that recognise people as they really are – the books whose characters are at once sympathetic subjects and dubious objects – are the ones capable of reaching across cultures and generations. This is why we still read Kafka.

The bigger problem with the question about influences, however, is that it seems to presuppose that young writers are lumps of soft clay on which certain great writers, dead or living, have indelibly left their mark. And what maddens the writer trying to answer the question honestly is that almost everything a writer has ever read leaves some kind of mark. To list every writer I've learned something from would take me hours, and it still wouldn't account for why some books matter to me so much more than other books: why, even now, when I'm working, I often think about The Brothers Karamazov and The Man Who Loved Children and never about Ulysses or To the Lighthouse. How did it happen that I did not learn anything from Joyce or Woolf, even though they're both obviously "strong" writers?

The common understanding of influence, whether Harold Bloomian or more conventional, is far too linear and one-directional. When I write, I don't feel like a craftsman influenced by earlier craftsmen who were themselves influenced by earlier craftsmen. I feel like a member of a single, large virtual community in which I have dynamic relationships with other members of the community, most of whom are no longer living. By means of what I write and how I write, I fight for my friends and I fight against my enemies. I want more readers to appreciate the glory of the 19th-century Russians; I'm indifferent to whether readers love James Joyce; and my work represents an active campaign against the values I dislike: sentimentality, weak narrative, overly lyrical prose, solipsism, self-indulgence, misogyny and other parochialisms, sterile game-playing, overt didacticism, moral simplicity, unnecessary difficulty, informational fetishes, and so on. Indeed, much of what might be called actual "influence" is negative: I don't want to be like this writer or that writer. (...)

The second perennial question is: What time of day do you work, and what do you write on?

by Jonathan Franzen, The Guardian | Read more:

U n’ Me by Scott Westmoreland

The Self Illusion: An Interview With Bruce Hood

[ed. Jonah Lehrer interviews Bruce Hood, author of The Self Illusion, on the nature of the self and what it means when we use that term.]

LEHRER: The title of The Self Illusion is literal. You argue that the self – this entity at the center of our personal universe – is actually just a story, a “constructed narrative.” Could you explain what you mean?

HOOD: The best stories make sense. They follow a logical path where one thing leads to another and provide the most relevant details and signposts along the way so that you get a sense of continuity and cohesion. This is what writers refer to as the narrative arc – a beginning, middle and an end. If a sequence of events does not follow a narrative, then it is incoherent and fragmented, and so does not have meaning. Our brains think in stories. The same is true for the self, and I use a distinction that William James drew between the self as “I” and “me.” Our consciousness of the self in the here and now is the “I,” and most of the time we experience this as being an integrated and coherent individual – a bit like the character in the story. The self we tell others about is autobiographical, or the “me,” which again is a coherent account of who we think we are based on past experiences, current events and aspirations for the future. (...)

LEHRER: If the self is an illusion, then why does it exist? Why do we bother telling a story about ourselves?

HOOD: For the same reason that our brains create a highly abstracted version of the world around us. It is bad enough that our brain is metabolically hogging most of our energy requirements, but it does this to reduce the workload to act. That’s the original reason why the brain evolved in the first place – to plan and control movements and keep track of the environment. It’s why living creatures that do not act or navigate around their environments do not have brains. So the brain generates maps and models on which to base current and future behaviors. Now the value of a map or a model is the extent to which it provides the most relevant useful information without overburdening you with too much detail.

The same can be said for the self. Whether it is the “I” of consciousness or the “me” of personal identity, both are summaries of the complex information that feeds into our consciousness. The self is an efficient way of having experience and interacting with the world. For example, imagine you ask me whether I would prefer vanilla or chocolate ice cream. I know I would like chocolate ice cream. Don’t ask me why, I just know. When I answer with chocolate, I have the seemingly obvious experience that my self made the decision. However, when you think about it, my decision covers a vast multitude of hidden processes, past experiences and cultural influences that would take too long to consider individually. Each one of them fed into that decision.

LEHRER: Let’s say the self is just a narrative. Who, then, is the narrator? Which part of me is writing the story that becomes me?

HOOD: This is the most interesting question and also the most difficult to answer because we are entering into the realms of consciousness. For example, only this morning as I was waking up, I was aware that I was gathering my thoughts together and I suddenly became fixated by this phrase, “gathering my thoughts.” I felt I could focus on my thoughts, turn them over in my mind and consider how I was able to do this. Who was doing the gathering and who was focusing? This was a compelling experience of the conscious self.

I would argue that while I had the very strong impression that I was gathering my thoughts together, you do have to question how the thought to start this investigation began. Certainly, most of us never bother to think about this, so I must have had an unconscious agenda that this would be an interesting exercise. Maybe it was your question that I read a few days ago, or maybe this is a problem that has been ticking over in my brain for some time. It seemed like a story that I was playing out in my head to try and answer a question about how I was thinking. But unless you believe in a ghost in the machine, it is impossible to interrogate your own mind independently. In other words, the narrator and the audience are one and the same.

by Jonah Lehrer, Wired |  Read more: