Tuesday, June 7, 2016

Barbarian Days: A Surfing Life

[ed. I can't recommend this book highly enough (it won the 2016 Pulitzer Prize for autobiography). See also: this excerpt - Off Diamond Head.] 

In his new book, the New Yorker staff writer and veteran war reporter William Finnegan demonstrates the advantages of keeping meticulous mental maps. For him, memorizing a place is a matter of nostalgia, of metaphysical well-being, but also of life and death. Finnegan’s memoir is not about his professional life reporting on blood-soaked Sudan or Bosnia or Nicaragua; it’s about the “disabling enchantment” that is his lifelong hobby.

“The close, painstaking study of a tiny patch of coast, every eddy and angle, even down to individual rocks, and in every combination of tide and wind and swell…is the basic occupation of surfers at their local break,” he writes in Barbarian Days: A Surfing Life. Surfers, like children, naturally develop sensory affinity for their surroundings: they can detect minor changes in the smell of the sea, track daily the rise and fall of sandbars, are grateful for particularly sturdy roots onto which they can grab when scurrying down bluffs. The environment becomes an almost anatomical extension of them, mostly because it has to.

Unlike football or baseball or even boxing, surfing is a literarily impoverished sport. The reasons for this are practical. It’s not a spectator sport: it is hard to see surfers from the shore. That the best waves are seldom anywhere near civilization makes it an activity especially resistant to journalism, and first-rate writing by surfers is also rare: the impulse to surf—a “special brand of monomania,” Finnegan calls it—is at direct odds with the indoor obligations of writing.

In many ways this is true for any athletic activity—that its very best practitioners will very seldom be the same people who document it—but it’s particularly true of surfing, which demands more traveling, logistical planning, and waiting around than any other athletic endeavor. Even when surfers are not surfing, they’re thinking about it: listening to buoy reports, peering off cliffs with binoculars, preventing themselves from buying new boards.

Though middle school students have worn surfwear-branded clothing for decades now and surfing has become increasingly popular among the billionaires of Santa Clara County, it remains an elusive pastime in the minds of most everyone who has never done it. The reasons for this too are practical. Appropriate beaches are rare; high schools don’t have teams; and while not as prohibitively expensive as skiing, surfing requires roughly the same amount of cumbersome gear and is, if possible, even more physically uncomfortable. There are the damp, mildewed wetsuits; the feet cut by coral; the sunburns and the salt-stung eyes. Pair all this with the specter of Jeff Spicoli (the surfer and pot smoker played by Sean Penn in the 1982 film Fast Times at Ridgemont High) and the easy-to-imitate accent, and you have a hobby that is easy to mock, if not ignore. It’s certainly not a pastime anyone associates with ambition or mental agility.

Which is precisely what makes the propulsive precision of Finnegan’s writing so surprising and revelatory. For over half a century at this point, readers have taken it as a given (and writers as a professional prerogative) that lowbrow culture is deserving of bookish analysis. But unlike so many writhing attempts to extort meaning from topics that seem intellectually bankrupt, Finnegan’s treatment of surfing never feels like performance. Through the sheer intensity of his descriptive powers and the undeniable ways in which surfing has shaped his life, Barbarian Days is an utterly convincing study in the joy of treating seriously an unserious thing.

“Getting a spot wired—truly understanding it—can take years,” Finnegan writes, continuing, later, to say that “all surfers are oceanographers.” Over the course of a life spent in and out of the water, he has amassed a truly staggering amount of applied knowledge, of marine biology and carpentry and cartography. Surfing requires kinetic intuition, physical fitness, and courage in the face of an indifferent force, but it also demands the sort of mental work we don’t typically associate with extreme sports. Any good writing about an underexamined way of life must be, at least at times, expository, and Finnegan is lucid when it comes to the necessary task of explaining to the uninitiated some of the most basic tenets of surfing: why waves break where they do; how it’s possible to stand on a floating piece of fiberglass, go into a moving tube of water, and emerge looking just as you did upon entry. But despite all this, surfing, as Finnegan renders it, is more than just a fun physical activity: it’s a way of being in the world, with its own private politics and etiquette and benchmarks of success. (...)

It can also be a backdrop to a unique brand of companionship—among men, specifically, which, unlike female friendship, is not often tackled in books. The adoration with which Finnegan writes about his fellow surfers is of a sort usually seen only in soldiers’ memoirs. Bill was “aggressively relaxed—the essential California oxymoron.” Glenn “moved with unusual elegance.” Finnegan writes that “chasing waves remains for me a proximate cause of vivid friendships.” That they are often forged despite unpleasant obstacles (tropical diseases, food poisoning, near drowning) adds a kinetic dimension to what would otherwise be merely interior, emotional dynamics. And as Finnegan admits, “male egos were always subtly, or otherwise, on the line.”

Reputations are made and maintained in the ocean, but they’re premised on more than just talent. Seniority, humility, pain tolerance, and a hundred other factors contribute to a surfer’s local eminence. Speech patterns are just one of the outward signs of the insular social order that attends the sport. Surfers speak in a vivid vernacular: a mix of esoteric oceanographic detail and play-by-play narration expressed in slang. Finnegan is, of course, fluent—in it but also in the language of literature. The adjectives he attributes to waves are alternately the kind one might find in a contemporary novel—hideous, boiling, miraculous, malignant, mechanical—and the sort overheard in rusted-out pickup trucks—rifling, peaky, shifty, hairy, meaty, stupid. On the elemental aspects of surfing, Finnegan is especially capable of coming up with phrases that are at once poetic and concrete. Though “surfers have a perfection fetish…. Waves are not stationary objects in nature like roses or diamonds.” They are, instead, at once “the object of your deepest desire and adoration” but also “your adversary, your nemesis, even your mortal enemy.” Riding them is “the theoretical solution to an impossibly complex problem.”

by Alice Gregory, NY Review of Books |  Read more:
Image: Donald Miralle/Getty Images/NY Times

Listening to Speech Has Remarkable Effects On a Baby’s Brain

Imagine how an infant, looking out from her crib or her father’s arms, might see the world. Does she experience a kaleidoscope of shadowy figures looming in and out of focus, and a melange of sounds wafting in and out of hearing?

In his Principles of Psychology (1890), William James imagined the infant’s world as ‘one great blooming, buzzing confusion’. But today, we know that even very young infants have already begun to make sense of their world. They integrate sights and sounds, recognise the people who care for them, and even expect that people and other animate objects – but not inert objects – can move on their own.

Very young infants also tune in to the natural melodies carried in the lilting stream of language. These melodies are especially compelling in ‘motherese’, the singsong patterns that we tend to adopt spontaneously when we speak to infants and young children. Gradually, as infants begin to tease out distinct words and phrases, they tune in not only to the melody, but also to the meaning of the message.

Once infants utter their first words, typically at around their first birthdays, we can be sure that they have begun to harness the sounds of language to meaning. In my own family, after nearly a year of guessing what my daughters’ babbles might mean, their first words – datoo (bottle), Gaja (Roger, a beloved dog), uppie (a plea for someone to pick her up) – assured me, in a heartbeat, that they do speak my language!

In all cultures, babies’ first words are greeted with special joy. This joy is testimony to the power of language – a signature of our species and our most powerful cultural and cognitive convention. Language permits us to share the contents of our hearts and minds, in ways that are unparalleled elsewhere in the animal kingdom. It is the conduit through which we learn from and about others, across generations and across cultures.

But how, and when, do infants begin to link language to meaning?

We know that the path of language acquisition begins long before infants charm us with their first words. From the beginning, infants are listening, and they clearly prefer some sounds over others. How could we possibly know this? Newborn infants can’t point to what they like or crawl away from what they don’t. But when infants’ interest is captured by a particular sight or sound, they will suck rapidly and vigorously on a pacifier.

Using rates of sucking as a metric, infancy researchers have discovered that, at birth, infants prefer hearing the vocalisations of humans and non-human primates. Then, within months, they narrow their preference specifically to human vocalisations. And toward the end of their first year, infants become ‘native listeners’, homing in with increasing precision on the particular sounds of their own native language.

So, the early preference of newborns for listening to language sets the stage for them to zero in on their own native language sounds and to discover its words and syntax. But only recently did we discover that listening to language benefits more than language acquisition alone. It also boosts infants’ cognition.

by Sandra Waxman, Aeon |  Read more:
Image: markk

Socks

In Anna Karenina, the day after the fateful ball, resolved to forget Vronsky and resume her peaceful life with her son and husband (“my life will go on in the old way, all nice and as usual”), Anna settles herself in her compartment in the overnight train from Moscow to St. Petersburg, and takes out an uncut English novel, probably one by Trollope judging from references to fox hunting and Parliament. Tolstoy, of course, says nothing about a translation—educated Russians knew English as well as French. In contrast, very few educated English speakers have read the Russian classics in the original and, until recent years, they have largely depended on two translations, one by the Englishwoman Constance Garnett and the other by the English couple Louise and Aylmer Maude, made respectively in 1901 and 1912. The distinguished Slavic scholar and teacher Gary Saul Morson once wrote about the former:
I love Constance Garnett, and wish I had a framed picture of her on my wall, since I have often thought that what I do for a living is teach the Collected Works of Constance Garnett. She has a fine sense of English, and, especially, the sort of English that appears in British fiction of the realist period, which makes her ideal for translating the Russian masterpieces. Tolstoy and Dostoevsky were constantly reading and learning from Dickens, Trollope, George Eliot and others. Every time someone else redoes one of these works, reviewers say that the new version replaces Garnett; and then another version comes out, which, apparently, replaces Garnett again, and so on. She must have done something right.
Morson wrote these words in 1997, and would recall them bitterly. Since that time a sort of asteroid has hit the safe world of Russian literature in English translation. A couple named Richard Pevear and Larissa Volokhonsky have established an industry of taking everything they can get their hands on written in Russian and putting it into flat, awkward English. Surprisingly, these translations, far from being rejected by the critical establishment, have been embraced by it and have all but replaced Garnett, Maude, and other of the older translations. When you go to a bookstore to buy a work by Tolstoy, Dostoevsky, Gogol, or Chekhov, most of what you find is in translation by Pevear and Volokhonsky.

In an article in the July/August 2010 issue of Commentary entitled “The Pevearsion of Russian Literature,” Morson used the word “tragedy” to express his sense of the disaster that has befallen Russian literature in English translation since the P&V translations began to appear. To Morson “these are Potemkin translations—apparently definitive but actually flat and fake on closer inspection.” Morson fears that “if students and more-general readers choose P&V…[they] are likely to presume that whatever made so many regard Russian literature with awe has gone stale with time or is lost to them.”

In the summer of 2015 an interview with the rich and happy couple appeared in The Paris Review. The interviewer—referring to a comment Pevear had made to David Remnick in 2005—asked him: “You once said that one of your subliminal aims as a translator was ‘to help energize English itself.’ Can you explain what you mean?” Pevear was glad to do so:
It seemed to me that American fiction had become very bland and mostly self-centered. I thought it needed to break out of that. One thing I love about translating is the possibility it gives me to do things that you might not ordinarily do in English. I think it’s a very important part of translating. The good effect of translating is this cross-pollination of languages. Sometimes we get criticized—this is too literal, this is a Russianism—but I don’t mind that. Let’s have a little Russianism. Let’s use things like inversions. Why should they be eliminated? I guess if you’re a contemporary writer, you’re not supposed to do it, but as a translator I can. I love this freedom of movement between the two languages. I think it’s the most important thing for me—that it should enrich my language, the English language.
This bizarre idea of the translator’s task only strengthens one’s sense of the difficulty teachers of Russian literature in translation face when their students are forced to read the Russian classics in Pevear’s “energized” English. I first heard of P&V in 2007 when I received an e-mail from the writer Anna Shapiro:
I finished the Pevear/Volokhonsky translation of Anna Karenina a few weeks ago and I’m still more or less stewing about it. It leaves such a bad taste; it’s so wrong, and so oddly wrong, turning nourishment into wood. I wouldn’t have thought it possible. I’ve always maintained that Tolstoy was unruinable, because he’s such a simple writer, words piled like bricks, that it couldn’t matter; that he’s a transparent writer, so you can’t really get the flavor wrong, because in many ways he tries to have none. But they have, they’ve added some bad flavor, whereas even when Garnett makes sentences like “Vronsky eschewed farinaceous foods” it does no harm…. I imagine Pevear thinking he’s CORRECTING Tolstoy; that he’s really the much better writer.
When I leafed through the P&V translation of Anna Karenina I understood what Anna Shapiro was stewing about. The contrast to Garnett glared out at me. Garnett’s fine English, her urgent forward-moving sentences, her feeling for words—all this was gone, replaced by writing that is like singing or piano playing by someone who is not musical. For example:
Garnett: All his efforts to draw her into open discussion she confronted with a barrier that he could not penetrate, made up of a sort of amused perplexity.

P&V: To all his attempts at drawing her into an explanation she opposed the impenetrable wall of some cheerful perplexity.
Or:
Garnett: After taking leave of her guests, Anna did not sit down, but began walking up and down the room. She had unconsciously the whole evening done her utmost to arouse in Levin a feeling of love—as of late she had fallen into doing with all young men—and she knew she had attained her aim, as far as was possible in one evening, with a married and honorable man. She liked him very much, and, in spite of the striking difference, from the masculine point of view, between Vronsky and Levin, as a woman she saw something they had in common, which had made Kitty able to love both. Yet as soon as he was out of the room, she ceased to think of him.

P&V: After seeing her guests off, Anna began pacing up and down the room without sitting down. Though for the whole evening (lately she had acted the same way towards all young men) she had unconsciously done everything she could to arouse a feeling of love for her in Levin, and though she knew that she had succeeded in it, as far as one could with regard to an honest, married man in one evening, and though she liked him very much (despite the sharp contrast, from a man’s point of view, between Levin and Vronsky, as a woman she saw what they had in common, for which, too, Kitty had loved them both), as soon as he left the room, she stopped thinking about him. (...)
Another argument for putting Tolstoy into awkward contemporary-sounding English has been advanced by Pevear and Volokhonsky, and, more recently, by Marian Schwartz, namely that Tolstoy himself wrote in awkward Russian and that when we read Garnett or Maude we are not reading the true Tolstoy. Arguably, Schwartz’s attempt to “re-create Tolstoy’s style in English” surpasses P&V’s in ungainliness. Schwartz actually ruins one of the most moving scenes in the novel—when Kitty, fending off her sister’s attempt to comfort her for Vronsky’s rejection, lashes out and reminds her of her degraded position vis-à-vis the womanizing Stiva. After the outburst the sisters sit in silence. In Garnett’s version:
The silence lasted for a minute or two. Dolly was thinking of herself. That humiliation of which she was always conscious came back to her with a peculiar bitterness when her sister reminded her of it. She had not expected such cruelty from her sister, and she was angry with her. But suddenly she heard the rustle of a skirt, and with it the sound of heart-rending, smothered sobbing, and felt arms about her neck. 
Schwartz writes:

The silence lasted for a couple of minutes. Dolly was thinking about herself. Her humiliation, which was always with her, told especially painfully in her when her sister mentioned it. She had not anticipated such cruelty from her sister, and she was angry with her. Suddenly, however, she heard a dress and instead of the sound of sobs that had been held back too long, someone’s hands embracing her around the neck from below.
by Janet Malcolm, NY Review of Books |  Read more:
Image: Photofest, Vivien Leigh in Julien Duvivier’s adaptation of Anna Karenina, 1948

Sushi Robots and Vending-Machine Pizza Will Reinvent the Automat

Decades from now, historians may look back on 2016 as the year Earthlings ate pizza from vending machines, bought burritos from a box in New York’s Grand Central Terminal and devoured sushi rolled by robots.

“Automation is coming whether we want it to come or not,” said Andy Puzder, chief executive officer of CKE Restaurants Inc., which owns the Hardee’s and Carl’s Jr. fast-food chains. “It’s everywhere. It’s in everything.”

At a time when more consumers are embracing hand-made artisanal foods, 24/7 Pizza Box, Burritobox and Sushi Station are headed in the other direction. Vending-machine pizza will start popping up in Florida later this year and chipotle-chicken burritos, accompanied by guacamole and salsa, can now be ordered from an automated box. Sushi-making robots from Japan are already operating in U.S. restaurants and university cafeterias.

Vending machines are a $7.52 billion business that’s growing in the U.S., according to researcher IBISWorld Inc. Sales rose 3.3 percent last year and are expected to gain 1.8 percent a year, on average, through 2020. But most have nothing to do with freshly cooked food. The leaders are Outerwall Inc., which dispenses movies through Redbox, and Compass Group Plc, which sells snacks.

Millennials, accustomed to apps and online services such as Uber Technologies Inc., Amazon.com Inc. and GrubHub Inc., increasingly don’t want to interact with other humans when ordering dinner, calling a cab or stocking up on toilet paper. That’s why eateries including McDonald’s Corp., Panera Bread Co. and CKE Restaurants are investing in kiosks and tablets so customers can also feed their misanthropy. (...)

For those who may think eating lunch out of a vending machine is gross, Koci said he understands.

“I get it. But this is not a vending machine, it’s an automated restaurant,” he said. “There are real humans making the burritos. Everything is handmade.”

No, those humans are not super-small and no, they don’t toil in the machines. The burritos are made in kitchens that also supply restaurants, sometimes flash-frozen, and then shipped to the boxes. They’re defrosted before going into the machines. An employee checks the boxes once a day to make sure there’s fresh inventory.

The vending machines harken back to the Automat, a 20th-century fast-food restaurant that featured cubbyholes with food items behind glass doors. Put coins in a slot and the door would open for a gratuity-free snack or meal.

The bright orange Burritoboxes are higher tech. They have a touch screen, mobile-phone charging station and live-chat customer service in case there’s an issue. It takes about 90 seconds to heat a complete meal, including Cinnabon-brand gooey bites for dessert. Customers can watch music videos on the touch screen while waiting.

by Leslie Patton, Bloomberg | Read more:
Image: 24/7 Pizza Box

How the Sense of an Ending Shapes Memory

Many years ago, I listened to a string quartet perform a challenging piece of contemporary music. The piece, we were told, represented a journey of suffering and redemption. It would descend into discordant screeching for nearly 20 minutes before finally resolving harmoniously. The small concert hall was packed — there were even people seated on stage behind the performers — so there was little choice but to stick it out.

Everything unfolded as promised. The performance sounded like a succession of cats being tossed into a food processor. Eventually, though, the dissonance became resonance, the chaos became calm. It was beautiful.

But then came a sound that had not been in the score; the electronic peal of a mobile phone rang out across the tranquil auditorium. To make matters worse, the beeping arpeggios were emerging from the pocket of an audience member who was sitting on the stage. He was so close to the performers that he could easily have been downed by a solid backhand swing with the viola. It must have been tempting.

The music had been ruined. But it’s curious that 20 minutes of listening can be redeemed or destroyed by what happened in a few moments at the conclusion.

Daniel Kahneman, psychologist and Nobel laureate, tells a similar story about a man enraptured by a symphony recording that is ruined by a hideous screech — a scratch on the vinyl — in the final moments.

“But the experience was not actually ruined,” writes Kahneman, “only the memory of it.” After all, both concerts were almost complete when interrupted. The lived experience had been unblemished until the final moments. The remembered experience was awful.

When we recall things — a concert, a holiday, a bout of flu — we do not play out the recollection minute by minute like a movie in our minds. Instead, we tell ourselves a little story about what happened. And these stories have their own logic in which the order of events makes a difference. (...)

Of course, it is no coincidence that the best bit of the music was at the finale: composers, like novelists and film directors, try to end on a high.

Restaurants keen to manipulate their online reviews have discovered a similar trick: twice recently I’ve dined at restaurants in unfamiliar towns that were highly rated on TripAdvisor. Both times, the food was good but unremarkable. Both times, the proprietor pressed gifts upon us as we left — a free glass of grappa, a nice corkscrew. It seems that when people thought back and wrote their reviews, they remembered this pleasant send-off. That makes sense: if you want people to remember you fondly, it’s best to engineer things so that the last thing they remember of you is something other than signing a bill.

by Tim Harford, Undercover Economist |  Read more:
Image: via:

Monday, June 6, 2016

The Art of Pivoting

It’s June 2016 and I’m packing my bags to move back to Germany after 12 years of academic research at the University of Cambridge and surrounding institutes, like the famous MRC Laboratory of Molecular Biology, forge of Nobel Prizes and home to eminent scientists like Watson & Crick, Sanger, Perutz, the ones you know from Jeopardy or biochemistry textbooks. I had come from a Max-Planck-Institute in Germany, where I had previously completed a life science PhD in slightly under three years. When I started my degree there in 2001, I had been the fastest student to fulfil the requirements for the Diplom in biology at my home university — and already had two peer-reviewed publications in my pocket. You may see the trajectory: success, efficiency, coming from good places, going to good places; the basic ingredients for a successful academic career.

My wife and I had moved to Cambridge in 2004 to both do a brief postdoc abroad. Spice up the CV a bit, meet interesting people before settling down with a normal job back in the home country, that sort of stuff. The work I did was advanced and using technology not available to many people in Europe outside Cambridge at the time, but not revolutionary. However, combining experimental molecular biology and computational analysis of large biological datasets had just seen its first great successes, and I was a man in demand with my coding skills. Publications are the number one currency to climb the academic ladder and, by 2007, I had accumulated enough credit both in terms of scientific output as well as reputation in the field that I seriously considered an academic career for life.

Here, it may need to be explained to everyone who hasn’t spent time in academia why “seriously considered” is the appropriate phrase. It was a conscious decision for the long game. It’s the Tour de France or Iron Man of a career. You have to believe that you can do it and secure a position against all odds and fierce competition. You have to be in it to win it. Chances are that you’re not going to make it, a fear that’s constantly present, but there’s normally no one you know whom you could ask what life on the other side looks like, because failed academics (an arrogant view I held myself for a long time about those who don’t make it) tend to disappear, ashamed and silent. Or get normal, unglorious jobs. According to my wife, who left academia when our second child was on the way, you got to be “stupid enough to commit to that”, given that academic salaries are poor compared even to entry-level industry positions, the workload is bigger, quite similar to that of running a start-up, and so-called academic freedom is these days reduced to framing your interests in terms of what funding bodies consider worth supporting.

Speaking of start-ups: 90% of start-ups fail. The 10% that survive still represent a slightly better success rate than merely getting into the game that allows you to fight for a permanent academic position in the first place. In my cohort of Royal Society University Research Fellows, the success rate of obtaining a salary whilst building up a team was about 3%. What happens to the others who want to do science in academia? I’m sure many would not mind staying postdoctoral scientists forever and pursuing research in support of some other principal investigator (PI, a research group leader), but the system doesn’t cater well for that career track. Up is the only way. If you can’t make it to the group leader level, chances are that sooner or later you’re running out of funding. That’s because at the postgraduate level, especially after the financial crisis, there is a rather limited amount of money in the system that allows employment which resembles a regular job. Ambition, ego or an almost unreasonable love for the subject is the key driver for everyone else. Money is dished out competitively, and of course it’s considered an honour to bring your own salary to work unsocial hours for a rising star or established hot-shot. This sees many PhD-level researchers leave academia sooner or later.

This isn’t necessarily a bad thing. It’s just not what many of them had envisaged when they started their journey in university because they were hoping to do independent research in an academic setting. (...)

It’s a common joke that academics have a problem with time management because of their inability to say no. Everyone higher up the food chain tells young investigators to say no. No to teaching. No to committees. No to administrative duties. “Concentrate on your science, because that’s what you’re going to be assessed on”. At the same time, it’s very clear that if the choice is between two candidates, the better departmental citizen is more likely to be successful. In fact, my good citizenship was explicitly spelled out in my Head of Department’s recommendation letter to the Royal Society, even as he pointed out to me that I might want to consider cutting back on a few activities.

The rules about departmental citizenship are nowhere written down. It’s just what you hear between the lines in comments about the poor performer who failed to submit his part for a communal bid, or the raised eyebrow about some lazy bastard who refused to teach. Unless the system actively discourages anyone with the ambition to secure a permanent post from taking on additional responsibilities, unestablished PIs are going to pour themselves into research, teaching, administration, outreach, you name it — at 110% of what’s healthy.

Add three little kids into that mix, and it may become clear why over time I’ve acquired a collection of meds vast enough to run a burn-out clinic. (...)

Five years into my Fellowship, I felt more and more like a chased rabbit. Work was not about science anymore, work had become that abstract thing you need to do in order to secure a post. Also, with all the activities I agreed to do and to participate in, the time I actually spent doing my own hands-on research had become marginal. While my research group was at its peak and, from the outside, I looked like a very successful scientist, my job and my attitude towards it had completely changed. I began to hate my job.

Running a prolific computational biology research team at the University of Cambridge, I imagined it would be easy to switch into a management role in pharmaceutical R&D. I sent a few applications and had a few telephone conversations, but very soon it emerged that I did not have the relevant qualifications (that is: no business experience) to successfully run a group in industry. My wife explained to me that I had long surpassed the point of no return, because just as you have to earn your stripes in academia to be trusted with directing research, you do have to have industrial project experience and considerable domain knowledge about drug development to be trusted with an R&D team. My most realistic chance would be a more technical role, at least to start with.

Swallowing my pride, I applied for Senior Scientist positions, or, as I thought of it, I applied to become a compute monkey for someone with a lot less academic credibility. However, while next-generation sequencing, gene expression analysis, pathway reconstruction and pipeline development were all happening in my own research group, I was clearly not the one who knew the nitty-gritty of their implementation anymore. The interviews were humiliating. “What’s your favourite Bioconductor package for RNA-seq?” — “Uh, I’d have to ask my PhD student for that.” “How do you force the precise calculation of p-values in kruskal.test?” — “I’d google it!”. Needless to say, I didn’t get a single offer.

by Boris Adryan, Medium | Read more:
Image: uncredited

How Americans Came to Die in the Middle East

The writing of this historical synopsis began yesterday, Memorial Day. It is an attempt by this former artillery officer with a father buried in a veteran’s cemetery to understand why brave Americans were sent to their death in the Middle East and are still dying there.

The hope is that we finally can learn from history and not keep repeating the same mistakes.

It’s important to stick to the facts, since the history of the Middle East already has been grossly distorted by partisan finger-pointing and by denial and cognitive dissonance among the politicians, foreign policy experts (in their own minds), and media blowhards and literati on the left and right, who now claim that they had nothing to do with grievous policy mistakes that they had once endorsed.

The key question, as in all history, is where to begin the history lesson.

We could go all the way back to religious myths, especially the ones about Moses and the Ten Commandments and about Mohammed and his flying horse. Or on a related note, we could go back to the schism that took place between Shia and Sunni Muslims in the seventh century. Such history is relevant, because American soldiers have been foolishly inserted in the middle of the competing myths and irreconcilable schism, but without the inserters acknowledging the religious minefields and steering clear of them.

We also could go back to the First World War and the defeat of the Ottoman Empire, when France and Britain carved up the Middle East into unnatural client states, when Arabs were given false promises of self-determination, when American geologists masqueraded as archeologists as they surreptitiously surveyed for oil, and when the United States joined Saudi Arabia at the hip through the joint oil venture of Aramco.

Another starting point could be 1948, when the United States, under the lead of President Truman, supported the formal establishment of the Jewish State of Israel, thus reversing the longstanding opposition to Zionism by many (most?) American and European Jews and non-Jews. One can endlessly debate the plusses and minuses of our alliance with Israel, as well as the morality of Israel’s violent founding and the violent Palestinian resistance. But it’s undeniable that the alliance has led many Muslims to put a target on Uncle Sam’s back.

Still another starting point could be the 1953 coup d’état against the democratically-elected Iranian President Mohammad Mosaddegh, orchestrated by the CIA in conjunction with the Brits. The coup was triggered when Mosaddegh demanded an auditing of the books of the Anglo-Iranian Oil Company, a British company known today as BP. He threatened nationalization when the British refused to allow the audit. He was replaced by the Shah of Iran, who was seen by many Iranians and Arabs as a puppet of the United States. (Ironically, during the Second World War, Great Britain and the Soviet Union had occupied Iran and deposed an earlier shah.)

It’s considered unpatriotic to ask how my fellow Americans would feel if the tables had been turned and Iranians had deposed an American president and replaced him with their lackey. Therefore, I won’t ask.

It also would be unpatriotic to ask how we’d feel if Iranians had shot down one of our passenger jets, as we had shot down one of theirs in 1988 as it was crossing the Persian Gulf to Dubai from Tehran. Again, I’m not asking.

Anyway, let’s return to the Shah. Starting with President Nixon and continuing with President Carter, the USA sold weapons to the Shah worth billions of dollars. There was even an agreement to sell nuclear reactors to him. Those weapons would later be used by Iran against the U.S. in the Persian Gulf after we had sided with Saddam Hussein in his war against Iran.

At a state dinner in Tehran on December 31, 1977, the Shah toasted President Carter. Carter responded effusively, saying that Iran was “an island of stability in one of the more troubled areas of the world.” He went on to say: “This is a great tribute to you, Your Majesty, and to your leadership and to the respect and the admiration and love which your people give to you.”

Actually, most Iranians hated the Shah. A little over a year later, on January 16, 1979, the unpopular Shah fled into exile after losing control of the country to Shiite cleric Ayatollah Ruhollah Khomeini and his Iranian Revolution.

Then in October of that year, Carter allowed the Shah to come to the USA for medical treatment. Responding with rage, Iranian students stormed the U.S. embassy in Tehran and took embassy personnel hostage, in a hostage drama that would last 444 days, including a failed attempt to rescue the hostages that left American soldiers dead and helicopters burnt in Iran. The drama ended on the day that Carter left office.

But none of the above events is where our history of American lives lost in the Middle East should begin. It should begin in the summer of 1979, with a report written by a low-level Defense Department official by the name of Paul Wolfowitz. His “Limited Contingency Study” assessed the political, geopolitical, sectarian, ethnic, and military situation in the Middle East and recommended a more active American involvement in the region, including possible military intervention to blunt the Soviet Union’s influence, protect our access to oil, and thwart the ambitions of Iraq under its dictator, Saddam Hussein.

Wolfowitz would later become a deputy to Defense Secretary Donald Rumsfeld under the presidency of George W. Bush.

Note that Wolfowitz’s paper was written long before 9/11 and long before the toppling of Saddam Hussein in the Second Gulf War after he was accused of having weapons of mass destruction.

Until the Wolfowitz report, the USA had taken a rather passive and indirect role in the Middle East, placing it secondary to other geopolitical matters and using proxies and intelligence “spooks” to protect its interests in the region. Of course this low-level interference in the affairs of other nations was not seen as low level by the targets of the actions. To use common vernacular, it pissed them off, just as it would have pissed us off if the roles had been reversed. But again, it’s unpatriotic to consider the feelings of others, especially if they are seen as the enemy, or backwards, or religious zealots.

Strategic and tactical thinking began to change with the Wolfowitz paper. Plans started to be developed for military action to replace more benign approaches. Eventually, the plans indeed resulted in military actions, ranging from full-scale war to bombing from the air to drone warfare, in such places as Lebanon, Afghanistan, Iraq, Kuwait, Libya, Syria, Yemen, Pakistan, and Somalia (the locale of “Blackhawk Down”), with side actions outside of the Middle East in Bosnia and Kosovo.

In each case the American military performed admirably and often exceptionally, but less so for Defense Department analysts, for Congress and the White House, for the press on the left and right, or for the public at large—most of whom got caught up in the passions of the moment and didn’t understand the cultures they were dealing with and didn’t think through the unintended consequences of military actions in lands where Western concepts of justice, fairness, equality, tolerance, pluralism, religious freedom, diversity, and multiculturalism were as foreign and out of place as an American tourist wearing flipflops and shorts in a mosque.

by Craig Cantoni, Mish Talk | Read more:
Image: via:

A New Theory Explains How Consciousness Evolved

Ever since Charles Darwin published On the Origin of Species in 1859, evolution has been the grand unifying theory of biology. Yet one of our most important biological traits, consciousness, is rarely studied in the context of evolution. Theories of consciousness come from religion, from philosophy, from cognitive science, but not so much from evolutionary biology. Maybe that’s why so few theories have been able to tackle basic questions such as: What is the adaptive value of consciousness? When did it evolve and what animals have it?

The Attention Schema Theory (AST), developed over the past five years, may be able to answer those questions. The theory suggests that consciousness arises as a solution to one of the most fundamental problems facing any nervous system: Too much information constantly flows in to be fully processed. The brain evolved increasingly sophisticated mechanisms for deeply processing a few select signals at the expense of others, and in the AST, consciousness is the ultimate result of that evolutionary sequence. If the theory is right—and that has yet to be determined—then consciousness evolved gradually over the past half billion years and is present in a range of vertebrate species.

Even before the evolution of a central brain, nervous systems took advantage of a simple computing trick: competition. Neurons act like candidates in an election, each one shouting and trying to suppress its fellows. At any moment only a few neurons win that intense competition, their signals rising up above the noise and impacting the animal’s behavior. This process is called selective signal enhancement, and without it, a nervous system can do almost nothing.
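
[ed. The article stays at the level of metaphor, so here is a toy sketch of what “selective signal enhancement” might look like in code. It is purely illustrative and not taken from the AST literature or Graziano’s work: a handful of competing signals amplify themselves while the pool as a whole suppresses each of them, until nearly all of the activity sits on the strongest one.]

```python
import numpy as np

# Toy illustration of competition among signals (an assumption-laden sketch,
# not a model from the AST article): each signal amplifies itself while the
# pool suppresses everyone, so only the strongest signal survives.
rng = np.random.default_rng(42)
signals = rng.uniform(0.1, 1.0, size=8)   # initial drive of 8 competing "neurons"
print(np.round(signals, 2))

for _ in range(10):
    signals = signals ** 2        # self-amplification: strong signals shout louder
    signals /= signals.sum()      # divisive suppression by the rest of the pool

print(np.round(signals, 2))       # the strongest initial signal now dominates
```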

by Michael Graziano, The Atlantic |  Read more:
Image: Chris Helgren / Reuters

Saturday, June 4, 2016

We Have No Idea What Aging Looks Like

My friend Deborah from college loves to tell this story: One of the first times we hung out, we started talking about her solo travels to Burma and assorted other spots in Southeast Asia. I was 19 years old, and like most 19-year-olds, nearly all my friends were people I met through school in some fashion, meaning that virtually all my friends were people within a two-year age range of myself (four years max, though given the dynamics of high school and even collegiate hierarchies, anything more than two years was a stretch). But as she was regaling me with her thrilling tales, I realized she couldn’t have traveled so extensively if she were my age, and it dawned on me that I was talking to someone Older.

I’d heard you weren’t supposed to ask people how old they were—what if they were Old?!—but I couldn’t help myself. I asked her how old she was, and she told me, and, according to her, I gasped, fluttered my hand to my chest, and said, “But you look so good!”

Deborah was 26.

I turn 40 this week, and this story, which was embarrassing to me the first time she told it—she had the good sense to wait to relay it to me until I was in my 30s and therefore old enough to appreciate it—has now become hilarious. It’s hilarious that I thought 26 was shockingly old, and that I thought 26 would be old enough to show signs of aging in a way that would be detrimental to one’s conventional beauty. (In fact, it seems that would be anything over 31, if we’re going by sheer numbers here—and while I’m tempted to call bullshit on that, given that people may be more satisfied with their looks the older they get, I also know that age 31 was probably when I looked objectively my best.)

We still don’t really know what aging looks like. Certainly younger people don’t, and everyone reading this is younger than someone. I used to be vaguely flattered when younger people would express surprise when I’d mention my age, until I recalled my own response to Deborah’s ancient 26. It wasn’t that I knew what 26 looked like and that she looked younger than that; it was that I had no idea what looking 26 might actually entail, just that it was older than what I’d been led to believe was the height of my own attractiveness, and that therefore the fact that she looked great at 26 meant she was an outlier and therefore warranted a cry of “But you look so good!” When a younger person tells me I “don’t look 40”—or, my favorite, that I’m “well preserved” (!)—I accept it with grace but always wonder if they’ll later recall that moment with their own embarrassment. Because I do look 40, and I’m not particularly “preserved.” They just have no idea what 40 looks like, and it’s not their fault. Until it was within eyeshot, I didn’t know myself.

What we consider older (or younger) is always in relation to ourselves. Older was once my 26-year-old friend; now that my circle of friends has loosened beyond the age constrictions of school and I have friends in their 50s, even people in their 60s don’t seem so old to me. My parents, once hopelessly old to me, I now see as—I can’t say young, but when I wanted to talk about Mad Men with them, my mother said they were saving television for “deep retirement.” Meaning not the retirement they’re in now—my father retired from paid work nearly 10 years ago, and my mother retired from homemaking as well, a feminist arrangement I adore—but a later form of retirement, when they’re too frail to travel extensively as they’re doing now. That is: When they’re Old.

There’s a particular sort of human-interest news piece that takes a person over 70 who is doing something—anything, really—and treats the fact that they are not sewn into a La-Z-Boy as a small miracle. We are supposed to find this inspiring, and I suppose it is. But it is not unique. The fact that younger folk still regard active elderly people as outliers says little about them, and everything about us. We expect old people to curl up and—well, die, I suppose (though our society is still so scared shitless of death that we spend 28 percent of our Medicare dollars in the last six months of life). So when they don’t, we’re surprised, even though we shouldn’t be. There are indeed old people who spend their days mostly watching television and complaining about their aches, but there are young people who do that too. My grandmother, who turns 90 next month, teaches line dancing lessons at her retirement home. I’m proud of her. She is not an outlier.

This idea that old people—whatever each of us considers to be old—are outliers for not fitting into what we expect of them goes double for beauty. That makes a sort of sense, given that the hallmarks of beauty are so closely associated with youth, so when a woman of a certain age still has some of those hallmarks, it is remarkable. Except: It’s not, not really, given that so much of the attention we do give to famous older women has less to do with their beauty and more with their grooming. Take the case of Helen Mirren, whom the media has long crowned as the sexy senior (which started happening 15 years ago, incidentally, back when she was the same age Julia Louis-Dreyfus is now). She’s a lovely woman, and exceptionally accomplished, but the attention paid to her sex appeal after age 50 has largely been about her refusal to style herself in a matronly fashion. (I don’t know enough about celebrity fashion to say for sure, but I’m guessing that she ushered in today’s era, when celebrities over 50 aren’t afraid to show some skin, and look great in it.) When I walk through this city, I see a lot of older women who groom themselves just as beautifully, and I’m not just talking about the Iris Apfels of the world. I’m talking my gym buddy Lynn, whose loose bun and oh-so-slightly-off-the-shoulder tees echo her life as a dancer; I’m talking my neighbor Dorothy, whose loose movie-star curls fall in her face when she talks; I’m talking real women you know, who take care of themselves, and who may or may not have the bone structure of Carmen Dell’Orefice but who look pretty damn good anyway. Part of the joke of Amy Schumer’s sublime “Last Fuckable Day” sketch was the fact that all of the women in it were perfectly good-looking. We know that women don’t shrivel up and die after 50, but we’re still not sure how to truly acknowledge it, so we continue to rely on outdated conversations about aging. I mean, the opening slide of that Amy Schumer sketch is: “Uncensored: Hide Your Mom.”

There’s a paradox built into acknowledging older women’s beauty: By calling attention to both their appearance and their age, we continue to treat older women who continue an otherwise unremarkable level of grooming as exceptions. That’s not to say that we shouldn’t do so; Advanced Style, for example, is near-radical in its presentation of older women, and I’d hate for it to become just…Style. And I absolutely don’t want to say that we should start sweeping older women back under the male gaze; escaping that level of scrutiny is one of the benefits of growing older. I’m also aware of the folly of using the way we talk about celebrities as a stand-in for how we talk about age more generally—the only people whose ages we collectively examine are famous people, whose ages only come up for discussion in regard to looks if we’re all like A) Wow, that person doesn’t look that old (Cicely Tyson, 91), or B) Wow, that person looks way older than that (Ted Cruz, 45). Nobody is like, Wow, Frances McDormand is 58? And she looks it too! Still, celebrities are a useful comparison point for how our notions of age are changing, even if the ways we talk about it aren’t. Anne Bancroft was 36 when she was cast as Mrs. Robinson. A selection of women who are 36 today: Zooey Deschanel, Laura Prepon, Mindy Kaling, Rosamund Pike, Claire Danes. Kim Kardashian turns 36 in October. Can you imagine any of these people being cast as a scandalously older woman today?

by Autumn Whitefield-Madrano, New Inquiry |  Read more:
Image: uncredited

Muhammad Ali (January 1942 - June 2016)


"I've done something new for this fight. I done wrestled with an alligator, I done tussled with a whale; handcuffed lightning, thrown thunder in jail; only last week, I murdered a rock, injured a stone, hospitalized a brick; I'm so mean I make medicine sick." (Muhammad Ali, Rumble in the Jungle).

[ed. The Greatest. See also: The Outsized Life of Muhammad Ali.]

Friday, June 3, 2016

Massive Attack (feat. Hope Sandoval)

Bots are awesome! Humans? Not so much.

[ed. wtf?]

In the past few days my personal resume bot has exchanged over 24,000 messages via Facebook Messenger and SMS. It’s chatted with folks from every industry and has introduced me to people at Facebook, Microsoft, and Google — plus a half dozen small, compelling teams.

What I learned about humans and AI while sifting through those conversations is fascinating and also a little disturbing.

I’ve distilled that data into useful nuggets you should consider before jumping on the bot bandwagon.

The Backstory of #EstherBot


Earlier this week I built and launched EstherBot, a personal resume bot that can tell you about my career, interests, and values. It shot to the #2 spot on Product Hunt and my Medium post about why and how I built it spread like wildfire – racking up over 1k recommends. (Get instructions for building your own free bot here.)

EstherBot speaks to the current zeitgeist. The era of messaging has arrived along with a botpocalypse, but few people have seen examples that go beyond the personal assistant, travel butler, or shopping concierge. To some, those feel like solutions for the 1% rather than the 99%.

EstherBot is relatable and understandable. The idea is simple — the resume hasn’t really changed that much in the digital age. While you’re producing all this information about yourself through the way you use social media, your resume doesn’t actively seek out opportunities that you might be interested in. Your resume doesn’t constantly learn and get better by observing you. Instead, you have to do all this manual work, just like you used to. Why?

There’s a ton of data that could be used to connect you to better opportunities. Data including hobbies, values, location preferences, multimedia samples of your work. On and on. A resume simply can’t hold all of that, but a bot can.
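
[ed. The post links out for build instructions; as a rough illustration only, the core of a resume bot can be sketched as a keyword lookup over structured facts that a paper resume can’t hold. Nothing below is EstherBot’s actual code — the topics, answers, and names are hypothetical placeholders.]

```python
# Minimal sketch of a keyword-driven resume bot (hypothetical placeholders only).
RESUME_TOPICS = {
    "career": "I've spent the last five years in product and growth roles.",
    "interests": "Outside work: trail running, podcasts, and amateur robotics.",
    "values": "I look for teams that value autonomy, candor, and shipping fast.",
    "location": "Based in San Francisco, open to remote-first teams.",
    "work samples": "Portfolio: https://example.com/portfolio",
}

def reply(message: str) -> str:
    """Return the answer for the first topic mentioned in the message."""
    text = message.lower()
    for topic, answer in RESUME_TOPICS.items():
        if topic in text:
            return answer
    # Fall back to a menu so the conversation never dead-ends.
    return ("I can tell you about my career, interests, values, location, "
            "or work samples. What would you like to know?")

if __name__ == "__main__":
    print(reply("Tell me about your values"))
    print(reply("Knock knock"))   # unmatched input falls back to the menu
```

A messaging front end (Facebook Messenger, SMS) would simply pass each incoming message to something like reply() and send back the returned string.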

by Esther Crawford, Chatbots Magazine |  Read more:
Image: uncredited