Saturday, December 22, 2012

What Does a Conductor Do?

I'm standing on a podium, with an enameled wand cocked between my fingers and sweat dampening the small of my back. Ranks of young musicians eye me skeptically. They know I don’t belong here, but they’re waiting for me to pretend I do. I raise my arm in the oppressive silence and let it drop. Miraculously, Mozart’s overture to Don Giovanni explodes in front of me, ragged but recognizable, violently thrilling. This feels like an anxiety dream, but it’s actually an attempt to answer a question that the great conductor Riccardo Muti asked on receiving an award last year: “What is it, really, I do?”

I have been wondering what, exactly, a conductor does since around 1980, when I led a JVC boom box in a phenomenal performance of Beethoven’s Seventh Symphony in my bedroom. I was bewitched by the music—the poignant plod of the second movement, the crazed gallop of the fourth—and fascinated by the sorcery. In college, I took a conducting course, presided over a few performances of my own compositions, and led the pit orchestra for a modern-dance program. Those crumbs of experience left me in awe of the constellation of skills and talents required of a conductor—and also made me somewhat skeptical that waving a stick creates a coherent interpretation.

Ever since big ensembles became the basis of orchestral music, about 200 years ago, doubt has dogged the guy on the podium. Audiences wonder whether he (or, increasingly, she) has any effect; players are sure they could do better; and even conductors occasionally feel superfluous. “I’m in a bastard profession, a dishonest profession,” agonized Dimitri Mitropoulos, who led the New York Philharmonic in the fifties. “The others make all the music, and I get the salary and the credit.” Call it the Maestro Paradox: The person responsible for the totality of sound produces none.

by Justin Davidson, New York Magazine | Read more:
Photo: Gjon Mili, Time Life Pictures/Getty Images

Friday, December 21, 2012

NRA Offensive Exposes Deep U.S. Divisions on Guns

[ed. Yes, let's turn our schools into prisons and have our children cower in fear of being murdered every day. I have a feeling this is the end of the NRA. I find it repugnant that the title of this article even uses the word "offensive", as if this were some kind of game or war maneuver. More like a terrorist group holding our culture hostage.]

Any chance for national unity on U.S. gun violence appeared to wane a week after the Connecticut school massacre, as the powerful NRA gun rights lobby called on Friday for armed guards in every school and gun-control advocates vehemently rejected the proposal.

The solution offered by the National Rifle Association defied a push by President Barack Obama for new gun laws, such as bans on high-capacity magazines and certain semiautomatic rifles.

At a hotel near the White House, NRA Chief Executive Wayne LaPierre said a debate among lawmakers would be long and ineffective, and that school children were better served by immediate action to send officers with firearms into schools.

LaPierre delivered an impassioned defense of the firearms that millions of Americans own, in a rare NRA news briefing after the Newtown, Connecticut, shooting in which a gunman killed his mother, and then 20 children and six adults at an elementary school.

"Why is the idea of a gun good when it's used to protect our president or our country or our police, but bad when it's used to protect our children in their schools?" LaPierre asked in comments twice interrupted by anti-NRA protesters whom guards forced from the room.

Speaking to about 200 reporters and editors but taking no questions, LaPierre dared politicians to oppose armed guards.

"Is the press and political class here in Washington so consumed by fear and hatred of the NRA and America's gun owners," he asked, "that you're willing to accept a world where real resistance to evil monsters is a lone, unarmed school principal?"

by David Ingram and Patricia Zengerle, Yahoo News | Read more:
Photo: REUTERS/Joshua Roberts

His Story: A Writer of Words and Music

When the poet, novelist, piano player and spoken-word recording artist Gil Scott-Heron died unexpectedly last May, at 62, he left behind a prickly and galvanizing body of work. His best songs — “The Revolution Will Not Be Televised,” “Whitey on the Moon,” “We Almost Lost Detroit” — are rarely heard on classic-rock radio; they’re too eccentric and polemical and might kill a workingman’s lunchtime buzz. But they’ll still stop you in your tracks.

Leave it to Scott-Heron to save some of his best for last. This posthumously published memoir, “The Last Holiday,” is an elegiac culmination to his musical and literary career. He’s a real writer, a word man, and his memoir is as wriggling and vital in its way as Bob Dylan’s “Chronicles: Volume One.”

The Dylan comparison is worth picking up on for a moment. The critic Greil Marcus coined the phrase “the old, weird America” to refer to the influences that Mr. Dylan and the Band raked into their music on “The Basement Tapes.” In “The Last Holiday” Scott-Heron taps into the far side of that older and weirder America — that is, the fully African-American side. This memoir reads a bit like Langston Hughes filtered through the scratchy and electrified sensibilities of John Lee Hooker, Dick Gregory and Spike Lee.

For a relatively slim book, this one gets a lot of things said, not just about Scott-Heron’s own life but also about America in the second half of the 20th century. It encompasses Chicago, where he was born in 1949. There are sections in rural Tennessee, where he went to live with his grandmother after his Jamaican father abandoned the family to play professional soccer in Scotland. A few of these Tennessee passages are nearly as lovely as anything in James Agee’s prose poem “Knoxville: Summer of 1915.” Later Scott-Heron’s mother uprooted him to the Bronx.

This book is a warm memorial to the strong women in his life. One was his grandmother, who instilled in him a love of learning. The other was his mother, who came back into Scott-Heron’s life after his grandmother’s death. Both are electric presences in these pages.

In department stores in the 1950s, his grandmother refused to give up her place in line to whites. His mother fought for her son when he got into trouble for playing boogie-woogie music on a school Steinway, and when he was accidentally relegated to vocational classes. Administrators learned to fear and respect her. One said to the author: “Heron, your mother is a very impressive lady.”

Scott-Heron’s account of his school years evokes the entire arc of the African-American educational experience during the past century. He attended segregated schools in Tennessee before, bravely, in 1962, becoming one of the first blacks to desegregate a junior high school. Later, while living in the projects in the Chelsea section of Manhattan, he began attending, in the 10th grade, the prestigious Fieldston School in the Bronx on a full scholarship.

It was not an overwhelmingly positive experience. “I can never accuse the people of Fieldston, neither the students nor the faculty, of being racist,” he writes. “I can accuse the students of knowing each other for years and preferring to hang out with each other instead of some guy who just got there. I can accuse the teachers of having taught my classmates for 10 years and me for 10 minutes.”

This book is finally a testament to his unfettered drive as an artist. He left the historically black Lincoln University in Pennsylvania in 1968, after his freshman year, to write his first novel. That novel, a murder mystery called “The Vulture,” and a book of poems, “Small Talk at 125th and Lenox,” were quickly published. He began to record his songs soon after, and his first album, also titled “Small Talk at 125th and Lenox,” was released in 1970.

In 1971 he drove down to Johns Hopkins University — he describes himself at the time as “about 6-foot-2 plus three inches of Afro” — and talked his way into the creative writing program. He got a master’s degree from Johns Hopkins in 1972 and taught writing until his musical career took off.

by Dwight Garner, NY Times |  Read more:
Photo: Mischa Richter





Only Rock and Roll

The minstrel, in his black-and-white domain, has had a poor time of it lately. The Black and White Minstrel Show, a stalwart of the BBC’s Saturday night schedule for twenty years, is talked of now almost exclusively in terms of moral wrongness, as if white men blacking up were merely an exercise in racial mockery and not the remnant of a cross-cultural theatrical form with intertwining roots in mid-nineteenth-century Southern society. Nevertheless, by the time of its final broadcast in 1978, the Black and White Minstrel Show was ready to fade away, not only on account of what seemed increasingly dubious representations in a country with a growing black population (though few among it came from the United States, where the minstrel show had flourished) but also because its burlesque manner had been superseded by an updated minstrelsy, which is more popular today than ever.

Rock and roll performers, like certain jazz crooners before them, dispensed with the stark device of blackface and depended instead on black voice. The American originator, obviously, was Elvis Presley, whose first single, “That’s All Right”, released in 1954, was a cover version of a song by the black singer Arthur “Big Boy” Crudup (he also recorded Crudup’s “My Baby Left Me” and “So Glad You’re Mine”). In Britain, lagging behind the US here as in most areas of innovation, the vocal twists and shouts associated with blues and gospel, the guitar pulses, the performers’ beseeching gestures, were most successfully absorbed by the young Rolling Stones. They would eventually steer their music into a style that can be called indigenous, with the African American just one among several ancestors; but in common with other groups that formed at about the same time and thrived in the light shed by the Stones’ success – the Yardbirds, the Pretty Things, the Animals, Manfred Mann, John Mayall’s Bluesbreakers, Peter Green’s Fleetwood Mac – the Stones started out trying to sound black, vocally and instrumentally. None of them had visited the US at the time of the group’s first releases, never mind the still-segregated South, and it seems probable they had met few black Americans. Besotted by both the blues and its frisky nephew, rhythm and blues, British groups isolated the music from its cultural and geographical territories, which oddly enough permitted it to prosper, freed from historic guilt and inverted deference. Mick Jagger, a distinctive singer with a number of transgressive extras in his gimmick bag – epicene ugly-beautiful looks for one thing, appealing delinquent attitude for another – became the Al Jolson of the teenybop era.  (...)

The sudden easy availability of long-playing records by the likes of Robert Johnson and Lightnin’ Hopkins opened a door into broader areas of black culture. A visit to the local library yielded a copy of Paul Oliver’s Blues Fell This Morning. From there, it was a short move to James Baldwin, whose crossover work The Fire Next Time was being read throughout the US just as “Johnny B. Goode” was appearing on Ready Steady Go!. The reader of Baldwin was unlikely to remain ignorant of Martin Luther King. It became possible, as it had not been before, for R&B enthusiasts from Dartford to Dundee to understand that when Muddy Waters sighed “Oh yeah-ah-ah-ah . . . . Ever’thing’s gonna be alright this morning”, he was not putting on the agony but making a statement about his life. The words of the Stones’ first No 1 hit, “It’s All Over Now” (1964), a country-and-western song by the black singer Bobby Womack, are bluntly literal and not susceptible to double meaning; but it was indeed all over for a certain way of attending to pop music, and, with that, of seeing the world. And the first pressing of the first Rolling Stones 45 rpm disc had much to do with it.

With Brian Jones in control of the balance, the early Stones were a cohesive R&B band with forward thrust and an attractive all-over rudeness in delivery (mistaken, at the time, for being “anti-establishment”). Keith Richards’s attack on “Carol” isn’t Chuck Berry’s, but it’s not bad for someone still in his teens. No matter how good they would become, though, they could never be the real thing. And no matter who they first thought they were singing to, they found their act overtaken by listeners for whom looks and image were as important as rhythm. Whereas the old-timers were on first-name terms with the blues (“Come on in now, Muddy, my husband’s just now left”; “Hey, Bo Diddley . . .” etc), the Stones – Jagger in particular – reserved their possessive feelings for the camera.

Both the Beatles and the Stones had a behind-the-scenes member in the form of the manager: Brian Epstein coddled the Beatles, while Andrew Loog Oldham shaped the anti-Beatles: scruffy cockneys (or mockneys, as Philip Norman says), where the Merseysiders were spruce; scowling, where they smiled; uncouth, ununiformed, apparently untameable. One group played before the Queen; the other appeared in front of the judges (Mick, Keith and Brian were all busted for drugs in 1967, and Brian again a year later; he was fired from the Stones some months before his death in July 1969). In The John Lennon Letters, Hunter Davies writes that John was forbidden to disclose in correspondence with fans that he was married to Cynthia Powell. The Stones, meanwhile, provided the rib for a gorgeous new being to mate with their own unearthly selves: the rock chick – Marianne Faithfull, Anita Pallenberg, Marsha Hunt. While Paul McCartney yearned for trouble-free yesterdays, Mick drawled suggestively through “Brown Sugar”, a song supposedly about his love for Hunt, though it could as easily have come from his liking for Blind Boy Fuller: “I got me a brownskin woman; / Tell me, Momma, where you get your sugar from” (“My Brownskin Sugar Plum”, 1935).

by James Campbell, TLS |  Read more:

“Brightening with each swipe of a workman’s cloth, stained glass in the Christian Science Mapparium in Boston, Massachusetts, shows political boundaries and coastlines charted after millennia of mapmaking.”

From “Revolution in Mapping,” February 1998, National Geographic magazine

New Facebook Policy: Dec. 20, 2012

[ed. So, if I understand this right, the filters FB uses to prevent spam and unwanted messages will work only if the sender doesn't bribe FB with a payment?]

If you see a message from someone you don't want to hear from in your Inbox, you can always select “Move to Other” or “Report Spam” from the Actions menu. You can also block people that you don’t want to hear from on Facebook.

Inbox delivery test

Facebook Messages is designed to get the most relevant messages into your Inbox and put less relevant messages into your Other folder. We rely on signals about the message to achieve this goal.

Some of these signals are social – we use social signals such as friend connections to determine whether a message is likely to be one you want to see in your Inbox.

Some of these signals are algorithmic – we use algorithms to identify spam and use broader signals from the social graph, such as friend of friend connections or people you may know, to help determine relevance.

Today we’re starting a small experiment to test the usefulness of economic signals to determine relevance. This test will give a small number of people the option to pay to have a message routed to the Inbox rather than the Other folder of a recipient that they are not connected with.

Several commentators and researchers have noted that imposing a financial cost on the sender may be the most effective way to discourage unwanted messages and facilitate delivery of messages that are relevant and useful.

This test is designed to address situations where neither social nor algorithmic signals are sufficient. For example, if you want to send a message to someone you heard speak at an event but are not friends with, or if you want to message someone about a job opportunity, you can use this feature to reach their Inbox. For the receiver, this test allows them to hear from people who have an important message to send them.

This message routing feature is only for personal messages between individuals in the U.S. In this test, the number of messages a person can have routed from their Other folder to their Inbox will be limited to a maximum of one per week.
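[ed. A minimal sketch, in Python, of how the routing described above might work. Facebook has not published its actual algorithm, so the function names, signal checks, and weekly-cap bookkeeping here are hypothetical; the sketch only illustrates the three kinds of signals (social, algorithmic, economic) and the one-message-per-week cap. Whether payment would override the spam classifier isn't stated; the sketch assumes it would not.]

from collections import defaultdict

# Hypothetical illustration only; none of these names come from Facebook.
WEEKLY_PAID_CAP = 1  # the test caps paid routing at one message per recipient per week
paid_deliveries_this_week = defaultdict(int)  # recipient -> paid messages already routed

def route_message(sender, recipient, message, are_connected, looks_like_spam, sender_paid):
    """Decide whether a message lands in the Inbox or the Other folder.

    are_connected(sender, recipient) -- social signal (e.g. friend or friend-of-friend)
    looks_like_spam(message)         -- algorithmic signal from a spam classifier
    sender_paid                      -- economic signal from the paid-routing test
    """
    # Social signal: messages from people you are connected to go to the Inbox.
    if are_connected(sender, recipient):
        return "inbox"
    # Algorithmic signal: messages flagged as spam go to the Other folder.
    if looks_like_spam(message):
        return "other"
    # Economic signal: an unconnected sender who paid can reach the Inbox,
    # but only up to the weekly cap for that recipient.
    if sender_paid and paid_deliveries_this_week[recipient] < WEEKLY_PAID_CAP:
        paid_deliveries_this_week[recipient] += 1
        return "inbox"
    # Default for unconnected, unpaid senders: the Other folder.
    return "other"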

We’ll continue to iterate and evolve Facebook Messages over the coming months.

via Facebook |  Read more:

A Social Offender for Our Times

"What are you working on?" is academe's standard conversation starter, and for the past five years Geoffrey Nunberg has had a nonstandard response: "a book on assholes."

"You get giggles," says the linguist at the University of California at Berkeley, "or you get, 'You must have a lot of time on your hands'—the idea being that a word that vulgar and simple can't possibly be worth writing about." Scholars have tended to regard the book as a jeu d'esprit, not a serious undertaking. Their reaction intrigued Nunberg: "When people say a word is beneath consideration, it's a sign that there's a lot going on."

Aaron James can relate. He is a professor of philosophy at the University of California at Irvine and the author of Assholes: A Theory (Doubleday), which was published in late October, a few months after Nunberg's book, Ascent of the A-Word: Assholism, the First Sixty Years (PublicAffairs). James took up the project with some trepidation. "I felt a real sense of risk about writing something that might not appeal to my intellectual friends." (...)

"A Cornell man, a Deke, a perfect asshole." Thus did Norman Mailer introduce Lieutenant Dove, literature's first asshole, in the 1948 novel The Naked and the Dead. From the start, Nunberg writes, the word carried overtones of class. "Asshole" launched its "attack from the ground level, in the name of ordinary Joes, people whose moral authority derives not from their rank or breeding but their authenticity, which is exactly the thing that the asshole lacks."

So what is an asshole, exactly? How is he (and assholes are almost always men) distinct from other types of social malefactors? Are assholes born that way, or is their boorishness culturally conditioned? What explains the spike in the asshole population?

James was at the beach when he began mulling those questions. "I was watching one of the usual miscreants surf by on a wave and thought, Gosh, he's an asshole." Not an intellectual breakthrough, he concedes, but his reaction had what he calls "cognitive content." In other words, his statement was more than a mere expression of feeling. He started sketching a theory of assholes, refining his thinking at the Center for Advanced Study in the Behavioral Sciences at Stanford, where he spent a year as a fellow in 2009.

He consulted Rousseau (who, James notes, was something of an asshole himself on account of his shabby parenting skills), Hobbes (especially his views on the "Foole" who breaks the social contract), Kant (his notion of self-conceit in particular), and more-recent scholarship on psychopaths. He spoke with psychologists, lawyers, and anthropologists, all of whom suggested asshole reading lists. "There are a lot of similar characters studied in other disciplines, like the free rider or the amoralist or the cheater," James says, calling his time at Stanford an "interdisciplinary education in asshole theory."

James argues for a three-part definition of assholes that boils down to this: Assholes act out of a deep-rooted sense of entitlement, a habitual and persistent belief that they deserve special treatment. (Nunberg points out that use of the phrase "sense of entitlement" tracks the spread of "asshole"—both have spiked since the 1970s.) How to distinguish an asshole from a scumbag, a jerk, a prick, or a schmuck? Assholes are systematic. We all do assholeish things, but only an asshole feels fully justified in always acting like an asshole. As James puts it, "If one is special on one's birthday, the asshole's birthday comes every day."

by Evan R. Goldstein, Chronicle of Higher Education | Read more:
Photo: Istock

Intelligence Without Design

Most people have a hard time believing that existence appeared out of nowhere. So they turn to available worldviews that incorporate purpose and meaning, which are generally religious. This has resulted in a deep split between a strict materialist worldview and other worldviews that see some intelligence within existence. They justify their respective positions by reducing the issues to a polarity between religious spirituality and secular materialism. This conflict cuts to the foundation of how and why existence exists. At one extreme are “arguments from design”; at the other are materialist, evolutionary perspectives. (In the seeming middle are those who say God designed evolution, which is really just a more sophisticated argument from design.)

The vital questions are “purpose versus purposelessness” or “meaning versus meaninglessness.” Purposelessness is abhorrent to intelligent design believers; purpose is anathema to most of traditional science. The materialist scientists have no truck with what they consider the anthropomorphic indulgences that the “designers” exhibit. They focus on the absurdity of a transcendent God—an easy target. Intelligent “intelligent designers” rail against science’s narrow vision, its refusal to give any real explanation for the extraordinary confluence of statistically improbable events and finely tuned, coordinated configurations of exacting precision down to mathematically unimaginably small sub-atomic levels that allow this universe to be at all.

Design advocates point out that science can only prove that design does not fit a very narrow scientific paradigm. Materialist scientists in turn say that without science as a baseline for truth and objectivity, any flight of fancy is possible. Obviously these two positions operate out of separate worldviews with different conceptions of “proof.” Scientific worldviews only address what is falsifiable and provable by the scientific method. But the scientific assumption that no intelligence is involved in the construction of the universe is likewise not provable by science.

Our approach to evolution cannot ultimately be proven in the ordinary sense of proof. This is because “proof” is always embedded in a worldview, and thus is always circular. So we speak to reasonableness and “most-likelihood.” After all, these are all one has to go on when the moorings of science prove insufficient to deal with life’s important questions and issues.

How this topic is viewed has vital consequences. Let’s first agree that evolution of some sort operates within the play of existence. The view we are putting forth features an intelligence without a designer or a specific design. We find that this perspective gives a better explanation of the evolutionary process, including where humanity finds itself at this historic and dangerous evolutionary moment. It is also a source of hope, offering a realistic possibility that we are facing an inevitable evolutionary challenge that can be met.

Does greater complexity in evolution entail improvement? Of course, the easy way out is to refuse to link evolutionary change with improvement, claiming that values are simply manufactured by human needs—evolution’s fancy way of giving humans a survival edge because it allows us the illusions of purpose and meaning that give impetus to social change when necessary.

Scientific reductionists believe that all explanations could be reduced to the laws of physics—if we knew enough. Both scientific reductionists and emergentists argue that because on a purely physical level the cosmos constructs increasing complexities that both replicate and evolve, life and then consciousness would necessarily likewise appear, given a proper environment to do so. In other words, because of the way evolution works, statistically speaking, given the right combination of chemicals in the right environment, life is bound to appear. And then, because consciousness has some significant evolutionary survival advantages, it, too, would come forth at some point. Seeing that life and consciousness have evolved, this argument in retrospect seeks an essentially mechanical explanation free from any hint of intention or purpose—because where would the purpose come from? Purpose would introduce a mystery that science assumes is unwarranted.

However, science typically does not inquire into where this vector toward complexity comes from and why different qualities emerge out of more complex configurations. Instead of addressing why a particular arrangement of chemicals in a particular “soup” brings forth life, science just observes and states that it does. The same is true for consciousness: it does seem to emerge from life at some point, although just why and even where it emerges are murky questions. Are amoebas conscious, or plants, or ants, or snakes, or apes? And then there is the self-reflecting consciousness of humans, which seems paired with an evolutionarily new linguistic ability. Did this, too, necessarily emerge simply because language gives social animals enormous facilities in cooperation, which in turn enables humans to out-compete other species?

The question of the emergence of something seemingly new out of something old lies at the heart of whether something besides purposeless and totally indifferent mechanisms is going on within the makeup of existence. In other words, is some kind of intelligence embedded in the very structure of existence itself that moves within the vectors of evolution to construct complexities, and even more, to bring about emergent qualities—including life and consciousness?

by Diana Alstad and Joel Kramer, Guernica |  Read more:
Photo: Hubble Space Telescope courtesy of NASA, ESA, STScI, J. Hester and P. Scowen (Arizona State University)

Rider on the Storm

In the summer of 1959, a pair of F-8 Crusader combat jets were on a routine flight to Beaufort, North Carolina with no particular designs on making history. The late afternoon sunlight glinted from the silver and orange fuselages as the US Marine Corps pilots flew high above the Carolina coast at near the speed of sound. The lead jet was piloted by 39-year-old Lt Col William Rankin, a veteran of both World War 2 and the Korean War. In another Crusader followed his wingman, Lt Herbert Nolan. The pilots were cruising at 47,000 feet to stay above a large, surly-looking column of cumulonimbus cloud which was amassing about a half mile below them, threatening to moisten the officers upon their arrival at the air field.

Mere minutes before they were scheduled to begin their descent towards Beaufort, William Rankin heard a decreasingly reassuring series of grinding sounds coming from his aircraft's engine. The airframe shuddered, and most of the indicator needles on his array of cockpit instruments flopped into their fluorescent orange “something is horribly wrong” regions. The engine had stopped cold. As the unpowered aircraft dipped earthward, Lt Col Rankin switched on his Crusader's emergency generator to electrify his radio. “Power failure,” Rankin transmitted matter-of-factly to Nolan. “May have to eject.”

Unable to restart his engine, and struggling to keep his craft from entering a near-supersonic nose dive, Rankin grasped the two emergency eject handles. He was mindful of his extreme altitude, and of the serious discomfort that would accompany the sudden decompression of an ejection; but although he lacked a pressure suit, he knew that his oxygen mask should keep him breathing in the rarefied atmosphere nine miles up. He was also wary of the ominous gray soup of a storm that lurked below; but having previously experienced a bail out amidst enemy fire in Korea, a bit of inclement weather didn't seem all that off-putting. At approximately 6:00 pm, Lt Col Rankin concluded that his aircraft was unrecoverable and pulled hard on his eject handles. An explosive charge propelled him from the cockpit into the atmosphere with sufficient force to rip his left glove from his hand, scattering his canopy, pilot seat, and other plane-related debris into the sky. Bill Rankin had spent a fair amount of time skydiving in his career—both premeditated and otherwise—but this particular dive would be unlike any that he or any living person had experienced before.  (...)

After falling for a mere 10 seconds, Bill Rankin penetrated the top of the anvil-shaped storm. The dense gray cloud smothered out the summer sun, and the temperature dropped rapidly. In less than a minute the extreme cold and wind began to afflict Rankin's extremities with frostbite, particularly his gloveless left hand. The wind was a cacophony inside his flight helmet. Freezing, injured, and unable to see more than a few feet in the murky cloud, the Lieutenant Colonel mustered all of his will to keep his hand far from the rip cord.

After falling through damp darkness for an interminable time, Rankin began to grow concerned that the automatic switch on his parachute had malfunctioned. He felt certain that he had been descending for several minutes, though he was aware that one's sense of time is a fickle thing under such distracting circumstances. He fingered the rip cord anxiously, wondering whether to give it a yank. He'd lost all feeling in his left hand, and his other limbs weren't faring much better. It was then that he felt a sharp and familiar upward tug on his harness--his parachute had deployed. It was too dark to see the chute's canopy above him, but he tugged on the risers and concluded that it had indeed inflated properly. This was a welcome reprieve from the wet-and-windy free-fall.

Unfortunately for the impaired pilot, he was nowhere near the 10,000 foot altitude he expected. Strong updrafts in the cell had decreased his terminal velocity substantially, and the volatile storm had triggered his barometric parachute switch prematurely. Bill Rankin was still far from the earth, and he was now dangling helplessly in the belly of an oblivious monstrosity.

“I'd see lightning,” Rankin would later muse, “Boy, do I remember that lightning. I never exactly heard the thunder; I felt it.” Amidst the electrical spectacle, the storm's capricious winds pressed Rankin downward until he encountered the powerful updrafts—the same updrafts that keep hailstones aloft as they accumulate ice--which dragged him and his chute thousands of feet back up into the storm. This dangerous effect is familiar to paragliding enthusiasts, who unaffectionately refer to it as cloud suck. At the apex Rankin caught up with his parachute, causing it to drape over him like a wet blanket and stir worries that he would become entangled with it and drop from the sky at a truly terminal velocity. Again he fell, and again the updrafts yanked him skyward in the darkness. He lost count of how many times this up-and-down cycle repeated. “At one point I got seasick and heaved,” he once retold.

by Alan Bellows, Damn Interesting |  Read more:
Image: Damn Interesting
h/t 3 Quarks Daily

Thursday, December 20, 2012


Guillermo Carrion, 2012. NYC, Oil and spray on canvas
via:

How College Bowls Got Over-Commercialized


[ed. ...not to mention the stadiums these games might be played in.]

In David Foster Wallace's futuristic 1996 novel Infinite Jest, years are no longer referred to by numbers: You just have to remember that Year of the Perdue Wonderchicken comes after the Year of the Trial-Size Dove Bar. This is only a little weirder and a little less funny than what has actually happened to college football. The Peach Bowl is now the Chick-fil-A Bowl, the Citrus Bowl is the Capital One Bowl, and the Motor City Bowl is the Little Caesars Pizza Bowl. Other teams will be competing at the Outback Bowl, the Insight Bowl, the MAACO Bowl Las Vegas, the Meineke Car Care Bowl, the Kraft Fight Hunger Bowl, the Military Bowl, and the Beef 'O' Brady's Bowl. You know sponsorships are a little too easy to come by when a chain restaurant known for its chili is willing to put its name that close to the word "bowl."

Unlike other major sports governing bodies, college football's—the National Collegiate Athletic Assn.—doesn't run its own postseason. Instead, it allows private companies to start their own bowl games, invite teams to play, and then—if they choose—bring on weird sponsors. It's such a free-market system that Dan Wetzel, co-author of the new book Death to the BCS—meaning the Bowl Championship Series, the organization that runs the bowl system—says, "I could start a bowl."

The current bowl system is unpopular with just about everybody. Only 26 percent of fans like it, according to a Quinnipiac University poll. Barack Obama and John McCain campaigned against it, and it pulls in a lot less money than a March Madness-type playoff system would. "It's the only business that outsources its most profitable product," says Wetzel. "Other than to see their own team, nobody says, 'I have to go to the Humanitarian Bowl!' " While no other business would spend decades investing in a program only to hand it over to a third party—in Boise, no less—dozens do every year despite a murky payoff. Says Wetzel: "There's nothing like holding up a trophy from the galleryfurniture.com Bowl."

With 70 teams playing in bowl games this year, lots of stadiums will be pretty empty. While bowls can profit from requiring schools to buy blocks of full-price tickets to sell to fans, few institutions can unload their bounty—particularly when they're playing in a bowl in Idaho. Even at the non-ridiculous 2009 FedEx (FDX) Orange Bowl (now the Discovery Orange Bowl), Virginia Tech sold only 20 percent of the 17,500 tickets it bought for $120 apiece. It lost $1.77 million.

Still, virtually all colleges play along, often so they can tell recruits and donors they went to a bowl game—even the Beef 'O' Brady's Bowl. And many coaches and athletic directors, who run the bowl system, get bonuses for getting their teams into a bowl—even the Beef 'O' Brady's Bowl. "These bowls are a scam," says Brian Frederick, executive director of the Sports Fans Coalition, a lobbyist group. "They make money by selling names to sponsors. That's why you get these awful names. The uDrove Humanitarian Bowl? What the hell is that?" It's a bowl game in Idaho.

by Bloomberg Businessweek |  Read more:
Photograph: Chris Keane/Reuters via:

Five Jobs in Reading


The game of Monopoly has four railroads: B&O, Short Line, Reading, and Pennsylvania. As a child I assumed that this meant Monopoly took place in my home town of Reading, Pennsylvania. It doesn’t, of course. It takes place in Atlantic City, never mind that the B&O didn’t send trains to Atlantic City, and that the Short Line isn’t real.

But the Philadelphia and Reading Railroad was very real. It was at one time the biggest corporation in the world, owning not only the major mid-Atlantic coal shipping railways but the ports that sent it overseas, and eventually the coal mines themselves. It was so big, in fact, that its monopoly on anthracite mining and distribution forced the Supreme Court to break it up in the 1870s, back when this was the sort of thing the Supreme Court did. The Reading also had a modest passenger line; at one time a traveler could ride the rails between Harrisburg, Shamokin, Jersey City, and Philadelphia. But when the coal industry dried up, so did the Railroad. Highways were built, American car culture exploded, and in 1976 the Reading ceased to exist. CONRAIL now runs much of what remains, although there is no passenger train and significantly less coal and iron to ship. Of course the tracks persist, crisscrossing the city as they make their way up and down the east coast, reminding commuters of what an alternative might have looked like.

Reading is not a particularly large city. Its population, which hovers around 80,000, is spread out over approximately ten square miles that stretch from the Schuylkill River uphill to Mt. Penn. It is difficult to describe the particularity of Reading’s urban depression—even more so to account for it. It was a wealthy iron and garment manufacturing town in the 1800s, continued its economic growth into the 1930s, then entered into a slow, painful process of impoverishment beginning in the 1940s. This has since included the loss of the railroad, of heavy manufacturing, and of middle-class white people. But Reading’s decline was not uniform. It also had a number of “rebirths” that began as promises of renewed glory and ended in the realities of recalcitrant poverty and crime. Reading was once, for example, “famous for its outlet shopping,” a phrase I heard frequently from mothers of friends in other towns. Indeed, Reading built a fairly large industry around closeout, mis-sized, as-is “designer brand” apparel and home goods. Buses would bring in shoppers from all over the east coast as restaurants, hair and nail salons, and coffee shops sprouted to feed and pamper them. But now there are outlet malls all over the country. Designer brands have even created outlet labels so that retail customers can shop without fear of accidentally purchasing the same shirt as some deal-hunting cheapskate, and outlet shoppers can still savor the belief that some idiot paid twice as much for their socks, even though no idiot ever has. The outlets are still in Reading, but they don’t host the same crowded, frantic weekend shopping orgies I remember from my youth. Why travel to a city that is consistently ranked among the top twenty-five most dangerous places in America when you can buy your discount jeans at a clean stucco strip mall in New Jersey? They probably even have a Chipotle.

But the outlet collapse pales in comparison to Reading’s more recent and familiar recession story. In 2007, Reading was hoping to capitalize on its proximity to New York and Philadelphia, as well as on miles of underdeveloped riverfront property. The city even began making it onto lists of “up and coming” places for housing speculators. The local professional class, safely tucked away in suburban neighborhoods, started to dream of urban boutiques and crime-free, tree-lined strolls along the river. Then in 2008 the bubble burst, plans were abandoned, and, by the 2010 census, Reading was declared the poorest city in America. The wealthy consumer class lost a pipe dream, and many of Reading’s working class and poor lost everything.

by Chris Reitz, N+1 |  Read more:
Image source: uncredited

The Ecstasy of Influence

All mankind is of one author, and is one volume; when one man dies, one chapter is not torn out of the book, but translated into a better language; and every chapter must be so translated. . . .—John Donne

Consider this tale: a cultivated man of middle age looks back on the story of an amour fou, one beginning when, traveling abroad, he takes a room as a lodger. The moment he sees the daughter of the house, he is lost. She is a preteen, whose charms instantly enslave him. Heedless of her age, he becomes intimate with her. In the end she dies, and the narrator — marked by her forever — remains alone. The name of the girl supplies the title of the story: Lolita.

The author of the story I’ve described, Heinz von Lichberg, published his tale of Lolita in 1916, forty years before Vladimir Nabokov’s novel. Lichberg later became a prominent journalist in the Nazi era, and his youthful works faded from view. Did Nabokov, who remained in Berlin until 1937, adopt Lichberg’s tale consciously? Or did the earlier tale exist for Nabokov as a hidden, unacknowledged memory? The history of literature is not without examples of this phenomenon, called cryptomnesia. Another hypothesis is that Nabokov, knowing Lichberg’s tale perfectly well, had set himself to that art of quotation that Thomas Mann, himself a master of it, called “higher cribbing.” Literature has always been a crucible in which familiar themes are continually recast. Little of what we admire in Nabokov’s Lolita is to be found in its predecessor; the former is in no way deducible from the latter. Still: did Nabokov consciously borrow and quote?

“When you live outside the law, you have to eliminate dishonesty.” The line comes from Don Siegel’s 1958 film noir, The Lineup, written by Stirling Silliphant. The film still haunts revival houses, likely thanks to Eli Wallach’s blazing portrayal of a sociopathic hit man and to Siegel’s long, sturdy auteurist career. Yet what were those words worth — to Siegel, or Silliphant, or their audience — in 1958? And again: what was the line worth when Bob Dylan heard it (presumably in some Greenwich Village repertory cinema), cleaned it up a little, and inserted it into “Absolutely Sweet Marie”? What are they worth now, to the culture at large?

Appropriation has always played a key role in Dylan’s music. The songwriter has grabbed not only from a panoply of vintage Hollywood films but from Shakespeare and F. Scott Fitzgerald and Junichi Saga’s Confessions of a Yakuza. He also nabbed the title of Eric Lott’s study of minstrelsy for his 2001 album Love and Theft. One imagines Dylan liked the general resonance of the title, in which emotional misdemeanors stalk the sweetness of love, as they do so often in Dylan’s songs. Lott’s title is, of course, itself a riff on Leslie Fiedler’s Love and Death in the American Novel, which famously identifies the literary motif of the interdependence of a white man and a dark man, like Huck and Jim or Ishmael and Queequeg — a series of nested references to Dylan’s own appropriating, minstrel-boy self. Dylan’s art offers a paradox: while it famously urges us not to look back, it also encodes a knowledge of past sources that might otherwise have little home in contemporary culture, like the Civil War poetry of the Confederate bard Henry Timrod, resuscitated in lyrics on Dylan’s newest record, Modern Times. Dylan’s originality and his appropriations are as one. (...)

In 1941, on his front porch, Muddy Waters recorded a song for the folklorist Alan Lomax. After singing the song, which he told Lomax was entitled “Country Blues,” Waters described how he came to write it. “I made it on about the eighth of October ’38,” Waters said. “I was fixin’ a puncture on a car. I had been mistreated by a girl. I just felt blue, and the song fell into my mind and it come to me just like that and I started singing.” Then Lomax, who knew of the Robert Johnson recording called “Walkin’ Blues,” asked Waters if there were any other songs that used the same tune. “There’s been some blues played like that,” Waters replied. “This song comes from the cotton field and a boy once put a record out — Robert Johnson. He put it out as named ‘Walkin’ Blues.’ I heard the tune before I heard it on the record. I learned it from Son House.” In nearly one breath, Waters offers five accounts: his own active authorship: he “made it” on a specific date. Then the “passive” explanation: “it come to me just like that.” After Lomax raises the question of influence, Waters, without shame, misgivings, or trepidation, says that he heard a version by Johnson, but that his mentor, Son House, taught it to him. In the middle of that complex genealogy, Waters declares that “this song comes from the cotton field.” (...)

I was born in 1964; I grew up watching Captain Kangaroo, moon landings, zillions of TV ads, the Banana Splits, M*A*S*H, and The Mary Tyler Moore Show. I was born with words in my mouth — “Band-Aid,” “Q-tip,” “Xerox” — object-names as fixed and eternal in my logosphere as “taxicab” and “toothbrush.” The world is a home littered with pop-culture products and their emblems. I also came of age swamped by parodies that stood for originals yet mysterious to me — I knew Monkees before Beatles, Belmondo before Bogart, and “remember” the movie Summer of ’42 from a Mad magazine satire, though I’ve still never seen the film itself. I’m not alone in having been born backward into an incoherent realm of texts, products, and images, the commercial and cultural environment with which we’ve both supplemented and blotted out our natural world. I can no more claim it as “mine” than the sidewalks and forests of the world, yet I do dwell in it, and for me to stand a chance as either artist or citizen, I’d probably better be permitted to name it.

Consider Walker Percy’s The Moviegoer:
Other people, so I have read, treasure memorable moments in their lives: the time one climbed the Parthenon at sunrise, the summer night one met a lonely girl in Central Park and achieved with her a sweet and natural relationship, as they say in books. I too once met a girl in Central Park, but it is not much to remember. What I remember is the time John Wayne killed three men with a carbine as he was falling to the dusty street in Stagecoach, and the time the kitten found Orson Welles in the doorway in The Third Man.

Today, when we can eat Tex-Mex with chopsticks while listening to reggae and watching a YouTube rebroadcast of the Berlin Wall’s fall — i.e., when damn near everything presents itself as familiar — it’s not a surprise that some of today’s most ambitious art is going about trying to make the familiar strange. In so doing, in reimagining what human life might truly be like over there across the chasms of illusion, mediation, demographics, marketing, imago, and appearance, artists are paradoxically trying to restore what’s taken for “real” to three whole dimensions, to reconstruct a univocally round world out of disparate streams of flat sights.

by Jonathan Lethem, Harpers (2007) | Read more:
Image: Wikipedia

When You Swallow A Grenade


It is hard to imagine a time when a scratch could so easily lead to death. Albert Alexander died precisely at the dawn of the Antibiotic Era. Shortly after failing to save Alexander’s life, Florey harvested more penicillin and gave it to another patient at the hospital, a 15-year-old boy who had developed an infection during surgery. They cured him in a few days. Within three years of Alexander’s death, Pfizer was manufacturing penicillin on an industrial scale, packing 7500-gallon tanks with mold, fed on corn steep liquor. In that same year, Selman Waksman, a Rutgers microbiologist, and his colleagues discovered antibiotics made by soil bacteria, such as streptomycin and neomycin.

What made antibiotics so wildly successful was the way they attacked bacteria while sparing us. Penicillin, for example, stops many types of bacteria from building their cell walls. Our own cells are built in a fundamentally different way, and so the drug has no effect. While antibiotics can discriminate between us and them, however, they can’t discriminate between them and them–between the bacteria that are making us sick and the ones we carry when we’re healthy. When we take a pill of vancomycin, it’s like swallowing a grenade. It may kill our enemy, but it kills a lot of bystanders, too.  (...)

Each of us is home to several thousand species. (I’m only talking about bacteria, by the way–viruses, fungi, and protozoans stack an even higher level of diversity on top of the bacterial biodiversity.) My own belly button, I’ve been reliably informed, contains at least 53 species. Many of the species I harbor are different than the ones you harbor. But if you look at the kinds of genes carried by those species, our microbiomes look very similar. That’s partly because surviving on a human body requires certain skills, so any species that is going to last long in your lungs, say, will need many of the same genes.

But the similarity speaks to something else. The microbiome keeps us healthy. It breaks down some of our food into digestible molecules, it detoxifies poisons, it serves as a shield on our skin and internal linings to keep out pathogens, and it nurtures our immune systems, instructing them in the proper balance between vigilance and tolerance. It’s a dependence we’ve been evolving for 700 million years, ever since our early animal ancestors evolved bodies that bacteria could colonize. (Even jellyfish and sponges have microbiomes.) If you think of the human genome as all the genes it takes to run a human body, the 20,000 protein-coding genes found in our own DNA are not enough. We are a superorganism that deploys as many as 20 million genes.

It’s not easy to track what happens to this complex organ of ours when we take antibiotics. Monitoring the microbiome of a single person demands a lot of medical, microbiological, and genomic expertise. And it’s hard to generalize, since each case has its own quirks. What happens to the microbiome depends on the particular kind of bacteria infecting people, the kind of antibiotics people take, the state of their microbiome beforehand, their own health, and even their own genes (well, the human genes, at least). And then there’s the question of how long these effects last. If there’s a change to the microbiome for a few weeks, does that change vanish within a few months? Or are there effects that only emerge years later?

Scientists are only now beginning to get answers to those questions. In a paper just published online in the journal Gut, Andres Moya of the University of Valencia and his colleagues took an unprecedented look at a microbiome weathering a storm of antibiotics. The microbiome belonged to a 68-year-old man who had developed an infection in his pacemaker. A two-week course of antibiotics cleared it up nicely. Over the course of his treatment, Moya and his colleagues collected stool samples from the man every few days, and then six weeks afterwards. They identified the species in the stool, as well as the genes that the bacteria switched on and off.

What’s most striking about Moya’s study is how the entire microbiome responded to the antibiotics as if it was under a biochemical mortar attack. The bacteria started producing defenses to keep the deadly molecules from getting inside them. To get rid of the drugs that did get inside them, they produced pumps to blast them back out. Meanwhile, the entire microbiome powered down its metabolism. This is probably a good strategy for enduring antibiotics, which typically attack the molecules that bacteria use to grow. As the bacteria shut down, they had a direct effect on their host: they stopped making vitamins and carrying out other metabolic tasks.

by Carl Zimmer, National Geographic |  Read more:
Photo: Health Research Board on Flickr via Creative Commons