Sunday, March 15, 2015
A Kingdom in Splinters
What language did Adam and Eve speak in the Garden of Eden? Today the question might seem not only quaint, but daft. Thus, the philologist Andreas Kempe could speculate, in his “Die Sprache des Paradises” (“The Language of Paradise”) of 1688, that in the Garden God spoke Swedish to Adam and Adam replied in Danish while the serpent—wouldn’t you know it?—seduced Eve in French. Others suggested Flemish, Old Norse, Tuscan dialect, and, of course, Hebrew. But as James Turner makes clear in his magisterial and witty history, which ranges from the ludicrous to the sublime, philologists regarded the question not just as one addressing the origins of language, but rather as seeking out the origins of what makes us human; it was a question at once urgent and essential. After all, animals do express themselves: they chitter and squeak, they bay and roar and whinny. But none of them, so far as we know, wields grammar and syntax; none of them is capable of articulate and reasoned discourse. We have long prided ourselves, perhaps excessively, on this distinction. But on the evidence Turner so amply provides, we might also wonder whether the true distinction lies not simply in our ability to utter rational speech, but in the sheer obsessive love of language itself; that is, in philology, the “love of words.”
This abiding passion for words, cultivated fervently from antiquity into modern times—or at least until around 1800, in Turner’s view—encompassed a huge range of subjects as it developed: not only grammar and syntax, but rhetoric, textual editing and commentary, etymology and lexicography, as well as, eventually, anthropology, archeology, biblical exegesis, linguistics, literary criticism, and even law. It comprised three large areas: textual philology, theories about the origins of language, and, much later, comparative studies of different related languages. Two texts predominated: Homer, considered sacred by the ancient Greeks, and the Bible, a contested area of interpretation for both Jews and Christians. As for theories of language origins, these go back to the pre-Socratics and Plato; the controversy was over whether language was divinely given, with words corresponding to the things they named, or arrived at by convention (the nomos versus physis debate). As for comparative studies, these arose in the eighteenth century, largely as a result of Sir William Jones’s discovery of the common Indo-European matrix of most European languages. Encounters with “exotic,” that is, non-European, peoples in the course of the Renaissance voyages of discovery were another important source; here American Indian languages in their variety and complexity offered an especially rich, if perplexing, new field of inquiry.
To follow Turner’s account of all this is akin to witnessing the gradual construction of a vast and intricate palace-complex of the mind, carried out over centuries, with all its towers and battlements, crenellations and cupolas, as well as its shadier and sometimes disreputable alleyways and culs-de-sac, only to witness it disintegrate, by almost imperceptible stages, into fragmented ruins, a kingdom in splinters. The remnants of that grand complex, its shards and tottering columns, as it were, are our discrete academic disciplines today with their strict perimeters and narrow confines. To illustrate the difference, take Charles Eliot Norton (1827–1908), one of Turner’s heroes (and the subject of his Liberal Education of Charles Eliot Norton of 2002): Norton was the first professor of art history at Harvard and, indeed, one of the founders of the discipline, but he was also, among many other things, an expert on Dante who “taught and wrote art history as a philologist, an interpreter of texts.” Nowadays a polymath like Norton would not be hired, let alone get tenure, at any American university; he would be viewed as a dubious interloper on others’ turf.
In fact, traditional philology nowadays is less a ruin than the shadow of a ruin; no, even less than that, the vestige of a shadow. Turner acknowledges, and laments, this from the outset; he notes that “many college-educated Americans no longer recognize the word.” He adds that, “for most of the twentieth century, philology was put down, kicked around, abused, and snickered at, as the archetype of crabbed, dry-as-dust, barren, and by and large pointless academic knowledge. Did I mention mind-numbingly boring?” Worse, “it totters along with arthritic creakiness.” With friends like these, we might ask, can philology sink any further into oblivion than it already has? But the unspoken question here—“shall these bones live?”—is one that Turner poses and resolves triumphantly. He breathes life back into philology. There is not a dull page in this long book (and I include here its sixty-five pages of meticulous and sometimes mischievous endnotes). He accomplishes this by setting his account firmly in a detailed if inevitably brisk historical narrative interspersed with vivid cameos of individual scholars, the renowned as well as the notorious, the plainly deranged alongside the truly radiant.
Here I should disclose a distant interest. I once flirted with the idea of devoting myself to philology. I was soon dissuaded by my encounters with philologists in the flesh. The problem was not that they were dry; in fact, their cool, faintly cadaverous aplomb was a distinct relief amid the relentlessly “relevant” atmosphere of the 1960s. Dry but often outrageously eccentric, they were far from being George Eliot’s Casaubon toiling, and making others toil, to leave some small but significant trace in the annals of desiccation. No, it was rather their sheer single-mindedness coupled with a hidden ferocity that gave me pause. When I first met the late Albert Jamme, the renowned epigrapher of Old South Arabian, this Belgian Jesuit startled me by exclaiming at the top of his voice, “I hate my bed!” When I politely suggested that he get a new mattress, he shot back with “No, no! I hate my bed because it keeps me from my texts!” And the undiluted vitriol of Jamme’s opinions of his colleagues (all three of them!), both in conversation and in print, was scary; from him and others I learned that nothing distills venom more quickly than disagreement over a textual reading. At times there was something decidedly otherworldly about other philologists I met. In the 1970s, when I studied at the University of Tübingen and had the good fortune to work with Manfred Ullmann, the great lexicographer of Classical Arabic, he startled me one day by excitedly brandishing a file card on which was written the Arabic word for “clitoris” (bazr) and exclaiming, “Kli-tO-ris! What do ordinary folk know about Kli-tO-ris?” (More than you imagine, I thought.) Needless to say, it was the word—its etymology, its cognates, its morphology—that captivated him.
As for philological single-mindedness, when a celebrated German Assyriologist of my acquaintance (who shall remain nameless) got married, he rose abruptly from the wedding banquet to announce “Jetzt zur Arbeit!” (“Now to work!”) and headed for the door, a volume of cuneiform texts tucked under one arm; only the outraged intervention of his new mother-in-law kept him from leaving. Such anecdotes about philologists—their pugnacity, their obsessiveness, their downright daffiness—could fill a thick volume. Such anecdotes taught me not only that I wasn’t learned enough to become a philologist, I wasn’t unhinged enough either.
by Eric Ormsby, TNC | Read more:
Image: Papyrus of Callimachus's Aetia. via
Driving Mr. Albert
[ed. See also: Cuckoo Spit and Ski Jumps.]
Our world — Earth — was covered with lava, then granite mountains. Oceans formed, a wormy thing crawled from the sea. There were pea-brained brontosauri and fiery meteor showers and gnawing, hairy-backed monsters that kept coming and coming — these furious little stumps, human beings, us. Under the hot sun, we roasted different colors, fornicated, and fought. Full of wonder, we attached words to the sky and the mountains and the water, and claimed them as our own. We named ourselves Homer, Sappho, Humperdinck, and Nixon. We made bewitching sonatas and novels and paintings. Stargazed and built great cities. Exterminated some people. Settled the West. Cooked meat and slathered it with special sauce. Did the hustle. Built the strip mall.
And in the end, after billions of years of evolution, a pink two-story motel rose up on a drag of asphalt in Berkeley, California. The Flamingo Motel. There, a man stepped out onto the balcony in a bright beam of millennial sunlight, holding the original universe in his hands, in a Tupperware container, and for one flickering moment he saw into the future. I can picture this man now: he needs a haircut, he needs some coffee.
But not yet, not before we rewind and start again. Not long ago. In Maine on a bus. In Massachusetts on a train. In Connecticut behind the wheel of a shiny, teal-colored rental car. The engine purrs. I should know, I’m the driver. I’m on my way to pick up an eighty-four-year-old man named Thomas Harvey, who lives in a modest, low-slung 1950s ranch that belongs to his sixty-seven-year-old girlfriend, Cleora. To get there you caroom through New Jersey’s exurbia, through swirls of dead leaves and unruly thickets of oak and pine that give way to well-ordered fields of roan, buttermilk, and black snorting atoms — horses. Harvey greets me at the door, stooped and chuckling nervously, wearing a red-and-white plaid shirt and a solid-blue Pendleton tie that still bears a waterlogged $10 price tag from some earlier decade. He has peckled, blowsy skin runneled with lines, an eagle nose, stubbed yellow teeth, bitten nails, and a spray of white hair as fine as corn silk that shifts with the wind over the bald patches on his head. He could be one of a million beach-bound, black-socked Florida retirees, not the man who, by some odd happenstance of life, possesses the brain of Albert Einstein — literally cut it out of the dead scientist’s head.
Harvey has stoked a fire in the basement, which is dank and dark, and I sit among crocheted rugs and genie bottles of blown glass, Ethiopian cookbooks, and macramé. It has taken me more than a year to find Harvey, and during that time I’ve had a dim, inchoate feeling — one that has increased in luminosity — that if I could somehow reach him and Einstein’s brain, I might unravel their strange relationship, one that arcs across this century and America itself. And now, before the future arrives and the supercomputers of the world fritz out and we move to lunar colonies — before all that hullabaloo — Harvey and I are finally sitting here together.
That day Harvey tells me the story he’s told before — to friends and family and pilgrims — one that has made him an odd celebrity even in this age of odd celebrity. He tells it deliberately, assuming that I will be impressed by it as a testament to the rightness of his actions rather than as a cogent defense of them. “You see,” he says, “I was just so fortunate to have been there. Just so lucky.”
“Fortunate” is one word, “improbable” is another.
by Michael Paterniti, Harpers | Read more:
Image: uncredited
Saturday, March 14, 2015
Geomancy Almanac
It's hard to overestimate the importance of his contributions to the intellectual development of Europe - by the time of his death, the library was one of the most celebrated collections in Europe.
Beyond the Churn
Page-views suggest we crave short, informative text, ‘clickbaited’ with images of half-naked or bleeding bodies – even faster variations on the TV soundbites I once helped locate on tape. The marketplace has something to tell us. But to say that the 24/7, quick-and-dirty news cycle exists because people want it is incomplete logic. Poor people in a blighted urban food desert – devoid of garden or grocer but rife with Burger Kings and Dairy Queens – don’t consume fast food every day because their bodies are hungry for French fries. They consume it because they’re hungry for food. Its lack of nutrient density often means they have to keep eating – creating a confusing 21st century conundrum for the evolved human body: to be at once obese and malnourished.
In a media landscape of zip-fast reports as stripped of context as a potato might be stripped of fibre, most news stories fail to satiate. We don’t consume news all day because we’re hungry for information – we consume it because we’re hungry for connection. That’s the confusing conundrum for the 21st century heart and mind: to be at once over-informed and grasping for understanding.
I’ve begun college writing classes by asking students to name the first image that comes to mind at the term ‘atomic bomb’. Nearly every answer, every time, is ‘mushroom cloud’. They’ve seen that black-and-white photograph in high-school textbooks alongside brief paragraphs about mass death. But they can’t remember much about it. Who dropped the nuclear weapon? What year? In what country and for what reason? They memorised it all once, but it wasn’t relevant enough to their lives to stick.
Then the students read an excerpt from Hiroshima Diary (1955), the personal account of the Japanese physician Michihiko Hachiya, who in 1945 was enjoying his quiet garden when he saw a flash of light and found himself naked.
‘A large splinter was protruding from a mangled wound in my thigh, and something warm trickled into my mouth,’ Hachiya wrote. ‘My cheek was torn, I discovered as I felt it gingerly, with the lower lip laid wide open. Embedded in my neck was a sizable fragment of glass which I matter-of-factly dislodged, and with the detachment of one stunned and shocked I studied it and my blood-stained hand. Where was my wife?’
As he runs to help himself and others, Hachiya sees people moving ‘like scarecrows, their arms held out from their bodies’ to avoid the pain of burnt flesh touching burnt flesh. His attention to fact befits a man of science, but in rendering the sights, sounds and smells of the bomb’s wake, Hachiya is an artist. He relays the tale chronologically and with little judgment, allowing readers to find their own way to meaning. After reading his account, students look stunned and speak softly. Though generations and continents removed, they recognise Hachiya’s fears as their own; ‘atomic bomb’ has zoomed in from detached concept to on-the-ground reality.
We have the chance now to reach such understandings through digital journalism. Recent years have seen a surge of timely, immersive, nonfiction commissioned and relayed by digital media startups such as Atavist, Narratively, Guernica and many others. Meanwhile, think tanks such as the Nieman Storyboard at Harvard and the Future of Digital Longform project at Columbia are examining the integration of time-honoured story and its exciting new formats.
On journalistic training grounds, a slower joining of reporting and artistry is taking shape in college classrooms: writing programmes in English departments and fine-arts schools increasingly honour non‑fiction as a genre alongside fiction and poetry; such concentrations most often cater to memoir, but a handful of schools now offer robust opportunities in reportage, profile-writing, the essay. However, journalism schools, 15 years after a curriculum shifted beneath my feet, show little sign of seeking the artistic wisdom of creative programmes. The most formative course of my college training – a reporting wringer in which each of us wrote five news pieces per week for the lauded student paper while working all semester on one long, carefully crafted, front-page feature – no longer exists as a core requirement. Yes, today’s multimedia training demands contributed to that curricular decision, but journalism has long feared that ‘creativity’ means ‘making stuff up’ – a self-destructive fear, since a news story without creativity isn’t a story at all.
True story comprises two strands, spiralling: the specific and the universal. The earthly and transcendent, literal and metaphorical, tangible and intangible. The binding agent is the act of storytelling – often by the reliable devices of description, setting, structure, metaphor, character, but always through thoughtful ordering of words, images, sound. When we sever that bridge between objective fact and subjective meaning in the interest of speed or protocol, TV anchors awkwardly interview six-year-old witnesses to shooting rampages, and reporters convey military suicides as tallies in a descending order of deemed significance known as the ‘inverted pyramid’. This approach, though sometimes useful, ultimately desensitises or disturbs us. It fails to match the moment.
by Sarah Smarsh, Aeon | Read more:
Image: via:
Fractured Israel
Deeply divided and foul of mood, Israelis are headed toward what seems like a referendum on their long-serving, silver-tongued prime minister, the hard-line Benjamin Netanyahu.
But with so many of them having despaired of peace talks with the Palestinians, the focus is mostly on Netanyahu's personality, his expense scandals and the soaring cost of living.
And as no candidate is likely to win big in the wild jumble of Israel's political landscape, the outcome of the March 17 election could well be a joint government between Netanyahu and his moderate challenger Isaac Herzog. It's an irony, because the animosities are overwhelming.
Much has changed in the world since Netanyahu first became prime minister in 1996, but Israel remains stuck with the question of what to do with the highly strategic, biblically resonant, Palestinian-populated lands it captured almost a half-century ago.
Israelis know it is their existential issue, but it seems almost too complex for a democracy. After decades of failed peace talks under every sort of government, the whole festering thing has become such a vexation that politicians seem to fear it, and voters look away.
When he called the early election in November, Netanyahu seemed a shoo-in, but somewhere things went wrong. Notorious around the world for American-accented eloquence in the service of a tough stance, he is extraordinarily divisive at home, where he has been prime minister for the past six years, and for nine in total.
His speech last week before the U.S. Congress, urging a tighter deal than he believes is brewing on Iran's nuclear program, was typical: He impressed some Israelis, while infuriating others who sensed a political ploy.
Polls show his nationalist Likud Party running slightly behind Herzog's Labor Party, rebranded the Zionist Union in a bid for nationalist votes. There are scenarios in which Herzog - improbably mild-mannered in a high-decibel land - becomes prime minister. And that would change the music: Herzog is a conciliator genuinely interested in ending the occupation of lands captured in the 1967 war.
Some things to watch for:
Israel is Nearly Ungovernable
Despite its reputation for plucky unity, the country is badly fragmented - and that's reflected in parliament under the proportional representation system.
Combined, the two big parties get far less than half the vote. Then one finds a nationalist party appealing to Russian speakers, another for secular liberals and two for the squeezed middle class. A united list represents the one-fifth of citizens who are Arabs and is itself divided between communist, nationalist and Islamist factions. There are four religious parties, for Jews of European versus Middle Eastern descent and for varying degrees of nationalism.
The schisms are real, reflecting a society so diverse that at times it seems to be flying apart. The discourse is of one's rival destroying the country, through stupidity or evil. A TV debate between the main candidates other than Netanyahu and Herzog quickly degenerated into shouted accusations of fascism, criminality and treason.
by Dan Perry, AP | Read more:
Image: Matanya via Flickr and Wikipedia
Friday, March 13, 2015
Shitphone: A Love Story
Last month my fourth iPhone in six years was, in medical terms, crashing. The screen, which had pulled away from its glue, was behaving strangely. The charging port, no matter how thoroughly I cleaned it, only occasionally took power. Repair would be expensive, especially considering that my contract would be up in about six months. Buying a newer iPhone would mean spending $650 up-front, spending $450 with a new two-year contract or amortizing the price with my carrier’s new early upgrade plan. I felt trapped, as every smartphone owner occasionally does, between two much more powerful entities that take me, an effectively captive chain-buying contract iPhone user, for granted. I began to take offense at the malfunctioning iPhone’s familiarity. Our relationship was strained and decreasingly rational. I was on a trip and away from home for a few weeks, out of sorts and out of climate, slightly unmoored and very impatient.
And so the same stubborn retail-limbic response that prevented me from avoiding this mess in the first place — by buying an AppleCare insurance plan — activated once more, and I placed an order I had been thinking about for months: One BLU Advance 4.0 Unlocked Dual Sim Phone (White), $89.99 suggested retail (but usually listed lower), $76.14 open-box with overnight shipping. 1,829 customer reviews, 4.3 stars. “This isn’t the best phone out there, but it is by far the best phone for only around $80–90,” wrote Amazon reviewer Anne.
Yes, Anne, sounds perfect, let’s do it. Shitphone would be delivered the next day.
I’ve been living happily in an electronics shitworld long enough that I’ve begun evangelizing for it. My last television, a Hisense pulled from the storeroom of a North Carolina Walmart by an employee who didn’t know it was there, is a simple and vibrant LED TV with bad sound. My stereo is built around an Insignia receiver (Best Buy house label) that powers speakers from a company called Micca ($55.60, 347 customer reviews, 4.7 stars) and it sounds… pretty good! My router is made by TP-LINK ($18.99, 575 customer reviews, 4.3 stars), and keeps me online about as reliably as my Netgear did. I bought my mother a neat little Baytek Bluetooth speaker for $26.99 (54 customer reviews, 4.6 stars), which she loves, even if its programmed voice draws out the “ess” in “Connected SuccESSSfully” in a way that suggests a strictly mechanical familiarity with English. I impulse-buy off-brand earbuds with mixed results and derive great satisfaction from discovering good ones. I bought a used Macbook for work but use a $280 Chromebook whenever possible. It is my aspirational shit-top, and I consider this situation a failure. Mainstream laptops are far enough along in the commoditization process that, for the purposes of browsing and emailing and chatting and dealing with photos — a near-totality of my computer usage — almost anything available will do. The top-selling laptop on Amazon is a $250 Asus that runs Windows 8. It would suit my needs nicely. We’ll see what happens when the Mac dies.
Off-brand electronics are, like their branded counterparts, interesting for a limited amount of time: The highest-end branded version of a product offers a chance to taste the luxurious future of technology; the shitworld version lets you preview a more practical future — the future most of the global electronics-buying public will actually enjoy. Take the Jambox, a small and dazzlingly expensive prism of speakers and battery and wireless radios that plays music from nearly any phone at a respectable volume; it was a sensation for a few years after its introduction in 2010. By 2013, off-brand speakers were making major inroads online, allowing shoppers like me to feel like we were somehow gaming the system (this requires, of course, a narrow and convenient definition of “the system”). The year after, Amazon, America’s primary portal to consumer electronics shitworld (and recently one of its proud citizens), had released its own version of the Jambox concept under the pointedly dull name “AmazonBasics Portable Bluetooth Speaker” (731 customer reviews, 4.4 stars). Soon, basic picnic-ready wireless speakers may become an undistinguished, disposable part of many consumers’ lifestyles, like USB sticks or batteries — a point at which branded versions are a minority sustained only by those consumers looking for Bluetooth speakers that signify luxury, style, or taste. Off-brand electronics are alluring only when they feel like deals — that is, only as long as there are more popular branded alternatives which they can imply are overpriced. They’re interesting, in other words, for as long as they make the buyer feel smart.
Homogeneity is what you should expect from shitphones, because it’s what you get. Buy a BLU or an Unnecto or a Posh Mobile or a Prestigio or a Yezz or an InFocus or an iRulu and you can expect similar boxes of parts, sorted by price point. The guts will likely be low-to-mid-range hardware from MediaTek, which is mostly invisible in the U.S. market but is the second largest supplier of mobile phone systems-on-chip in the world. This means the phones will share not just specifications but quirky features: even some of the cheapest phones let you use two SIM cards, for example, and many of them have an FM radio. The shells, which must fit around MediaTek’s core technology, stick to a few basic styles: For the bigger phones, seamless rectangles of a particular thickness; for the small ones, round-back thick-bezel handsets that evoke the iPhone 3G. For cheaper phones, you’ll get Android 4.2x. For a few more dollars, Android 4.4x. As is the case with major brands, shitphones with the latest version of Android, 5.0, are just becoming available.
Premium branded phones are the culmination of decades of research in wireless technology, computing, materials, and design. Shitphones are the culmination of decades of research in wireless technology, computing, materials, and design — minus a year or two. Shitphones are generally not actually shitty. They are, if you isolate them from the distorting effect of highly competitive preference-driven smartphone retail and marketing, the absence of which helps keep them so cheap, marvels of engineering and execution, assembled with precision and care and able to accomplish tasks that a half-dozen years ago would have been inconceivable for a portable device. iPhones are really just shitphones from the future.
This is what commoditization feels like: genuine novelty rapidly reduced to thankless anonymity. The iPhone and its high-end competitors benefited for years as the most visible and functional instance of a profoundly and globally novel new product. To be one of the pioneering brands at the beginning of a new technological era — to sell someone his first magical hand device — is to apply a temporary multiplier to everything from brand recognition to loyalty to profit. But their brands, now, are just temporary protective spells cast against the inevitable. As we approach the 10-year anniversary of the release of the iPhone, the category it blew up is starting to feel familiar. By now, an American who purchased a smartphone on contract in 2009 has not just bought but discarded at least three devices, and as smartphones mature, that is the reality of their use: to improve is to disappear just a little more. Aren’t we all just emailing and Instagramming and Facebooking and Snapchatting and WhatsApping and Angry-Birdsing anyway?
by John Herrman, Medium | Read more:
Image: Hocus-Focus/Getty
Protection Without a Vaccine
Last month, a team of scientists announced what could prove to be an enormous step forward in the fight against H.I.V.
Scientists at Scripps Research Institute said they had developed an artificial antibody that, once in the blood, grabbed hold of the virus and inactivated it. The molecule can eliminate H.I.V. from infected monkeys and protect them from future infections.
But this treatment is not a vaccine, not in any ordinary sense. By delivering synthetic genes into the muscles of the monkeys, the scientists are essentially re-engineering the animals to resist disease. Researchers are testing this novel approach not just against H.I.V., but also Ebola, malaria, influenza and hepatitis.
“The sky’s the limit,” said Michael Farzan, an immunologist at Scripps and lead author of the new study.
Dr. Farzan and other scientists are increasingly hopeful that this technique may be able to provide long-term protection against diseases for which vaccines have failed. The first human trial based on this strategy — called immunoprophylaxis by gene transfer, or I.G.T. — is underway, and several new ones are planned.
“It could revolutionize the way we immunize against public health threats in the future,” said Dr. Gary J. Nabel, the chief scientific officer of Sanofi, a pharmaceutical company that produces a wide range of vaccines.
Whether I.G.T. will succeed is still an open question. Researchers still need to gauge its safety and effectiveness in humans. And the prospect of genetically engineering people to resist infectious diseases may raise concerns among patients.
“The reality is we are touching third rails, and so it’s going to take some explanation,” said Dr. David Baltimore, a Nobel Prize recipient and virologist at Caltech who is testing I.G.T. against a number of diseases.
Conventional vaccines prompt the immune system to learn how to make antibodies by introducing it to weakened or dead pathogens, or even just their molecular fragments. Our immune cells produce a range of antibodies, some of which can fight these infections.
In some cases, these antibodies provide strong defenses. Vaccinations against diseases such as smallpox and measles can lead to almost complete protection.
But against other diseases, conventional vaccines often fail to produce effective antibodies. H.I.V., for example, comes in so many different strains that a vaccine that can protect against one will not work against others.
I.G.T. is altogether different from traditional vaccination. It is instead a form of gene therapy. Scientists isolate the genes that produce powerful antibodies against certain diseases and then synthesize artificial versions. The genes are placed into viruses and injected into human tissue, usually muscle.
The viruses invade human cells with their DNA payloads, and the synthetic gene is incorporated into the recipient’s own DNA. If all goes well, the new genes instruct the cells to begin manufacturing powerful antibodies.
by Carl Zimmer, NY Times | Read more:
Image: John Hersey
Thursday, March 12, 2015
Opportunity Gap
[ed. See also: Richer and Poorer.]
The event is billed as a lecture on a new book of social science. But the speaker visiting Cambridge’s Lesley University this Monday night sounds like a political candidate on the hustings. Robert D. Putnam — Harvard political scientist, trumpeter of community revival, consultant to the last four presidents — is on campus to sound an alarm. "What I want to talk to you about," he tells some 40 students and academics, is "the most important domestic challenge facing our country today. I want to talk about a growing gap between rich kids and poor kids."
Two decades ago, Putnam shot to fame with "Bowling Alone," an essay-turned-best-selling-book that amassed reams of data to chart the collapse of American community. His research popularized a concept known as "social capital." The framework, used in fields like sociology and economics, refers to social networks and the norms of reciprocity and trust they create. "He’s one of the most important social scientists of our time," says Gary King, director of Harvard’s Institute for Quantitative Social Science, because of his ability to blend scientific rigor with popular appeal.
But tonight Putnam sets the science aside, at least to start. He opens his Cambridge talk with a story. It’s about two young women, Miriam and Mary Sue. Their families, he says, both originally came from the same small Ohio town. Miriam, who had well-educated parents, went off to an ultra-elite East Coast university. Mary Sue, the daughter of high-school graduates who never held a steady job, ended up on a harrowing path of abuse, distrust, and isolation.
Removing a sheet of paper from a folder — the notes from an interview that one of his researchers conducted with Mary Sue — Putnam reads off the particulars. Mary Sue’s parents split up when she was 5. Her mother turned to stripping, leaving Mary Sue alone and hungry for days. Her only friend until she went to school was a mouse who lived in her apartment. Caught selling pot at 16, she spent time in juvenile detention, flunked out of high school, and got a diploma online. Mary Sue wistfully recalls the stillborn baby she had at 13. She now dates an older man with two infants born to two different mothers.
"To Mary Sue," Putnam says, "this feels like the best she can hope for."
He pauses. "Honestly, it’s hard for me to tell the story."
Miriam is Putnam’s own granddaughter. Mary Sue (a pseudonym) is almost exactly the same age. And the backdrop to this tale is the professor’s hometown of Port Clinton, once an egalitarian community where people looked after all kids, regardless of their backgrounds. In Putnam’s telling, Port Clinton now symbolizes the class disparities that have swept the country in recent decades — a "split-screen American nightmare" where the high-school lot contains one kid’s BMW parked beside the jalopy in which a homeless classmate lives.
"In Port Clinton now, nobody thinks of Mary Sue as one of ‘our kids,’" Putnam says. "They think she’s somebody else’s kid — let them worry about her."
At 74, the professor is embarking on a campaign with one basic goal: getting educated Americans to worry about the deteriorating lives of kids like Mary Sue. It kicks into high gear this week with the publication of his new book, Our Kids: The American Dream in Crisis (Simon & Schuster). The basic argument: To do well in life, kids need family stability, good schools, supportive neighbors, and parental investment of time and money. All of those advantages are increasingly available to the Miriams of the world and not to the Mary Sues, a disparity that Putnam calls “the opportunity gap.”
Ever since the Occupy Wall Street movement emerged in 2011, much public discussion has focused on the unequal distribution of income in today’s America. Traditionally, though, that kind of inequality hasn’t greatly concerned Americans, Putnam writes. What they have worried about is a related, though distinct, issue: equality of opportunity and social mobility. Across the political spectrum, Putnam writes, Americans historically paid lots of attention to the prospects for the next generation: "whether young people from different backgrounds are, in fact, getting onto the ladder at about the same place and, given equal merit and energy, are equally likely to scale it."
by Marc Parry, Chronicle of Higher Education | Read more:
Image: Bryce Vickmark
Ultrasound Therapies Target Brain Cancers and Alzheimer’s Disease
From imaging babies to blasting apart kidney stones, ultrasound has proved to be a versatile tool for physicians. Now, several research teams aim to unleash the technology on some of the most feared brain diseases.
The blood-brain barrier, a tightly packed layer of cells that lines the brain's blood vessels, protects it from infections, toxins, and other threats but makes the organ frustratingly hard to treat. A strategy that combines ultrasound with microscopic blood-borne bubbles can briefly open the barrier, in theory giving drugs or the immune system access to the brain. In the clinic and the lab, that promise is being evaluated.
This month, in one of the first clinical tests, Todd Mainprize, a neurosurgeon at the University of Toronto in Canada, hopes to use ultrasound to deliver a dose of chemotherapy to a malignant brain tumor. And in some of the most dramatic evidence of the technique's potential, a research team reports this week in Science Translational Medicine that they used it to rid mice of abnormal brain clumps similar to those in Alzheimer's disease, restoring lost memory and cognitive functions. If such findings can be translated from mice to humans, “it will revolutionize the way we treat brain disease,” says biophysicist Kullervo Hynynen of the Sunnybrook Research Institute in Toronto, who originated the ultrasound method. (...)
Safely and temporarily opening the blood-brain barrier is a long-sought goal in medicine. About a decade ago, Hynynen began exploring a strategy combining ultrasound and microbubbles. The premise is that ultrasound causes such bubbles to expand and contract, jostling the cells forming the blood-brain barrier and making it slightly leaky.
That could help cancer physicians such as Mainprize deliver chemotherapy drugs into the brain. Hynynen also hypothesized that the brief leakage would rev up the brain's inflammatory response against β amyloid—the toxic protein that clumps outside neurons in Alzheimer's and may be responsible for killing them. Disposing of such debris is normally the role of the microglia, a type of brain cell. But previous studies have shown that when β amyloid forms clumps in the brain, it “seems to overwhelm microglia,” Bacskai says. Exposing the cells to antibodies that leak in when the blood-brain barrier is breached could spur them to “wake up and do their jobs,” he says. Some antibodies in blood may also bind directly to the β-amyloid protein and flag the clumps for destruction. (...)
This week, neuroscientist Jürgen Götz of the Queensland Brain Institute in St. Lucia, Australia, and his Ph.D. student Gerhard Leinenga report that they have built on Hynynen and Aubert's protocol, using a different mouse model of Alzheimer's. After injecting these animals with a solution of microscopic bubbles, they scanned an ultrasound beam in a zigzag pattern across each animal's entire skull, rather than focusing on discrete areas as others have done. After six to eight weekly treatments, the team tested the rodents on three different memory tasks. Alzheimer's mice in the control group, which received microbubble injections but no stimulation, showed no improvement. Mice whose blood-brain barriers had been made permeable, in contrast, saw “full restoration of memory in all three tasks,” Götz says.
by Emily Underwood, Science | Read more:
Image: Emmanuel Thevenot/Lab of Isabelle Aubert/Sunnybrook Research Institute
Wednesday, March 11, 2015
Barack and Me
I couldn’t sleep for shit.
Friday night had turned into Saturday morning, and I was staring at the ceiling in a hotel room in Washington, D.C., only blocks from the White House, recovering from my third hot shower of the night. The fever that had developed from an 11-hour Amtrak trip down the East Coast a day earlier hadn’t left my body, and the only way I knew how to deal with the chills was to take hot showers and hope for the best.
But that wasn’t the real reason for my insomnia and this body-zapping panic: I would be speaking to the president of the United States of America in 10 hours. On Air Force One. Before his speech in Selma, Alabama, on the Edmund Pettus Bridge to commemorate the 50th anniversary of the march that took place on what became known as Bloody Sunday.
On Monday, I had received an email from the White House offering “a potential opportunity with President Obama in the very near future.” The opportunity was to be a part of a roundtable of five journalists who would have 30 minutes to talk with the president.
As the week progressed, however, the stakes grew. With the date inching closer, the details became clearer. On Friday, the final email:
Following brief remarks at the top of the roundtable, the President will take a question from each participant.
As in one question. Zero room for error. My editor’s response was as blunt as it was true: “Better make it count.”
Lying in bed, staring at the ceiling, just a sunrise away from that one question, I still wasn’t sure what I was going to ask. I had written one question down, but I wasn’t convinced it was the question. And I was running out of time.
All I could think about was why I was here. Or, more accurately, what brought me here. I knew what I’d wanted to ask for years. I just didn’t know if, when the time came, I’d actually ask it.
I've been chasing Barack Obama for more than a decade. I watched his 2004 speech at the Democratic National Convention while deep in the throes of college application essays. It was a speech that I needed to hear, a speech that felt as if it were specifically for me. Before I knew it, I was working on Capitol Hill in 2007 as a college intern for Senator Ted Kennedy, where I would occasionally catch a glimpse of the then-Senator Obama traveling on the underground monorail from the Senate to the Capitol floor. I reveled in the excitement when he announced his candidacy for president that February. I volunteered for that campaign in 2008 in New Hampshire, taking to the streets of New England with a megaphone following his victory, and hoping to one day be a part of his actual staff. In 2011, looking for a way out of graduate school, I applied for a job as a blogger in his reelection campaign — and I almost got that job, before then not getting that job.
My current job — the second attempt to drop out of graduate school — is a result of not getting a job with the Obama campaign. Living in New York is a result of not getting a job with the Obama administration. And my slow crawl away from politics and toward writing is a direct result of chasing — and never quite catching — the world that surrounds President Obama. The chase has felt never-ending. But in a way, I owe everything to the chase.
The chase was on my mind as I rode in a car to Joint Base Andrews on Saturday morning. It’s what I thought about on the shuttle to Air Force One with the four other journalists, Charles Blow from the New York Times, Zerlina Maxwell from Essence, White House correspondent April Ryan from the American Urban Radio Networks, and DeWayne Wickham, a USA Today columnist and dean of Morgan State University’s School of Global Journalism & Communication. And that chase is what I thought of when we arrived at Andrews and stood before Air Force One. (...)
Air Force One is a plane on PEDs. It rumbles with such force that we were told attempting to record the roundtable on our personal devices would be a challenge, and that the stenographer would have a transcript of proceedings ready for us later that day. In terms of size, it appeared to have swallowed two double-aisled commercial airliners. But it’s still a plane. It has wheels, it has wings, it takes off, and it goes into the air.
There were stairs everywhere, and so many rooms. And many of these rooms had doors. The floor plan felt like a labyrinth of narrow walkways, leading to beige area after beige area. Both times I left my part of the cabin by myself, I got lost. And even though I was never lost for more than 10 seconds, I immediately felt that let-go-of-your-mom’s-hand-at–Six Flags lost, scared that I was either going to get in trouble or never find my way back.
Every now and then, during a break in conversation, I’d retreat to my notebook and stare at my question. I’d written a second one focused on Selma, but it wasn’t right. It was a cop-out question. A question anyone could have asked. So I knew what I had to do. I needed to change a word here, move a sentence there, make it more concise, but I knew it was exactly the kind of question I’d been brought here to ask.
by Rembert Browne, Grantland | Read more:
Image: Rembert Browne