Wednesday, May 15, 2013


François Perrier, The Sacrifice of Iphigenia, 1632-33.
Oil on canvas, 213 x 154 cm
Musée des Beaux-Arts, Dijon
via:

Welcome, Robot Overlords. Please Don't Fire Us?

Increasingly, then, robots will take over more and more jobs. And guess who will own all these robots? People with money, of course. As this happens, capital will become ever more powerful and labor will become ever more worthless. Those without money—most of us—will live on whatever crumbs the owners of capital allow us.

This is a grim prediction. But it's not nearly as far-fetched as it sounds. Economist Paul Krugman recently remarked that our long-standing belief in skills and education as the keys to financial success may well be outdated. In a blog post titled "Rise of the Robots," he reviewed some recent economic data and predicted that we're entering an era where the prime cause of income inequality will be something else entirely: capital vs. labor.

Until a decade ago, the share of total national income going to workers was pretty stable at around 70 percent, while the share going to capital—mainly corporate profits and returns on financial investments—made up the other 30 percent. More recently, though, those shares have started to change. Slowly but steadily, labor's share of total national income has gone down, while the share going to capital owners has gone up. The most obvious effect of this is the skyrocketing wealth of the top 1 percent, due mostly to huge increases in capital gains and investment income.

In the economics literature, the increase in the share of income going to capital owners is known as capital-biased technological change (CBTC). Let's take a layman's look at what that means.

The question we want to answer is simple: If CBTC is already happening—not a lot, but just a little bit—what trends would we expect to see? What are the signs of a computer-driven economy? First and most obviously, if automation were displacing labor, we'd expect to see a steady decline in the share of the population that's employed.

Second, we'd expect to see fewer job openings than in the past. Third, as more people compete for fewer jobs, we'd expect to see middle-class incomes flatten in a race to the bottom. Fourth, with consumption stagnant, we'd expect to see corporations stockpile more cash and, fearing weaker sales, invest less in new products and new factories. Fifth, as a result of all this, we'd expect to see labor's share of national income decline and capital's share rise.

These trends are the five horsemen of the robotic apocalypse, and guess what? We're already seeing them, and not just because of the crash of 2008. They started showing up in the statistics more than a decade ago. For a while, though, they were masked by the dot-com and housing bubbles, so when the financial crisis hit, years' worth of decline was compressed into 24 months. The trend lines dropped off the cliff.

How alarmed should we be by this? In one sense, a bit of circumspection is in order. The modern economy is complex, and most of these trends have multiple causes. The decline in the share of workers who are employed, for example, is partly caused by the aging of the population. What's more, the financial crisis has magnified many of these trends. Labor's share of income will probably recover a bit once the economy finally turns up.

But in another sense, we should be very alarmed. It's one thing to suggest that robots are going to cause mass unemployment starting in 2030 or so. We'd have some time to come to grips with that. But the evidence suggests that—slowly, haltingly—it's happening already, and we're simply not prepared for it.

by Kevin Drum, Mother Jones |  Read more:
Illustration by Roberto Parada

Some of My Best Friends Are Germs

I can tell you the exact date that I began to think of myself in the first-person plural — as a superorganism, that is, rather than a plain old individual human being. It happened on March 7. That’s when I opened my e-mail to find a huge, processor-choking file of charts and raw data from a laboratory located at the BioFrontiers Institute at the University of Colorado, Boulder. As part of a new citizen-science initiative called the American Gut project, the lab sequenced my microbiome — that is, the genes not of “me,” exactly, but of the several hundred microbial species with whom I share this body. These bacteria, which number around 100 trillion, are living (and dying) right now on the surface of my skin, on my tongue and deep in the coils of my intestines, where the largest contingent of them will be found, a pound or two of microbes together forming a vast, largely uncharted interior wilderness that scientists are just beginning to map.

I clicked open a file called Taxa Tables, and a colorful bar chart popped up on my screen. Each bar represented a sample taken (with a swab) from my skin, mouth and feces. For purposes of comparison, these were juxtaposed with bars representing the microbiomes of about 100 “average” Americans previously sequenced.

Here were the names of the hundreds of bacterial species that call me home. In sheer numbers, these microbes and their genes dwarf us. It turns out that we are only 10 percent human: for every human cell that is intrinsic to our body, there are about 10 resident microbes — including commensals (generally harmless freeloaders) and mutualists (favor traders) and, in only a tiny number of cases, pathogens. To the extent that we are bearers of genetic information, more than 99 percent of it is microbial. And it appears increasingly likely that this “second genome,” as it is sometimes called, exerts an influence on our health as great and possibly even greater than the genes we inherit from our parents. But while your inherited genes are more or less fixed, it may be possible to reshape, even cultivate, your second genome.

Justin Sonnenburg, a microbiologist at Stanford, suggests that we would do well to begin regarding the human body as “an elaborate vessel optimized for the growth and spread of our microbial inhabitants.” This humbling new way of thinking about the self has large implications for human and microbial health, which turn out to be inextricably linked. Disorders in our internal ecosystem — a loss of diversity, say, or a proliferation of the “wrong” kind of microbes — may predispose us to obesity and a whole range of chronic diseases, as well as some infections. “Fecal transplants,” which involve installing a healthy person’s microbiota into a sick person’s gut, have been shown to effectively treat an antibiotic-resistant intestinal pathogen named C. difficile, which kills 14,000 Americans each year. (Researchers use the word “microbiota” to refer to all the microbes in a community and “microbiome” to refer to their collective genes.) We’ve known for a few years that obese mice transplanted with the intestinal community of lean mice lose weight and vice versa. (We don’t know why.) A similar experiment was performed recently on humans by researchers in the Netherlands: when the contents of a lean donor’s microbiota were transferred to the guts of male patients with metabolic syndrome, the researchers found striking improvements in the recipients’ sensitivity to insulin, an important marker for metabolic health. Somehow, the gut microbes were influencing the patients’ metabolisms.

Our resident microbes also appear to play a critical role in training and modulating our immune system, helping it to accurately distinguish between friend and foe and not go nuts on, well, nuts and all sorts of other potential allergens. Some researchers believe that the alarming increase in autoimmune diseases in the West may owe to a disruption in the ancient relationship between our bodies and their “old friends” — the microbial symbionts with whom we coevolved.

These claims sound extravagant, and in fact many microbiome researchers are careful not to make the mistake that scientists working on the human genome did a decade or so ago, when they promised they were on the trail of cures to many diseases. We’re still waiting. Yet whether any cures emerge from the exploration of the second genome, the implications of what has already been learned — for our sense of self, for our definition of health and for our attitude toward bacteria in general — are difficult to overstate. Human health should now “be thought of as a collective property of the human-associated microbiota,” as one group of researchers recently concluded in a landmark review article on microbial ecology — that is, as a function of the community, not the individual.

Such a paradigm shift comes not a moment too soon, because as a civilization, we’ve just spent the better part of a century doing our unwitting best to wreck the human-associated microbiota with a multifronted war on bacteria and a diet notably detrimental to its well-being. Researchers now speak of an impoverished “Westernized microbiome” and ask whether the time has come to embark on a project of “restoration ecology” — not in the rain forest or on the prairie but right here at home, in the human gut.

by Michael Pollan, NY Times |  Read more:
Image: Hannah Whitaker for The New York Times. Prop stylist: Emily Mullin

Tuesday, May 14, 2013


Photo Copyright Clarissa Bonet, 2012 ‘Student Work’ Winner (PDN Curator).
via:

The Opihi Ohana

Opihi, limpets with a cone-like shell, have been overharvested for more than a century. In 1900, roughly 150,000 pounds of opihi were harvested for commercial sale in Hawaii. By 1944, opihi sales were down to 13,000 pounds. “The fishery had crashed,” says Dr. Chris Bird, a marine ecologist with the Hawaii Institute of Marine Biology (HIMB). Bird explains that the value of opihi is increasing because they're getting harder and harder to find. In 2009, opihi were the fifth most expensive seafood harvested in Hawaiian waters, at $6.80 per pound wholesale, according to NOAA and the Hawaii Division of Aquatic Resources. (...)

Opihi Facts

Hawaii has three endemic species of opihi, and each lives in a different section of the rocky intertidal zone. They're also very specific about what they like—and what they don't.


Opihi Makaiauli

(Cellana exarata) want to be splashed, and don’t mind being dry between tides. The low ribs of their shells are dark and their troughs light. They grow to about two inches across.



Opihi Alinalina

(Cellana sandwichensis) crave constant splash or surge and can't tolerate drying out for long periods. Their shells grow to about two and a half inches across and have a scalloped edge that was used by Hawaiians for shredding coconut meat.



Opihi Koele

(Cellana talcosa) are sometimes submerged and can live in depths of up to 10 feet. Koele are the largest of Hawaiian opihi, growing up to four inches across. Their shells are smooth and thick with a low profile.


by Sheila Sarhangi, Honolulu Magazine |  Read more:
Images: uncredited

This is the Life


[ed. Stumbled on this lyrical essay by Annie Dillard today, written, I think, around 2011.]

Any culture tells you how to live your one and only life: to wit as everyone else does. Probably most cultures prize, as ours rightly does, making a contribution by working hard at work that you love; being in the know, and intelligent; gathering a surplus; and loving your family above all, and your dog, your boat, bird-watching. Beyond those things our culture might specialize in money, and celebrity, and natural beauty. These are not universal. You enjoy work and will love your grandchildren, and somewhere in there you die.

Another contemporary consensus might be: You wear the best shoes you can afford, you seek to know Rome's best restaurants and their staffs, drive the best car, and vacation on Tenerife. And what a cook you are!

Or you take the next tribe's pigs in thrilling raids; you grill yams; you trade for televisions and hunt white-plumed birds. Everyone you know agrees: this is the life. Perhaps you burn captives. You set fire to a drunk. Yours is the human struggle, or the elite one, to achieve... whatever your own culture tells you: to publish the paper that proves the point; to progress in the firm and gain high title and salary, stock options, benefits; to get the loan to store the beans till their price rises; to elude capture, to feed your children or educate them to a feather edge; or to count coup or perfect your calligraphy; to eat the king's deer or catch the poacher; to spear the seal, intimidate the enemy, and be a big man or beloved woman and die respected for the pigs or the title or the shoes. Not a funeral. Forget funeral. A big birthday party. Since everyone around you agrees.  (...)

So the illusion, like the visual field, is complete. It has no holes except books you read and soon forget. And death takes us by storm. What was that, that life? What else offered? If for him it was contract bridge, if for her it was copyright law, if for everyone it was and is an optimal mix of family and friends, learning, contribution, and joy of making and ameliorating, what else is there, or was there, or will there ever be?

What else is a vision or fact of time and the peoples it bears issuing from the mouth of the cosmos, from the round mouth of eternity, in a wide and parti-colored utterance. In the complex weave of this utterance like fabric, in its infinite domestic interstices, the centuries and continents and classes dwell. Each people knows only its own squares in the weave, its wars and instruments and arts, and also the starry sky.

Okay, and then what? Say you scale your own weft and see time's breadth and the length of space. You see the way the fabric both passes among the stars and encloses them. You see in the weave nearby, and aslant farther off, the peoples variously scandalized or exalted in their squares. They work on their projects: they flake spear points, hoe, plant; they kill aurochs or one another; they prepare sacrifices, as we here and now work on our projects. What, seeing this spread multiply infinitely in every direction, would you do differently? No one could love your children more; would you love them less? Would you change your project? To what?

by Annie Dillard, billemory.com |  Read more:
Image: Corn Dog via billemory.com

Stasia Burrington - Droop (2012)
via:

America's First Climate Refugees


[ed. First in a series of articles on the effects of coastal erosion in Alaska. Others include: One Family's Great Escape, An Undeniable Truth, and The At Risk List.]

Sabrina Warner keeps having the same nightmare: a huge wave rearing up out of the water and crashing over her home, forcing her to swim for her life with her toddler son.

"I dream about the water coming in," she said. The landscape in winter on the Bering Sea coast seems peaceful, the tidal wave of Warner's nightmare trapped by snow and several feet of ice. But the calm is deceptive. Spring break-up will soon restore the Ninglick River to its full violent force.

In the dream, Warner climbs on to the roof of her small house. As the waters rise, she swims for higher ground: the village school which sits on 20-foot pilings.

Even that isn't high enough. By the time Warner wakes, she is clinging to the roof of the school, desperate to be saved.

Warner's vision is not far removed from a reality written by climate change. The people of Newtok, on the west coast of Alaska and about 400 miles south of the Bering Strait that separates the state from Russia, are living a slow-motion disaster that will end, very possibly within the next five years, with the entire village being washed away.

The Ninglick River coils around Newtok on three sides before emptying into the Bering Sea. It has steadily been eating away at the land, carrying off 100ft or more some years, in a process moving at unusual speed because of climate change. Eventually all of the villagers will have to leave, becoming America's first climate change refugees.

It is not a label or a future embraced by people living in Newtok. Yup'ik Eskimo have been fishing and hunting by the shores of the Bering Sea for centuries and the villagers reject the notion they will now be forced to run in chaos from ancestral lands.

But exile is undeniable. A report by the US Army Corps of Engineers predicted that the highest point in the village – the school of Warner's nightmare – could be underwater by 2017. There was no possible way to protect the village in place, the report concluded.

If Newtok cannot move its people to the new site in time, the village will disappear. A community of 350 people, nearly all related to some degree and all intimately connected to the land, will cease to exist, its inhabitants scattered to the villages and towns of western Alaska, Anchorage and beyond.

It's a choice confronting more than 180 native communities in Alaska, which are flooding and losing land because of the ice melt that is part of the changing climate. (...)

It became clear by the 1990s that Newtok – like dozens of other remote communities in Alaska – was losing land at a dangerous rate. Almost all native Alaskan villages are located along rivers and sea coasts, and almost all are facing similar peril.

A federal government report found more than 180 other native Alaskan villages – or 86% of all native communities – were at risk because of climate change. In the case of Newtok, those effects were potentially life threatening. (...)

And so after years of poring over reports, the entire community voted to relocate to higher ground across the river. The decision was endorsed by the state authorities. In December 2007, the village held the first public meeting to plan the move.

The proposed new site for Newtok, voted on by the villagers and approved by government planners, lies only nine miles away, atop a high ridge of dark volcanic rock across the river on Nelson Island. On a good day in winter, it's a half-hour bone-shaking journey across the frozen Ninglick river by snowmobile.

But the cost of the move could run as high as $130m, according to government estimates. For the villagers of Newtok, finding the cash, and finding their way through the government bureaucracy, is proving the challenge of their lives.

Five years on from that first public meeting, Newtok remains stuck where it was, the peeling tiles and the broken-down office furniture in the council office grown even shabbier, the dilapidated water treatment plant now shut down as a health hazard, an entire village tethered to a dangerous location by bureaucratic obstacles and lack of funds.

Village leaders hope that this coming summer, when conditions become warm enough for construction crews to get to work, could provide the big push Newtok needs by completing the first phase of basic infrastructure. And the effort needs a push. When the autumn storms blow in, the water rises fast.

by Suzanne Goldenberg, The Guardian |  Read more:
Photograph: Getty Creative/Nasa

Angelina Jolie Has Done Something Extraordinary

[ed. As have many other extraordinary women. By the way, mutations in the so-called 'cancer gene' BRCA1 can be detected with a blood test, and women with a family history of breast or ovarian cancer should consider getting tested.]

Of course, Angelina Jolie is not the first actress to have had a mastectomy, that most medical of terms referring to the removal of at least one of the anatomical attributes that actresses are expected to hoik up for the sake of their career. In fact, off the top of my head, I can name four: Christina Applegate, Olivia Newton-John, Lynn Redgrave and Kathy Bates have all publicly discussed their mastectomies.

Nor is she the first to have a preventative double mastectomy: Sharon Osbourne (not an actress but very much a woman in the public eye) announced only last year that she had one after discovering, as she told Hello! magazine, that she had "the breast-cancer gene".

Yet while Jolie may not be the first, she has done something that is – by any standards – pretty extraordinary and brave, even on top of having a preventative double mastectomy. She is certainly the highest-profile woman to make such an announcement in a long time, and she is arguably one with the most at stake. For a young, beautiful actress to announce that she has had her breasts removed is, as career moves go, somewhat akin to a handsome leading man announcing he is gay, and that is disgusting and ridiculous on both counts. Ultimately, she has challenged not just her own public image but also the wearisome cliche of what makes a woman sexy, and how a woman considered to be sexy talks about her body.

Judging from her clear, calm and plain-speaking article in the New York Times discussing why she elected to undergo a double mastectomy, Jolie views publicising her decision as simply a matter of public service:

"I chose not to keep my story private because there are many women who do not know that they might be living under the shadow of cancer. It is my hope that they, too, will be able to get gene tested, and that if they have a high risk they, too, will know they have strong options," she writes, while acknowledging the issues of financial access that prevent too many women from getting tested and treated. (...)

That breasts do not exist just to turn on other people will not come as a surprise to any sentient adult human being. Nor, it should go without saying but sadly does not, do breasts make the woman. But brutal, mature reality does not generally have much of a place in the fantasy land where the myths of celebrities and public perception intermix. In fact, in this fantasy land of celebrity puffery and tabloid nonsense, Angelina Jolie was, only 24 hours ago, still, in the eyes of the media, the sex-crazed, blood-drinking, man-stealing seductress (albeit one with six children) that she has been pretty much since she came to the public eye decades ago. In fact, only last weekend I read an article – and I'm using that term in the loosest sense – claiming that Jolie was so adamant to have her wedding before Jennifer Aniston's that she and Brad Pitt had already booked a "romantic getaway honeymoon" for themselves. Now we know that, contrary to looking up "sexxxxxy hotels" on the internet while having mind-blowing sexy sex, Pitt and Jolie have actually been otherwise engaged at the Pink Lotus Breast Center, while Jolie was being treated for her double mastectomy. Rarely has the disjunct between celebrity gossip rubbish and the actual truth looked so ridiculously exposed.

by Hadley Freeman, The Guardian |  Read more:
Photo: AP

Monday, May 13, 2013


Philip Barlow, Sea of Glass.
via:

Harry Benson, Little Rock, Arkansas, 1992.
via:

Why I Despise The Great Gatsby

The best advice I ever got about reading came from the critic and scholar Louis Menand. Back in 2005, I spent six months in Boston and, for the fun of it, sat in on a lit seminar he was teaching at Harvard. The week we were to read Gertrude Stein’s notoriously challenging Tender Buttons, one student raised her hand and asked—bravely, I thought—if Menand had any advice about how best to approach it. In response, he offered up the closest thing to a beatific smile I have ever seen on the face of a book critic. “With pleasure,” he replied.

I have read The Great Gatsby five times. The first was in high school; the second, in college. The third was in my mid-twenties, stuck in a remote bus depot in Peru with someone’s left-behind copy. The fourth was last month, in advance of seeing the new film adaptation; the fifth, last week. There are a small number of novels I return to again and again: Middlemarch, The Portrait of a Lady, Pride and Prejudice, maybe a half-dozen others. But Gatsby is in a class by itself. It is the only book I have read so often despite failing—in the face of real effort and sincere intentions—to derive almost any pleasure at all from the experience.

I know how I’m supposed to feel about Gatsby: In the words of the critic Jonathan Yardley, “that it is the American masterwork.” Malcolm Cowley admired its “moral permanence.” T. S. Eliot called it “the first step that American fiction has taken since Henry James.” Lionel Trilling thought Fitzgerald had achieved in it “the ideal voice of the novelist.” That’s the received Gatsby: a linguistically elegant, intellectually bold, morally acute parable of our nation.

I am in thoroughgoing disagreement with all of this. I find Gatsby aesthetically overrated, psychologically vacant, and morally complacent; I think we kid ourselves about the lessons it contains. None of this would matter much to me if Gatsby were not also sacrosanct. Books being borderline irrelevant in America, one is generally free to dislike them—but not this book. So since we find ourselves, as we cyclically do here, in the middle of another massive Gatsby recrudescence, allow me to file a minority report.

The plot of The Great Gatsby, should you need a refresher, is easily told. Nick Carraway, an upstanding young man from the Midwest, moves to New York to seek his fortune in the bond business. He rents a cottage on Long Island, next to a mansion occupied by a man of mysterious origins but manifest wealth: Jay Gatsby, known far and wide for his extravagant parties. Gradually, we learn that Gatsby was born into poverty, and that everything he has acquired—fortune, mansion, entire persona—is designed to attract the attention of his first love: the beautiful Daisy, by chance Nick’s cousin. Daisy loved Gatsby but married Tom Buchanan, who is fabulously wealthy, fabulously unpleasant, and conducting an affair with a married working-class woman named Myrtle. Thanks to Nick, Gatsby and Daisy reunite, but she ultimately balks at the prospect of leaving Tom and, barreling back home in Gatsby’s car, kills Myrtle in a hit-and-run. Myrtle's husband, believing that Gatsby was both the driver and Myrtle’s lover, tracks him to his mansion and shoots him. Finis, give or take some final reflections from Nick.

When this tale was published, in 1925, very few people aside from its author thought it was or would ever become an American classic. Unlike his first book—This Side of Paradise, which was hailed as the definitive novel of its era—The Great Gatsby emerged to mixed reviews and mediocre sales. Fewer than 24,000 copies were printed in Fitzgerald’s lifetime, and some were still sitting in a warehouse when he died, in 1940, at the age of 44. Five years later, the U.S. military distributed 150,000 copies to service members, and the book has never been out of print since. Untold millions of copies have sold, including 405,000 in the first three months of this year.

But sales figures don’t capture the contemporary Gatsby phenomenon. In recent years, the book has been reinvented as a much-admired experimental play (Gatz) and a Nintendo video game—“Grand Theft Auto, West Egg,” as the New York Times dubbed it. This Thursday, Stephen Colbert will host a Gatsby book club; the new movie opens Friday. (Read David Edelstein's review here.) If you need a place to take your date afterward and have $14,999 to spare, you can head to the Trump hotel, which is offering a glamorous “Great Gatsby Package”: three nights in a suite on Central Park West, a magnum of Champagne, cuff links and a tailored suit for men, and, “for the ladies, an Art Deco shagreen and onyx cuff, accompanied by a personal note from Ivanka Trump.” Car insurance is not included.

So Gatsby is on our minds, on our screens, on our credit cards, on top of the Amazon best-seller list. But even in quieter days, we never really forget Fitzgerald’s novel. It is, among other things, a pedagogical perennial, in part for obvious reasons. The book is short, easy to read, and full of low-hanging symbols, the most famous of which really do hang low over Long Island: the green light at the end of Daisy’s dock; the unblinking eyes of Dr. T. J. Eckleburg, that Jazz Age Dr. Zizmor. But the real appeal of the book, one assumes, is what it lets us teach young people about the political, moral, and social fabric of our nation. Which raises the question: To our students, and to ourselves, exactly what kind of Great Gatsby Package are we selling?

by Kathryn Schulz, Vulture |  Read more:
Image via: Wikipedia

Saska Pomeroy, Still Life With Cat, 2013.
via:

Jeremy Parnell, Big Chook (2005). Installed on Tamarama Beach in Sydney.
via:

Liz Brizzi. Mojave Sands, 2013
via:

The Rationalist Way of Death


Rationalists and secularists in the old plain style were very clear about death and dying, or at least they tried to be. “It’s just a nothing,” they would say: “the lights go out and then the curtain falls.” I won’t exist after I die, but then I didn’t exist before I was born, so what’s the big deal? It’s going to happen anyway, so just get over it. We are only forked animals after all, and when the time comes you should give my body to medical science, or burn it and use it as fertiliser; or why not eat it, if you’re hungry, or feed it to the pigs? And for goodness sake, don’t worry about how I died – whether peacefully or in pain – and don’t speculate about my last thoughts, my last sentiments or my last words. Why attach more importance to my dying moments than to any other part of my life? As for the business of seeing the body and saying goodbye, and the trouble and expense of coffins and flowers and funerals: what are they but relics of morbid superstitions that we should have got rid of centuries ago? So no fuss, please: the world belongs to youth and the future, not death and the past: go ahead and have a party if you must, with plenty to drink, but no speeches, nothing maudlin, no tears, nothing that might silence the laughter of children. And I beg you, no memorials of any kind: no stones, no plaques, no shrines, no park benches, no tree-plantings, no dedications: let the memory of who I was die with me.

In practice it has not always been so easy, and those of us who think of ourselves as CORPSES (Children of Rationalist Parents) may find ourselves seriously embarrassed when it comes to carrying out the wishes of our progenitors when they die. Bans on mourning and demands for oblivion are not going to have much effect when we are wracked with grief – when happiness is the last thing we want, when we find ourselves dwelling in remorse and remembrance and will not be comforted. Hence one of the most conspicuous elements in the transformation of rationalism in recent decades: the rise of a burgeoning service industry supplying secular celebrants for humanist funerals, to fill a ritualistic gap that earlier generations would not have wanted to acknowledge.

The decline of hardline rationalism about bereavement may be part of a global social trend towards blubbering sentimentality and public exhibitions of grief: Princess Diana and all that. But there could be something more serious behind it too: a suspicion that the no-nonsense approach to death advocated by pure-minded atheists bears a horrible resemblance to the attitudes that lie behind the great political crimes of the 20th century – Hiroshima and Nagasaki, the massified deaths of two world wars, the millions discarded as obstacles to progress in the Soviet Union and China, and of course the Nazi death camps.

If Holocaust stories are uniquely hard to bear, it is not because they describe suffering, death and humiliation on a bewildering scale, but because of the calculated impersonality and disinterested anonymity with which they were inflicted on their victims. (...)

As far as the old-style rationalists were concerned, any desire to ritualise death and remember the dead was a sign of a failure of nerve, and an inability to grow out of religious indoctrination – especially all that Christian stuff about personal survival, arraignment before a divine judge and consignment to heaven or hell. But in fact Christianity does not speak with one voice when it comes to death and dying. In the gospels of Matthew and Luke, Jesus issued a severe reprimand to a disciple who wanted to give his father a proper funeral: get back to work at once, he said, and “let the dead bury their dead.” The rebuke may seem like an enlightened anticipation of 20th-century rationalism, but it is also perfectly consistent with some main doctrines of Christianity: if the body is just a temporary home for an immortal soul, and a perpetual temptation to sin, then the sooner we shuffle it off the better.

The Egyptians, lacking the assurance of eternal life, had favoured mummification and entombment, at least for the ruling elite, while the Greeks and Romans preferred cremation and a good epitaph, and the Jews went in for speedy burials, usually in communal graves. But the Christians, with their confident expectation of a life after death, had no need for such pagan mumbo-jumbo.

by Jonathan Rée, Rationalist Association |  Read more:
Image: Jessica Chandler

Laptop U

When people refer to “higher education” in this country, they are talking about two systems. One is élite. It’s made up of selective schools that people can apply to—schools like Harvard, and also like U.C. Santa Cruz, Northeastern, Penn State, and Kenyon. All these institutions turn most applicants away, and all pursue a common, if vague, notion of what universities are meant to strive for. When colleges appear in movies, they are verdant, tree-draped quadrangles set amid Georgian or Gothic (or Georgian-Gothic) buildings. When brochures from these schools arrive in the mail, they often look the same. Chances are, you’ll find a Byronic young man reading “Cartesian Meditations” on a bench beneath an elm tree, or perhaps his romantic cousin, the New England boy of fall, a tousle-haired chap with a knapsack slung back on one shoulder. He is walking with a lovely, earnest young woman who apparently likes scarves, and probably Shelley. They are smiling. Everyone is smiling. The professors, who are wearing friendly, Rick Moranis-style glasses, smile, though they’re hard at work at a large table with an eager student, sharing a splayed book and gesturing as if weighing two big, wholesome orbs of fruit. Universities are special places, we believe: gardens where chosen people escape their normal lives to cultivate the Life of the Mind.

But that is not the kind of higher education most Americans know. The vast majority of people who get education beyond high school do so at community colleges and other regional and nonselective schools. Most who apply are accepted. The teachers there, not all of whom have doctorates or get research support, may seem restless and harried. Students may, too. Some attend school part time, juggling their academic work with family or full-time jobs, and so the dropout rate, and time-to-degree, runs higher than at élite institutions. Many campuses are funded on fumes, or are on thin ice with accreditation boards; there are few quadrangles involved. The coursework often prepares students for specific professions or required skills. If you want to be trained as a medical assistant, there is a track for that. If you want to learn to operate an infrared spectrometer, there is a course to show you how. This is the populist arm of higher education. It accounts for about eighty per cent of colleges in the United States.

It is also under extreme strain. In the mid-nineteen-sixties, two economists, William J. Baumol and William G. Bowen, diagnosed a “cost disease” in industries like education, and the theory continues to inform thinking about pressure in the system. Usually, as wages rise within an industry, productivity does, too. But a Harvard lecture hall still holds about the same number of students it held a century ago, and the usual means of increasing efficiency—implementing advances in technology, speeding the process up, doing more at once—haven’t seemed to apply when the goal is turning callow eighteen-year-olds into educated men and women. Although educators’ salaries have risen (more or less) in measure with the general economy over the past hundred years, their productivity hasn’t. The cost disease is thought to help explain why the price of education is on a rocket course, with no levelling in sight.

Bowen spent much of the seventies and eighties as the president of Princeton, after which he joined the Mellon Foundation. In a lecture series at Stanford last year, he argued that online education may provide a cure for the disease he diagnosed almost half a century ago. If overloaded institutions diverted their students to online education, it would reduce faculty, and associated expenses. Courses would become less jammed. Best of all, the élite and populist systems of higher education would finally begin to interlock gears and run as one: the best-endowed schools in the country could give something back to their nonexclusive cousins, streamlining their own teaching in the process. Struggling schools could use the online courses in their own programs, as San José State has, giving their students the benefit of a first-rate education. Everybody wins. At Harvard, I was told, repeatedly, “A rising tide lifts all boats.”

Does it, though? On the one hand, if schools like Harvard and Stanford become the Starbucks and Peet’s of higher education, offering sophisticated branded courses at the campus nearest you, bright students at all levels will have access. But very few of these students will ever have a chance to touch these distant shores. And touch, historically, has been a crucial part of élite education. At twenty, at Dartmouth, maybe, you’re sitting in a dormitory room at 1 a.m. sharing Chinese food with two kids wearing flip-flops and Target jeans; twenty-five years later, one of those kids is running a multibillion-dollar tech company and the other is chairing a Senate subcommittee. Access to “élite education” may be more about access to the élites than about access to the classroom teaching. Bill Clinton, a lower-middle-class kid out of Arkansas, might have received an equally distinguished education if he hadn’t gone to Georgetown, Oxford, and Yale, but he wouldn’t have been President.

Meanwhile, smaller institutions could be eclipsed, or reduced to dependencies of the standing powers. “As a country we are simply trying to support too many universities that are trying to be research institutions,” Stanford’s John Hennessy has argued. “Nationally we may not be able to afford as many research institutions going forward.” If élite universities were to carry the research burden of the whole system, less well-funded schools could be stripped down and streamlined. Instead of having to fuel a fleet of ships, you’d fuel the strongest ones, and let them tug the other boats along.

by Nathan Heller, New Yorker |  Read more:
Illustration by Leo Espinosa.

Jan Versnel - Graphic Designer, 1962
via: