Thursday, March 5, 2015

The Sound of Maybe

Harvard University’s Holden Chapel always struck me as the proper home of a crypt-keeper: an appropriate place to die, or at least to remain dead. The forty-foot brick structure has no front windows. Above its entrance are four stone bucrania, bas-relief ox-skull sculptures of the sort that pagans once placed on their temples to keep away evil spirits. In 1895, when William James was asked to address a crowd of young Christian men at the Georgian chapel, it was already more than 150 years old, a fitting setting for the fifty-three-year-old philosopher to contemplate what he had come to believe was the profoundest of questions: “Is life worth living?” (...)

James, by then quite famous as the father of American psychology and philosophy, was one of them—unequivocally “sick-souled,” as he put it. He knew something that the faithful often miss, that believing in life’s worth, for many people, is the struggle of a lifetime. He’d overdosed on chloral hydrate in the 1870s just “for the fun of it,” as he wrote to his brother Henry, to see how close he could come to the morgue without actually going there. James was not alone in his curiosity. A decade later, his colleague and founder of the Society for Psychical Research, Edmund Gurney, took the experiment with life and death too far, testing what turned out to be a fatal dose of chloroform. In response to Gurney’s death, James wrote to his brother once again. “[This death] make[s] what remains here strangely insignificant and ephemeral, as if the weight of things, as well as the numbers, was all on the other side.” The other side. As in: No. Life is not worth living.

“No,” as it turns out, is an answer that has much to recommend it in a place like Holden Chapel. Religious services were moved out of the building in the middle of the eighteenth century and, for the next hundred years, it served as a chemistry lab and classroom for the nascent Harvard Medical School, where cadavers were dissected. “The Gross Clinic,” painted by Thomas Eakins in 1875, gives some idea about the nature of surgery at the time. In it, several doctors perform an operation on a child, working without gloves as their patient’s insides fester in the open air. The patient’s mother, meanwhile, sits nearby in horror, covering her face in a futile attempt to escape what James understood all too well: at the end of the existential day, we are a bunch of smelly carcasses. (When Holden was renovated in 1999, workers discovered skeletal remains in the basement.) James would have been aware of the chapel’s gory medical history as he pondered life’s worth with the YMCA.

The aging philosopher was not there to affirm the sunny convictions of these Christian acolytes. But he’d also managed to avoid suicide for more than half a century. By the end of the evening at Holden, somewhere between Pollyanna optimism and utter nihilism, James discovered the answer to his question: Maybe. Maybe life is worth living. “It depends,” he said, “on the liver.” (...)

When James suggested that life’s worth depends on the liver, he tapped into this particular, peculiar spiritual history. Most scholars continue to take James to mean that forming existential meaning is, in a very real sense, up to each of us, that our individual wills are the determining factor in making meaning in a world that continually threatens it. This is all well and good, but it doesn’t go quite far enough. For some people, their livers portend truly awful circumstances, and no amount of effort can see them through. When asked the question—“Is life worth living?”—some livers are naturally inclined toward, and then can finally give, but one answer: “No.” James wasn’t resigned to this fact, but he also wasn’t blind to it. The worth of life remains conditional on factors both within and beyond one’s control. The appropriate response to this existential situation is not, at least for James, utter despair, but rather the repeated, ardent, yearning attempt to make good on life’s tenuous possibilities. The risk of life—that it turns out to be wholly meaningless—is real, but so too is the reward: the ever-present chance to be partially responsible for its worth.

by John Kaag, Harper’s |  Read more:
Image: John Kaag

Wednesday, March 4, 2015


Joel Meyerowitz, Wild Flowers, 1983
via:

Marching One By One

During his three controversial terms as mayor of New York, Michael Bloomberg launched all-out public-health crusades against some of his constituents’ favorite vices, including cigarettes, trans fats, and sodas. Perhaps he should have declared war on loneliness, instead. Scientists have repeatedly found that people who lack—or believe that they lack—close social connections have significantly higher mortality rates than those who find themselves surrounded by friends. One 2010 meta-analysis suggested that social isolation may be more dangerous than obesity and just as deadly as smoking. Loneliness and isolation appear to alter hormone levels, immune responses, and gene expression, and to increase the risk for a variety of ailments, including heart disease, depression, high blood pressure, and Alzheimer’s.

Humans are not the only creatures that benefit from a little companionship; isolation can also enfeeble rats, mice, pigs, rabbits, squirrel monkeys, starlings, and parrots. And it takes a particular toll on Camponotus fellah, a species of carpenter ant. According to a new study by researchers at the University of Lausanne, in Switzerland, worker ants that live alone have one-tenth the life span of those that live in small groups. Scientists have known for decades that social insects fare poorly when separated from their colonies, and the Swiss group, in the course of their ongoing experiments with carpenter ants, had observed this effect firsthand. “We realized that, when we isolated them, many of them would die quickly, even if we provided them with food and water,” Laurent Keller, the biologist who led the research team, told me.

Keller and his colleagues weren’t initially sure why the ants were dying, but they had recently developed a tool that they thought might solve the puzzle: an automated system for tracking ant movements. They printed unique patterns on tiny squares of white paper and glued them, like bar codes, to the backs of their carpenter ants. A camera snapped high-resolution photos as the ants skittered around, and a software program used the images to calculate the position and orientation of each ant. The system wasn’t foolproof. (“First, some ant species are difficult to mark because they remove or chew the tags,” the researchers wrote. “Second some species are difficult to track because individuals sit on top of each other thereby obscuring the tags.”) But the tags allowed the researchers to easily collect billions of data points on the active, industrious insects. The system revealed, for instance, that ants are restless workers, shifting careers—from nurses to cleaners to foragers—as they age.

Keller’s team thought that their automated tracker might help explain why isolated ants died so quickly and how the ants’ behavior changed when their social ties had been severed. And so the researchers set out to create some lonely ants. They assembled isolation chambers, small plastic boxes containing food, water, and a capsule that was designed to serve as a nest. Then they plucked a few hundred unlucky workers out of their colonies, glued bar codes to their backs, and introduced them to their new homes. The ants were assigned to the boxes in different combinations—some moved in utterly alone, while others were housed in pairs, in groups of ten, or with several squirming larvae. Then the insects went back to their lives, and the scientists waited for them to die. As their final moments approached, the ants would begin to shake. “Finally, they lay down and stop moving,” Akiko Koto, a co-author of the new study, said.

The effect of isolation was dramatic. The ants that lived in groups of ten survived for about sixty-six days, on average. The solitary ants died after just six and a half. (Ants that lived with larvae or in pairs had intermediate life spans, averaging twenty-two and twenty-nine days, respectively.) When the researchers analyzed the movement data, they discovered that the companionless ants were hyperactive, spending huge amounts of time roaming around the plastic box, especially near the walls. During the first day alone, the lonesome ants walked twice as far as those that lived in groups of ten. This ceaseless locomotion is likely a stress response, Keller said, though Koto painted a more poignant picture. “I think it’s somehow expected,” she told me. “In a natural condition, in the park or in a forest, if the ants lose their colonies, they’ll try to find their mother colonies.” The ants, in other words, may have been looking for their families.
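
[ed. Out of curiosity, a rough sketch of the kind of movement analysis described above. This is not the Lausanne group's actual code, and the log format (ant_id, day, x, y) is assumed; it simply totals how far each tagged ant walks per day, the sort of number behind a comparison like "walked twice as far":]

import csv
import math
from collections import defaultdict

def daily_distances(path):
    """Sum how far each tagged ant walks per day from a position log.

    Assumes a CSV with columns ant_id, day, x, y, ordered by frame time.
    (Hypothetical format -- the real tracking system's output may differ.)
    """
    last_pos = {}                # (ant_id, day) -> previous (x, y)
    totals = defaultdict(float)  # (ant_id, day) -> distance walked so far
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            key = (row["ant_id"], int(row["day"]))
            x, y = float(row["x"]), float(row["y"])
            if key in last_pos:
                px, py = last_pos[key]
                totals[key] += math.hypot(x - px, y - py)
            last_pos[key] = (x, y)
    return totals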

by Emily Anthes, New Yorker |  Read more:
Image: Ellen Surrey

The Fall of the Hipster Brand

In 2005, Daniel Bernardo, a newly minted art history graduate, decided to start his post-college life in Williamsburg, Brooklyn. He was lucky enough to find an affordable loft apartment above a skateboard shop with four other housemates who were all busy pursuing their dreams: one owned the shop below, two were artists, and the other was an aspiring fashion designer.

Bernardo took a stab at starting a business selling handmade shoes. Tourists (or Manhattanites) who strayed into his neighborhood certainly considered the community somewhat eccentric; Bernardo, for instance, was rocking a Dali mustache.

"We were all heavily influenced by the ethics of the punk scene, which was about going against the norm," he recalls. "I wore vintage vests, ties, and suspenders, partly because I liked them, but partly because it was what I could afford at the time. Some of my friends were into the skater look, wearing old band t-shirts and cutoff jeans, others wore flannel and had beards."

After moving to the neighborhood, Bernardo began to notice something odd: the offbeat outfits he and his friends were wearing started appearing on the racks of chain stores. You could pop into Urban Outfitters to buy a faded T-shirt from an obscure band, a pocket watch, and Victorian pinstripe trousers. You could walk out of American Apparel with a leotard and '80s spandex shorts.

This aesthetic was labeled "hipster" and could be bought straight off the shelf. Bernardo was mystified: "Our outfits were all about being true to ourselves and not spending too much money on clothes, but suddenly you could pick up an outfit that looked vintage but that had never been worn before."

By 2006, American Apparel's hipster-centric aesthetic became so popular that the company was snapped up for $382.5 million by an investment firm, which promptly took it public. That year Pabst Blue Ribbon beer, once described as the "nectar of the hipster gods," overtook Coors in volume sales. Meanwhile, Urban Outfitters saw a 44% increase in profits every year between 2003 and 2006.

Before long, so-called hipster looks started showing up at big-box retailers. You could buy skinny jeans, ironic tees, and beanies from Target, along with toilet paper and washing detergent—you could even buy a fixed gear bike from Walmart for $150. "Suddenly, trying to look different became mainstream, which was hard to wrap our minds around," says Dylan Leavitt, star of Dylan’s Vintage Minute.

A decade later, the tide has turned. American Apparel hasn't posted a profit since 2009 and has in fact hemorrhaged so much money that it was nearly delisted from the New York Stock Exchange. Pabst was just sold to a Russian beverage company in a move widely interpreted as a sign that the beer is losing its appeal. Urban Outfitters sales have steadily declined since 2011. What went wrong? Did these companies fail to anticipate the next "hipster trend?" Or has the hipster era finally come to an end, rendering these kinds of companies irrelevant?

"This is the conversation I have almost every weekend," says Katharine Brandes, the LA-based creative director of women’s fashion label BB Dakota. "My friends and I sometimes like to observe hipsters in their natural habitat, which around here is Silver Lake." Brandes, 28, grew up surrounded by the hipster-gone-mainstream aesthetic. Her theory is that these brands haven't been perceptive to how youth culture has changed over the last decade.

The earliest hipsters, she argues, generally rejected mainstream fashion and beliefs. In later years, however, hipster culture became more about a particular ethical lifestyle. "Urban Outfitters and American Apparel did a good job of commodifying the earliest kind of hipster—the Vice-reading, PBR-swilling, trucker-hat-wearing twentysomething," she says. "But they have not successfully evolved to meet the needs of the new wave that cares about authenticity and buys products from brands that have a strong ethical core."

by Elizabeth Segran, Racked |  Read more:
Image: Getty

My Prescribed Life

When I was diagnosed with a mental illness at age eleven, my doctor—soft-spoken and straitlaced, with a thick moustache—explained that I was suffering from a chemical imbalance in my brain. At the time, the medical community believed that depressive disorders were primarily caused by a deficiency of monoamine neurotransmitters, which help regulate moods and general happiness. The new drugs were thought to stall reabsorption of serotonin into nerve cells and allow it to linger instead in the synapse between cells, where over time it may help transmit the “happy” message. It sounds logical, but even today scientists have unanswered questions. Since patients were seeing improvement, doctors figured, they must have been low in monoamines.

And I did improve. A week or so after I started taking the drugs, my family took our new dog, Chester, for a walk in the Beach neighbourhood of Toronto. As we strolled along the boardwalk, he snatched a peanut butter sandwich clean out of a little boy’s hand. “It was the first time we’d seen you smile in three months,” my dad told me recently. Over the next few months, my sadness lifted. My toxic thoughts became more manageable. The nausea dissipated, and I started eating again. I switched schools, made new friends, and slowly, cautiously, returned to normal life. The drugs buoyed me up from cataclysmic depression to relatively stable, low-boiling anxiety.

The drugs came with some obsessive-compulsive side effects. I picked the skin on my face and limbs like a crystal meth addict, burrowing beneath the flesh to create welts and sores. I also developed a facial tic, wherein I’d scrunch up my nose until it ached. (Even when I think about it now, the urge to scrunch is hard to resist.) My doctor prescribed even more drugs: clomipramine and imipramine, two remnants of the old tricyclic class of antidepressants.

My parents weighed the potential risks of this cocktail against what they could only imagine would happen if I continued along my destructive path. My doctor, meanwhile, hoped that by staving off anxiety and depression at an early age, my brain might lay down permanent pathways to combat patterns of dysfunctional thinking. Really, no one knew what to expect. “We were totally in the wilderness about child and adolescent psychiatry,” explains Dean Elbe, a clinical pharmacist who specializes in child psychiatry at the BC Children’s Hospital, in Vancouver. “All we could do was extrapolate from what we saw in adults.”

We still don’t really know anything about the long-term effects of antidepressants on adolescent development. There have been no long-term studies, partly because of logistics, and because the US Food and Drug Administration and Health Canada require pharmaceutical companies to prove only that their medications are better than placebos over the short term. One study found that extended exposure to fluoxetine (the generic form of Prozac) in some young mice led to anxiety-like behavior recurring when the mice were exposed again to the drug as adults.

But the most profound and pervasive fear—among adults, among parents of affected kids, and among those kids themselves—is that antidepressants will somehow alter the patient’s essential identity. In Is It Me or My Meds?: Living with Antidepressants, Boston-based sociologist David A. Karp explains, “Psychotropic drugs have as their purpose the transformation of people’s moods, feelings, and perceptions. These drugs act on—perhaps even create—people’s consciousness and, therefore, have profound effects on the nature of their identities.”

This kind of thinking taps into one of the paramount tensions of mental illness: the blurred line between pathology and personality. How much of what we feel is the result of an illness? How much is our so-called identity? Though scientists still believe neurotransmitter deficiencies affect mental health, they’ve also implicated a whack of other factors, including the environment, stress, and physical health. Together with the mysterious chemical voodoo taking place in our bodies, those factors spin in an endless feedback loop that makes it impossible to source mental illness. It stands to reason that drugs meant to treat an imbalance in serotonin might bleed beyond their reach, altering who we are as long as we’re on them.

by Emily Landau, The Walrus |  Read more:
Image: Adrian Forrow

Tuesday, March 3, 2015


Robert Rauschenberg, Fashion (Tribute 21), 1994
via:

Why We’re All Becoming Independent Contractors

[ed. FedEx? Really? I had no idea this is how they operated.]

FedEx calls its drivers independent contractors.

Yet FedEx requires them to pay for the FedEx-branded trucks they drive, the FedEx uniforms they wear, and the FedEx scanners they use—along with insurance, fuel, tires, oil changes, meals on the road, maintenance, and workers’ compensation insurance. If they get sick or need a vacation, they have to hire their own replacements. They’re even required to groom themselves according to FedEx standards.

FedEx doesn’t tell its drivers what hours to work, but it tells them what packages to deliver and organizes their workloads to ensure they work between 9.5 and 11 hours every working day.

If this isn’t “employment,” I don’t know what the word means.

In 2005, thousands of FedEx drivers in California sued the company, alleging they were in fact employees and that FedEx owed them the money they shelled out, as well as wages for all the overtime work they put in.

Last summer, a federal appeals court agreed, finding that under California law—which looks at whether a company “controls” how a job is done along with a variety of other criteria to determine the real employment relationship—the FedEx drivers were indeed employees, not independent contractors.

by Robert Reich, Granta |  Read more:
Image: WSJ

All You Have Eaten


Over the course of his or her lifetime, the average person will eat 60,000 pounds of food, the weight of six elephants. The average American will drink over 3,000 gallons of soda. He will eat about 28 pigs, 2,000 chickens, 5,070 apples, and 2,340 pounds of lettuce.

How much of that will he remember, and for how long, and how well? You might be able to tell me, with some certainty, what your breakfast was, but that confidence most likely diminishes when I ask about two, three, four breakfasts ago—never mind this day last year. (...)

For breakfast on January 2, 2008, I ate oatmeal with pumpkin seeds and brown sugar and drank a cup of green tea. I know because it’s the first entry in a food log I still keep today. I began it as an experiment in food as a mnemonic device. The idea was this: I’d write something objective every day that would cue my memories into the future—they’d serve as compasses by which to remember moments.

Andy Warhol kept what he called a “smell collection,” switching perfumes every three months so he could reminisce more lucidly on those three months whenever he smelled that period’s particular scent. Food, I figured, took this even further. It involves multiple senses, and that’s why memories that surround food can come on so strong.

What I’d like to have is a perfect record of every day. I’ve long been obsessed with this impossibility, that every day be perfectly productive and perfectly remembered. What I remember from January 2, 2008 is that after eating the oatmeal I went to the post office, where an old woman was arguing with a postal worker about postage—she thought what she’d affixed to her envelope was enough and he didn’t. (...)

Last spring, as part of a NASA-funded study, a crew of three men and three women with “astronaut-like” characteristics spent four months in a geodesic dome in an abandoned quarry on the northern slope of Hawaii’s Mauna Loa volcano. For those four months, they lived and ate as though they were on Mars, only venturing outside to the surrounding Mars-like, volcanic terrain, in simulated space suits. The Hawaii Space Exploration Analog and Simulation (HI-SEAS) is a four-year project: a series of missions meant to simulate and study the challenges of long-term space travel, in anticipation of mankind’s eventual trip to Mars. This first mission’s focus was food.

Getting to Mars will take roughly six to nine months each way, depending on trajectory; the mission itself will likely span years. So the question becomes: How do you feed astronauts for so long? On “Mars,” the HI-SEAS crew alternated between two days of pre-prepared meals and two days of dome-cooked meals of shelf-stable ingredients. Researchers were interested in the answers to a number of behavioral issues: among them, the well-documented phenomenon of menu fatigue (when International Space Station astronauts grow weary of their packeted meals, they tend to lose weight). They wanted to see what patterns would evolve over time if a crew’s members were allowed dietary autonomy, and given the opportunity to cook for themselves (“an alternative approach to feeding crews of long term planetary outposts,” read the open call).

Everything was hyper-documented. Everything eaten was logged in painstaking detail: weighed, filmed, and evaluated. The crew filled in surveys before and after meals: queries into how hungry they were, their first impressions, their moods, how the food smelled, what its texture was, how it tasted. They documented their time spent cooking; their water usage; the quantity of leftovers, if any. The goal was to measure the effect of what they ate on their health and morale, along with other basic questions concerning resource use. How much water will it take to cook on Mars? How much water will it take to wash dishes? How much time is required; how much energy? How will everybody feel about it all? I followed news of the mission devoutly. (...)

Their crew of six was selected from a pool of 700 candidates. Kate is a science writer, open-water swimmer, and volleyball player. When I asked her what “astronaut-like” means and why she was picked, she said it’s some “combination of education, experience, and attitude”: a science background, leadership experience, an adventurous attitude. An interest in cooking was not among the requirements. The cooking duties were divided from the get-go; in the kitchen, crew members worked in pairs. On non-creative days they’d eat just-add-water, camping-type meals: pre-prepared lasagna, which surprised Kate by being not terrible; a thing called “kung fu chicken” that Angelo described as “slimy” and less tolerable; a raspberry crumble dessert that’s a favorite among backpackers (“That was really delicious,” Kate said, “but still you felt weird about thinking it was too delicious”). The crew didn’t eat much real astronaut food—astronaut ice cream, for example—because real astronaut food is expensive. (...)

When I look back on my meals from the past year, the food log does the job I intended more or less effectively. I can remember, with some clarity, the particulars of given days: who I was with, how I was feeling, the subjects discussed. There was the night in October I stress-scarfed a head of romaine and peanut butter packed onto old, hard bread; the somehow not-sobering bratwurst and fries I ate on day two of a two-day hangover, while trying to keep things light with somebody to whom, the two nights before, I had aired more than I meant to. There was the night in January I cooked “rice, chicken stirfry with bell pepper and mushrooms, tomato-y Chinese broccoli, 1 bottle IPA” with my oldest, best friend, and we ate the stirfry and drank our beers slowly while commiserating about the most recent conversations we’d had with our mothers.

Reading the entries from 2008, that first year, does something else to me: it suffuses me with the same mortification as if I’d written down my most private thoughts (that reaction is what keeps me from maintaining a more conventional journal). There’s nothing particularly incriminating about my diet, except maybe that I ate tortilla chips with unusual frequency, but the fact that it’s just food doesn’t spare me from the horror and head-shaking that comes with reading old diaries. Mentions of certain meals conjure specific memories, but mostly what I’m left with are the general feelings from that year. They weren’t happy ones. I was living in San Francisco at the time. A relationship was dissolving.

It seems to me that the success of a relationship depends on a shared trove of memories. Or not shared, necessarily, but not incompatible. That’s the trouble, I think, with parents and children: parents retain memories of their children that the children themselves don’t share. My father’s favorite meal is breakfast and his favorite breakfast restaurant is McDonald’s, and I remember—having just read Michael Pollan or watched Super Size Me—self-righteously not ordering my regular egg McMuffin one morning, and how that actually hurt him.

When a relationship goes south, it’s hard to pinpoint just where or how—especially after a prolonged period of it heading that direction. I was at a loss with this one. Going forward, I didn’t want not to be able to account for myself. If I could remember everything, I thought, I’d be better equipped; I’d be better able to make proper, comprehensive assessments—informed decisions. But my memory had proved itself unreliable, and I needed something better. Writing down food was a way to turn my life into facts: if I had all the facts, I could keep them straight. So the next time this happened I’d know exactly why—I’d have all the data at hand.

by Rachel Khong, Lucky Peach |  Read more:
Image: Jason Polan

New Professor


[ed. Professor of Physics! Congratulations, Nate. I'm so proud of you for achieving your goal.
Love, Dad.]
Image via:
[See also: Maybe the hardest nut for a new scientist to crack: finding a job.]

Monday, March 2, 2015

The Troubled History Of The Foreskin

Circumcision has been practised for millennia. Right now, in America, it is so common that foreskins are somewhat rare, and may become more so. A few weeks before the protests, the Centers for Disease Control and Prevention (CDC) had suggested that healthcare professionals talk to men and parents about the benefits of the procedure, which include protection from some sexually transmitted diseases, and the risks, which the CDC describes as low. But as the protesters wanted drivers to know, there is no medical consensus on this issue. Circumcision isn't advised for health reasons in Europe, for instance, because the benefits remain unclear. Meanwhile, Western organisations are paying for the circumcision of millions of African men in an attempt to rein in HIV – a campaign that critics say is also based on questionable evidence.

Men have been circumcised for thousands of years, yet our thinking about the foreskin seems as muddled as ever. And a close examination of this muddle raises disturbing questions. Is this American exceptionalism justified? Should we really be funding mass circumcision in Africa? Or by removing the foreskins of men, boys and newborns, are we actually committing a violation of human rights?

The tomb of Ankhmahor, a high-ranking official in ancient Egypt, is situated in a vast burial ground just outside Cairo. A picture of a man standing upright is carved into one of the walls. His hands are restrained, and another figure kneels in front of him, holding a tool to his penis. Though there is no definitive explanation of why circumcision began, many historians believe this relief, carved more than four thousand years ago, is the oldest known record of the procedure.

The best-known circumcision ritual, the Jewish ceremony of brit milah, is also thousands of years old. It survives to this day, as do others practised by Muslims and some African tribes. But American attitudes to circumcision have a much more recent origin. As medical historian David Gollaher recounts in his book Circumcision: A History of the World's Most Controversial Surgery, early Christian leaders abandoned the practice, realising perhaps that their religion would be more attractive to converts if surgery wasn't required. Circumcision disappeared from Christianity, and the secular Western cultures that descended from it, for almost two thousand years.

Then came the Victorians. One day in 1870, a New York orthopaedic surgeon named Lewis Sayre was asked to examine a five-year-old boy suffering from paralysis of both legs. Sayre was the picture of a Victorian gentleman: three-piece suit, bow tie, mutton chops. He was also highly respected, a renowned physician at Bellevue Hospital, New York's oldest public hospital, and an early member of the American Medical Association.

After the boy's sore genitals were pointed out by his nanny, Sayre removed the foreskin. The boy recovered. Believing he was on to something big, Sayre conducted more procedures. His reputation was such that when he praised the benefits of circumcision – which he did in the Transactions of the American Medical Association and elsewhere until he died in 1900 – surgeons elsewhere followed suit. Among other ailments, Sayre discussed patients whose foreskins were tightened and could not retract, a condition known as phimosis. Sayre declared that the condition caused a general state of nervous irritation, and that circumcision was the cure.

His ideas found a receptive audience. To Victorian minds many mental health issues originated with the sexual organs and masturbation. The connection had its roots in a widely read 18th-century treatise entitled Onania, or the Heinous Sin of Self-Pollution, and All Its Frightful Consequences, in Both Sexes, Considered. With Spiritual and Physical Advice to Those Who Have Already Injur'd Themselves By This Abominable Practice. The anonymous author warned that masturbation could cause epilepsy, infertility, "a wounded conscience" and other problems. By 1765 the book was in its 80th printing.

Later puritans took a similar view. Sylvester Graham associated any pleasure with immorality. He was a preacher, health reformer and creator of the graham cracker. Masturbation turned one into "a confirmed and degraded idiot", he declared in 1834. Men and women suffering from otherwise unlabelled psychiatric issues were diagnosed with masturbatory insanity; treatments included clitoridectomies for women, circumcision for men.

Graham's views were later taken up by another eccentric but prominent thinker on health matters: John Harvey Kellogg, who promoted abstinence and advocated foreskin removal as a cure. (He also worked with his brother to invent the cornflake.) "The operation should be performed by a surgeon without administering anesthetic," instructed Kellogg, "as the brief pain attending the operation will have a salutary effect upon the mind, especially if it be connected with the idea of punishment."

Counter-examples to Sayre's supposed breakthrough could be found in operating theatres across America. Attempts to cure children of paralysis failed. Men, one can assume, continued to masturbate. It mattered not. The circumcised penis came to be seen as more hygienic, and cleanliness was a sign of moral standards. An 1890 journal identified smegma as "infectious material". A few years later, a book for mothers – Confidential Talks on Home and Child Life, by a member of the National Temperance Society – described the foreskin as a "mark of Satan". Another author described parents who did not circumcise their sons at an early age as "almost criminally negligent".

by Jessica Wapner, io9 | Read more:
Image: Flickr user Mararie

Medicating Women’s Feelings

Women are moody. By evolutionary design, we are hard-wired to be sensitive to our environments, empathic to our children’s needs and intuitive of our partners’ intentions. This is basic to our survival and that of our offspring. Some research suggests that women are often better at articulating their feelings than men because as the female brain develops, more capacity is reserved for language, memory, hearing and observing emotions in others.

These are observations rooted in biology, not intended to mesh with any kind of pro- or anti-feminist ideology. But they do have social implications. Women’s emotionality is a sign of health, not disease; it is a source of power. But we are under constant pressure to restrain our emotional lives. We have been taught to apologize for our tears, to suppress our anger and to fear being called hysterical.

The pharmaceutical industry plays on that fear, targeting women in a barrage of advertising on daytime talk shows and in magazines. More Americans are on psychiatric medications than ever before, and in my experience they are staying on them far longer than was ever intended. Sales of antidepressants and antianxiety meds have been booming in the past two decades, and they’ve recently been outpaced by an antipsychotic, Abilify, that is the No. 1 seller among all drugs in the United States, not just psychiatric ones.

As a psychiatrist practicing for 20 years, I must tell you, this is insane.

At least one in four women in America now takes a psychiatric medication, compared with one in seven men. Women are nearly twice as likely to receive a diagnosis of depression or anxiety disorder as men are. For many women, these drugs greatly improve their lives. But for others they aren’t necessary. The increase in prescriptions for psychiatric medications, often by doctors in other specialties, is creating a new normal, encouraging more women to seek chemical assistance. Whether a woman needs these drugs should be a medical decision, not a response to peer pressure and consumerism.

The new, medicated normal is at odds with women’s dynamic biology; brain and body chemicals are meant to be in flux. To simplify things, think of serotonin as the “it’s all good” brain chemical. Too high and you don’t care much about anything; too low and everything seems like a problem to be fixed. (...)

The most common antidepressants, which are also used to treat anxiety, are selective serotonin reuptake inhibitors (S.S.R.I.s) that enhance serotonin transmission. S.S.R.I.s keep things “all good.” But too good is no good. More serotonin might lengthen your short fuse and quell your fears, but it also helps to numb you, physically and emotionally. These medicines frequently leave women less interested in sex. S.S.R.I.s tend to blunt negative feelings more than they boost positive ones. On S.S.R.I.s, you probably won’t be skipping around with a grin; it’s just that you stay more rational and less emotional. Some people on S.S.R.I.s have also reported less of many other human traits: empathy, irritation, sadness, erotic dreaming, creativity, anger, expression of their feelings, mourning and worry.

by Julie Holland, NY Times |  Read more:
Image: Christelle Enault

American Democracy is Doomed

America's constitutional democracy is going to collapse.

Some day — not tomorrow, not next year, but probably sometime before runaway climate change forces us to seek a new life in outer-space colonies — there is going to be a collapse of the legal and political order and its replacement by something else. If we're lucky, it won't be violent. If we're very lucky, it will lead us to tackle the underlying problems and result in a better, more robust, political system. If we're less lucky, well, then, something worse will happen.

Very few people agree with me about this, of course. When I say it, people generally think that I'm kidding. America is the richest, most successful country on earth. The basic structure of its government has survived contested elections and Great Depressions and civil rights movements and world wars and terrorist attacks and global pandemics. People figure that whatever political problems it might have will prove transient — just as happened before. (...)

The breakdown of American constitutional democracy is a contrarian view. But it's nothing more than the view that rather than everyone being wrong about the state of American politics, maybe everyone is right. Maybe Bush and Obama are dangerously exceeding norms of executive authority. Maybe legislative compromise really has broken down in an alarming way. And maybe the reason these complaints persist across different administrations and congresses led by members of different parties is that American politics is breaking down.

The perils of presidential democracy

To understand the looming crisis in American politics, it's useful to think about Germany, Japan, Italy, and Austria. These are countries that were defeated by American military forces during the Second World War and given constitutions written by local leaders operating in close collaboration with occupation authorities. It's striking that even though the US Constitution is treated as a sacred text in America's political culture, we did not push any of these countries to adopt our basic framework of government.

This wasn't an oversight.

In a 1990 essay, the late Yale political scientist Juan Linz observed that "aside from the United States, only Chile has managed a century and a half of relatively undisturbed constitutional continuity under presidential government — but Chilean democracy broke down in the 1970s."

The exact reasons why are disputed among scholars — in part because you can't just randomly assign different governments to people. One issue here is that American-style systems are much more common in the Western Hemisphere and parliamentary ones are more common elsewhere. Latin-American countries have experienced many episodes of democratic breakdown, so distinguishing Latin-American cultural attributes from institutional characteristics is difficult.

Still, Linz offered several reasons why presidential systems are so prone to crisis. One particularly important one is the nature of the checks and balances system. Since both the president and the Congress are directly elected by the people, they can both claim to speak for the people. When they have a serious disagreement, according to Linz, "there is no democratic principle on the basis of which it can be resolved." The constitution offers no help in these cases, he wrote: "the mechanisms the constitution might provide are likely to prove too complicated and aridly legalistic to be of much force in the eyes of the electorate."

In a parliamentary system, deadlocks get resolved. A prime minister who lacks the backing of a parliamentary majority is replaced by a new one who has it. If no such majority can be found, a new election is held and the new parliament picks a leader. It can get a little messy for a period of weeks, but there's simply no possibility of a years-long spell in which the legislative and executive branches glare at each other unproductively.

But within a presidential system, gridlock leads to a constitutional trainwreck with no resolution. The United States's recent government shutdowns and executive action on immigration are small examples of the kind of dynamic that's led to coups and putsches abroad.

There was, of course, the American exception to the problems of the checks-and-balances system. Linz observed on this score: "The uniquely diffuse character of American political parties — which, ironically, exasperates many American political scientists and leads them to call for responsible, ideologically disciplined parties — has something to do with it."

For much of American history, in other words, US political parties have been relatively un-ideological and un-disciplined. They are named after vague ideas rather than specific ideologies, and neither presidents nor legislative leaders can compel back-bench members to vote with them. This has often been bemoaned (famously, a 1950 report by the American Political Science Association called for a more rigorous party system) as the source of problems. It's also, according to Linz, helped avert the kind of zero-sum conflicts that have torn other structurally similar democracies apart. But that diffuse party structure is also a thing of the past.

by Matthew Yglesias, Vox | Read more:
Image: Pete Souza

Christopher Thompson, The Letter
via:

Gerrymandering Explained


[ed. See also: This report that Republican lawmakers from Arizona have filed an appeal with the Supreme Court (which will be heard today) against the state's voter-approved independent redistricting commission for creating the districts of U.S. House members. A decision striking down the commission probably would doom a similar system in neighboring California, and could affect districting commissions in 11 other states.]

Gerrymandering -- drawing political boundaries to give your party a numeric advantage over an opposing party -- is a difficult process to explain. If you find the notion confusing, check out the chart above -- adapted from one posted to Reddit this weekend -- and wonder no more.
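
[ed. A toy illustration of the arithmetic, with made-up numbers rather than the chart's: the same statewide 60/40 split can produce either a sweep for the majority party or a win for the minority party, depending only on where the district lines fall.]

# Toy model: 50 voters, 60% Blue / 40% Red, carved into 5 districts of 10.
# The statewide vote never changes; only the district lines do.

def seats(districts):
    """Count the districts each party wins (simple majority of its voters)."""
    wins = {"B": 0, "R": 0}
    for d in districts:
        wins["B" if d.count("B") > len(d) / 2 else "R"] += 1
    return wins

def tally(districts):
    """Statewide totals, as a sanity check that only the lines changed."""
    flat = [v for d in districts for v in d]
    return flat.count("B"), flat.count("R")

# Map 1: voters spread evenly -- every district is 6 Blue / 4 Red.
even_map = [["B"] * 6 + ["R"] * 4 for _ in range(5)]

# Map 2: "pack and crack" -- cram Blue voters into two packed districts,
# leaving Red with narrow majorities in the other three.
packed_map = [["B"] * 10, ["B"] * 10,
              ["B"] * 3 + ["R"] * 7,
              ["B"] * 3 + ["R"] * 7,
              ["B"] * 4 + ["R"] * 6]

assert tally(even_map) == tally(packed_map) == (30, 20)
print(seats(even_map))    # {'B': 5, 'R': 0} -- Blue sweeps
print(seats(packed_map))  # {'B': 2, 'R': 3} -- Red takes a majority of seats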

by Christopher Ingraham, Washington Post | Read more:
Image: N8theGr8

Sunday, March 1, 2015

Back Dorm Boys


[ed. I've been digging around in the Music archives and found this Back Dorm Boys video that I hadn't seen in a long time. They even have their own Wikipedia entry now. Still makes me smile. A few other videos after this.]
[ed. Repost]

Jenny O.

[ed. Repost]