Friday, March 6, 2015
Pot, Kettle, Black
[ed. See also: PETA launches annual campaign against the Iditarod]
The group euthanized 2,454 of its 3,369 cats and dogs, the vast majority of which were "owner surrenders," meaning that they'd been relinquished to the group voluntarily. Just 23 dogs and 16 cats were adopted.
These figures aren't shocking to PETA's long-time critics -- who have for years pointed out the discrepancy between how this prominent animal rights group is perceived and what it actually does -- but they are leading to a renewed call from no-kill advocates to put the shelter out of business.
Here's how long-time PETA critic Nathan Winograd, a well-known shelter reform advocate, recently put it on his Facebook page:
How much money did PETA take in last year from unsuspecting donors who helped pay for this mass carnage? $51,933,001: $50,449,023 in contributions, $627,336 in merchandise sales, and $856,642 in interest and dividends. They finished the year with $4,551,786 more in the bank than they started, after expenses. They did not see fit to use some of that to comprehensively promote animals for adoption or to provide veterinary care for the animals who needed it.
By contrast, the Lynchburg Humane Society, also in Virginia, took in about the same number of animals as PETA but saved 94% and without PETA’s millions. Seagoville Animal Services in Texas took in 1/3 of the numbers (about 700 animals) but only 1/20th of 1% of the amount of money that PETA did, saving 99% of them on a paltry $29,700 budget. In fact, hundreds of cities and towns across America are saving over 90% of the animals and doing so on a fraction of PETA’s wealth.

The Virginia Department of Agriculture and Consumer Services (VDACS) collects and publishes information about how many animals are taken in and what becomes of them, for every public and private shelter, humane society, pound and other sort of animal rescue group in the state.
Indeed, as can be seen in this chart, Virginia as a whole has far lower euthanasia rates. And while PETA says it must euthanize animals because it's an "open-admissions" shelter -- meaning that it will accept any animal brought to it -- other such Virginia shelters, like the Lynchburg Humane Society, present far differently:
The initial figures for PETA's 2014 numbers were obtained via a Freedom of Information Act request filed by Winograd, a leader in the "no-kill" movement, which aims to reduce (or, even better, eliminate) the number of shelter animals that are put down every year.
VDACS spokeswoman Elaine Lidholm told The Huffington Post the figures may be amended before the final report is published online. The numbers, however, are in line with those from previous years -- numbers that have earned the high-profile animal rights group a significant amount of criticism.
by Arin Greenwood, Huffington Post | Read more:
Image: PETA via Wikipedia
Blowing Bubbles
Friday’s strong jobs report has revived an old and tired debate about inflation and unemployment. Rather than celebrating the news that the jobless rate has dropped to 5.5 per cent, the lowest rate since May 2008, investors sold off stocks, based on expectations that the Federal Reserve will soon start raising interest rates to head off the threat of inflation. By noon, the Dow was down almost two hundred points.
That’s nothing to worry about, taken on its own. Stocks rise and fall every day. But the constant focus on the link between inflation and unemployment, which is evident in the minutes of the Federal Reserve’s Federal Open Market Committee and in the media discussion of what the Fed should do next, does present a real danger. It reflects an outdated economic paradigm that, twice in the past twenty years, has misled policy-makers and produced bad policy decisions.
During the late nineteen-nineties, and again in the mid-aughts, the Fed set interest rates based on the supposed threat of inflation. When that threat failed to materialize, it kept rates at low levels for long periods. Cheap credit, in turn, encouraged the development of speculative bubbles and other financial imbalances. And when the bubbles eventually burst, the economy went into recession. But rather than changing its policy framework to prioritize avoiding yet another speculative bust, top Fed policy makers once again committed themselves to focusing on inflation, publishing a target rate of two per cent. Bubble-prevention was delegated to the Fed’s regulatory apparatus.
At the moment, thankfully, the threat of another bubble appears to be contained, despite ultra-low interest rates. Still, it can’t be ignored. Rather than obsessing about inflation, Fed chair Janet Yellen and her colleagues should be seeking to provide as much support as they can to the economy, consistent with preventing bubbles from forming in asset markets such as stocks, bonds, and, especially, real estate. That is where the threat lies, not in rising inflation.
Things used to be different. In the nineteen-seventies and nineteen-eighties, there was a very real danger of a wage-price spiral (in which rising wages and prices become self-reinforcing, pushing inflation upward). In the fall of 1974, the rate of inflation topped twelve per cent; in 1980, it reached almost fifteen per cent. But in today’s globalized and technology-driven economy, workers have little bargaining power, and the prices of many products, such as electronics, have a tendency to fall rather than rise. The last time the inflation rate rose above six per cent was 1990—twenty-five years ago.
When prices are rising at an annual rate of less than two per cent, as they have been for most of the past three years, it’s silly to worry about another wage-price spiral emerging. Still, many people, including some at the Fed, refuse to learn the lessons of history. As the unemployment rate dipped below six per cent last year, Richard Fisher, the president of the Federal Reserve Bank of Dallas, repeatedly warned that wage and price inflation would start to rise. Friday’s jobs report confirmed that it hasn’t happened; average hourly earnings rose by just 0.1 per cent in February. Over the course of the past year, they have risen by two per cent, which is a very modest rate of increase.
The fact that Fisher’s prediction didn’t come true shouldn’t surprise anybody. Economists have never been able to pin down the jobless rate at which inflation takes off—the so-called NAIRU, or Non-Accelerating Inflation Rate of Unemployment. Theoretically, the concept makes sense. Empirically, it’s extremely elusive, because it depends on many other things, such as the rate of productivity growth, tax rates, the labor-force participation rate, and the level of unionization.
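For readers who want the idea in symbols, the textbook version of NAIRU comes from the expectations-augmented Phillips curve. This is a generic formulation added here for illustration, not something taken from Cassidy's article:

```latex
% Expectations-augmented Phillips curve (standard textbook form)
%   \pi_t     : inflation this period
%   \pi_t^{e} : expected inflation (often proxied by last period's inflation)
%   u_t       : the unemployment rate
%   u^{*}     : the NAIRU, the rate at which inflation neither accelerates nor slows
%   \beta > 0 : how strongly labor-market slack moves inflation
\[
  \pi_t = \pi_t^{e} - \beta\,(u_t - u^{*}) + \varepsilon_t
\]
```

In that notation, Cassidy's complaint is that u* is unobservable and drifts with productivity, participation, and unionization, so inflation can stay flat even as measured unemployment falls below older estimates of u*.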
At some point—a point we can’t predict in advance—tighter labor markets will lead to higher wages. And that wouldn’t necessarily be a bad thing. To the contrary, with median household incomes still well below their 2007 levels, and with labor’s share of overall income having fallen to historic lows, American families badly need a raise in pay. Back in the late nineteen-nineties, as the Fed stood pat, wages and incomes grew for an extended period without causing an inflationary spiral. This only happened after a big fall in unemployment, however. In 1999, the jobless rate hit four per cent. But by that stage, unfortunately, the dot-com bubble was also in place.
In short, the real policy dilemma isn’t the trade-off between inflation and unemployment. It’s the trade-off between cheap money and financial instability. How long can the Fed keep interest rates at ultra-low levels without sparking another bubble and all that goes with it?
by John Cassidy, New Yorker | Read more:
Image: Gary Gardiner, Bloomberg via Getty
Larry Fink, N.Y.C Club Cornich, from the Portfolio 82 Photographs 1974 to 1982, 1982; Printed 1983
via:
Slow Love
One-night stands; hooking-up; friends with benefits; living together; pre-nups; civil unions. These all spell caution. But they also spell logic—because our brain is soft-wired to attach slowly to a partner.
The basic circuits for romantic love lie in primitive regions of the brain, near those that orchestrate thirst and hunger. Romantic love is a drive—one of three basic brain systems that evolved to direct our fundamental human mating and breeding strategy. The sex drive predisposes you to seek a range of mating partners; romantic love enables you to focus your mating energy on a single individual at a time; and feelings of attachment incline you to form a pair-bond at least through the infancy of a single child. Feelings of romantic love and deep attachment to a partner emerge in a pattern highly compatible with the spirit of the times—that is, with slow love.
I say this because my colleagues Lucy Brown, Art Aron, Bianca Acevedo, and I have put new lovers into a brain scanner (using functional Magnetic Resonance Imaging, or fMRI) to measure neural activity as these men and women gazed at a photo of their sweetheart. Those who had fallen madly in love within the past eight months showed activity in brain regions associated with energy, focus, motivation, craving, and intense romantic love. But those who had been passionately in love for eight to 17 months also showed activity in an additional brain region associated with feelings of attachment.
Romantic love is like a sleeping cat; it can be awakened at any time. Feelings of deep attachment, however, take time, and they can endure. In another of our studies, led by Acevedo, we put 17 men and women in their 50s and early 60s into the brain scanner. These participants had been married an average of 21 years, and all maintained that they were still madly in love with their spouse. Their brains showed that they were: They were deeply attached as well.
We have even begun to map some of the brain circuitry responsible for this marital happiness. In our study of long-term lovers, those who scored higher on a marital satisfaction questionnaire showed more activity in a brain region linked with empathy, a trait they had most likely retained from their initial passion. Moreover, when psychologist Mona Xu and her team used my original research design to collect similar brain data on 18 young men and women in China, she found that those who were in love long term showed activity in a brain region associated with the ability to suspend negative judgment and over-evaluate a partner, what psychologists call “positive illusions.” Much like men and women who have just fallen madly in love, these long-term partners still swept aside what they didn’t like about their mate and focused on what they adored.
Because feelings of attachment emerge with time, slow love is natural. In fact, rapidly committing to a new partner before the liquor of attachment has emerged may be more risky to long-term happiness than first getting to know a partner via casual sex, friends with benefits and living together. Sexual liberalism has aligned our courtship tactics with our primordial brain circuits for slow love.
by Helen Fisher, Nautilus | Read more:
Image: Before Sunrise, Hulton Archive / handout
Thursday, March 5, 2015
The Sound of Maybe
Harvard University’s Holden Chapel always struck me as the proper home of a crypt-keeper: an appropriate place to die, or at least to remain dead. The forty-foot brick structure has no front windows. Above its entrance are four stone bucrania, bas-relief ox-skull sculptures of the sort that pagans once placed on their temples to keep away evil spirits. In 1895, when William James was asked to address a crowd of young Christian men at the Georgian chapel, it was already more than 150 years old, a fitting setting for the fifty-three-year-old philosopher to contemplate what he had come to believe was the profoundest of questions: “Is life worth living?” (...)
James, by then quite famous as the father of American psychology and philosophy, was one of them—unequivocally “sick-souled,” as he put it. He knew something that the faithful often miss, that believing in life’s worth, for many people, is the struggle of a lifetime. He’d overdosed on chloral hydrate in the 1870s just “for the fun of it,” as he wrote to his brother Henry, to see how close he could come to the morgue without actually going there. James was not alone in his curiosity. A decade later, his colleague and founder of the Society for Psychical Research, Edmund Gurney, took the experiment with life and death too far, testing what turned out to be a fatal dose of chloroform. In response to Gurney’s death, James wrote to his brother once again. “[This death] make[s] what remains here strangely insignificant and ephemeral, as if the weight of things, as well as the numbers, was all on the other side.” The other side. As in: No. Life is not worth living.
“No,” as it turns out, is an answer that has much to recommend it in a place like Holden Chapel. Religious services were moved out of the building in the middle of the eighteenth century and, for the next hundred years, it served as a chemistry lab and classroom for the nascent Harvard Medical School, where cadavers were dissected. “The Gross Clinic,” painted by Thomas Eakins in 1875, gives some idea about the nature of surgery at the time. In it, several doctors perform an operation on a child, working without gloves as their patient’s insides fester in the open air. The patient’s mother, meanwhile, sits nearby in horror, covering her face in a futile attempt to escape what James understood all too well: at the end of the existential day, we are a bunch of smelly carcasses. (When Holden was renovated in 1999, workers discovered skeletal remains in the basement.) James would have been aware of the chapel’s gory medical history as he pondered life’s worth with the YMCA.
The aging philosopher was not there to affirm the sunny convictions of these Christian acolytes. But he’d also managed to avoid suicide for more than half a century. By the end of the evening at Holden, somewhere between Pollyanna optimism and utter nihilism, James discovered the answer to his question: Maybe. Maybe life is worth living. “It depends,” he said, “on the liver.” (...)
When James suggested that life’s worth depends on the liver, he tapped into this particular, peculiar spiritual history. Most scholars continue to take James to mean that forming existential meaning is, in a very real sense, up to each of us, that our individual wills are the determining factor in making meaning in a world that continually threatens it. This is all well and good, but it doesn’t go quite far enough. For some people, their livers portend truly awful circumstances, and no amount of effort can see them through. When asked the question—“Is life worth living?”—some livers are naturally inclined toward, and then can finally give, but one answer: “No.” James wasn’t resigned to this fact, but he also wasn’t blind to it. The worth of life remains conditional on factors both within and beyond one’s control. The appropriate response to this existential situation is not, at least for James, utter despair, but rather the repeated, ardent, yearning attempt to make good on life’s tenuous possibilities. The risk of life—that it turns out to be wholly meaningless—is real, but so too is the reward: the ever-present chance to be partially responsible for its worth.
by John Kaag, Harper's | Read more:
Image: John Kaag
Wednesday, March 4, 2015
Marching One By One
During his three controversial terms as mayor of New York, Michael Bloomberg launched all-out public-health crusades against some of his constituents’ favorite vices, including cigarettes, trans fats, and sodas. Perhaps he should have declared war on loneliness, instead. Scientists have repeatedly found that people who lack—or believe that they lack—close social connections have significantly higher mortality rates than those who find themselves surrounded by friends. One 2010 meta-analysis suggested that social isolation may be more dangerous than obesity and just as deadly as smoking. Loneliness and isolation appear to alter hormone levels, immune responses, and gene expression, and to increase the risk for a variety of ailments, including heart disease, depression, high blood pressure, and Alzheimer’s.
Humans are not the only creatures that benefit from a little companionship; isolation can also enfeeble rats, mice, pigs, rabbits, squirrel monkeys, starlings, and parrots. And it takes a particular toll on Camponotus fellah, a species of carpenter ant. According to a new study by researchers at the University of Lausanne, in Switzerland, worker ants that live alone have one-tenth the life span of those that live in small groups. Scientists have known for decades that social insects fare poorly when separated from their colonies, and the Swiss group, in the course of their ongoing experiments with carpenter ants, had observed this effect firsthand. “We realized that, when we isolated them, many of them would die quickly, even if we provided them with food and water,” Laurent Keller, the biologist who led the research team, told me.
Keller and his colleagues weren’t initially sure why the ants were dying, but they had recently developed a tool that they thought might solve the puzzle: an automated system for tracking ant movements. They printed unique patterns on tiny squares of white paper and glued them, like bar codes, to the backs of their carpenter ants. A camera snapped high-resolution photos as the ants skittered around, and a software program used the images to calculate the position and orientation of each ant. The system wasn’t foolproof. (“First, some ant species are difficult to mark because they remove or chew the tags,” the researchers wrote. “Second some species are difficult to track because individuals sit on top of each other thereby obscuring the tags.”) But the tags allowed the researchers to easily collect billions of data points on the active, industrious insects. The system revealed, for instance, that ants are restless workers, shifting careers—from nurses to cleaners to foragers—as they age.
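The tracking setup described above (printed tags glued to each ant, overhead photos, software that recovers each ant's position and orientation in every frame) follows a familiar pipeline. Here is a minimal Python sketch of how such a movement log might be accumulated and queried; the tag-decoding step is a stub, and every name and field is illustrative rather than taken from the Lausanne group's actual software:

```python
from dataclasses import dataclass
from math import hypot
from typing import List

@dataclass
class Detection:
    """One tagged ant observed in one overhead photo."""
    ant_id: str          # identity decoded from the paper tag on the ant's back
    frame: int           # index of the photo in the sequence
    x: float             # position in the arena (units set by the camera calibration)
    y: float
    heading_deg: float   # orientation recovered from the tag pattern

def decode_tags(image, frame: int) -> List[Detection]:
    """Stand-in for the lab's detection software.

    The real system locates each printed pattern in a high-resolution photo and
    reads out the ant's identity, position, and orientation. That image-processing
    code isn't reproduced here, so this stub simply returns no detections.
    """
    return []

def distance_walked(log: List[Detection], ant_id: str) -> float:
    """Total path length for one ant over the experiment: the kind of per-ant
    statistic behind the finding that isolated ants walked roughly twice as far
    as ants housed in groups of ten."""
    pts = [(d.x, d.y) for d in sorted(log, key=lambda d: d.frame) if d.ant_id == ant_id]
    return sum(hypot(x2 - x1, y2 - y1)
               for (x1, y1), (x2, y2) in zip(pts, pts[1:]))

# Accumulating the movement log is then just a loop over the photo sequence:
#   log = [det for i, image in enumerate(photos) for det in decode_tags(image, i)]
```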
Keller’s team thought that their automated tracker might help explain why isolated ants died so quickly and how the ants’ behavior changed when their social ties had been severed. And so the researchers set out to create some lonely ants. They assembled isolation chambers, small plastic boxes containing food, water, and a capsule that was designed to serve as a nest. Then they plucked a few hundred unlucky workers out of their colonies, glued bar codes to their backs, and introduced them to their new homes. The ants were assigned to the boxes in different combinations—some moved in utterly alone, while others were housed in pairs, in groups of ten, or with several squirming larvae. Then the insects went back to their lives, and the scientists waited for them to die. As their final moments approached, the ants would begin to shake. “Finally, they lay down and stop moving,” Akiko Koto, a co-author of the new study, said.
The effect of isolation was dramatic. The ants that lived in groups of ten survived for about sixty-six days, on average. The solitary ants died after just six and a half. (Ants that lived with larvae or in pairs had intermediate life spans, averaging twenty-two and twenty-nine days, respectively.) When the researchers analyzed the movement data, they discovered that the companionless ants were hyperactive, spending huge amounts of time roaming around the plastic box, especially near the walls. During the first day alone, the lonesome ants walked twice as far as those that lived in groups of ten. This ceaseless locomotion is likely a stress response, Keller said, though Koto painted a more poignant picture. “I think it’s somehow expected,” she told me. “In a natural condition, in the park or in a forest, if the ants lose their colonies, they’ll try to find their mother colonies.” The ants, in other words, may have been looking for their families.
by Emily Anthes, New Yorker | Read more:
Image: Ellen Surrey
The Fall of the Hipster Brand
In 2005, Daniel Bernardo, a newly minted art history graduate, decided to start his post-college life in Williamsburg, Brooklyn. He was lucky enough to find an affordable loft apartment above a skateboard shop with four other housemates who were all busy pursuing their dreams: one owned the shop below, two were artists, and the other was an aspiring fashion designer.
Bernardo took a stab at starting a business selling handmade shoes. Tourists (or Manhattanites) who strayed into his neighborhood certainly considered the community somewhat eccentric; Bernardo, for instance, was rocking a Dali mustache.
"We were all heavily influenced by the ethics of the punk scene, which was about going against the norm," he recalls. "I wore vintage vests, ties, and suspenders, partly because I liked them, but partly because it was what I could afford at the time. Some of my friends were into the skater look, wearing old band t-shirts and cutoff jeans, others wore flannel and had beards."
After moving to the neighborhood, Bernardo began to notice something odd: the offbeat outfits he and his friends were wearing started appearing on the racks of chain stores. You could pop into Urban Outfitters to buy a faded T-shirt from an obscure band, a pocket watch, and Victorian pinstripe trousers. You could walk out of American Apparel with a leotard and '80s spandex shorts.
This aesthetic was labeled "hipster" and could be bought straight off the shelf. Bernardo was mystified: "Our outfits were all about being true to ourselves and not spending too much money on clothes, but suddenly you could pick up an outfit that looked vintage but that had never been worn before."
By 2006, American Apparel's hipster-centric aesthetic became so popular that the company was snapped up for $382.5 million by an investment firm, which promptly took it public. That year Pabst Blue Ribbon beer, once described as the "nectar of the hipster gods," overtook Coors in volume sales. Meanwhile, Urban Outfitters saw a 44% increase in profits every year between 2003 and 2006.
Before long, so-called hipster looks started showing up at big-box retailers. You could buy skinny jeans, ironic tees, and beanies from Target, along with toilet paper and washing detergent—you could even buy a fixed gear bike from Walmart for $150. "Suddenly, trying to look different became mainstream, which was hard to wrap our minds around," says Dylan Leavitt, star of Dylan’s Vintage Minute.
A decade later, the tide has turned. American Apparel hasn't posted a profit since 2009 and has in fact hemorrhaged so much money that it was nearly delisted from the New York Stock Exchange. Pabst was just sold to a Russian beverage company in a move widely interpreted as a sign that the beer is losing its appeal. Urban Outfitters sales have steadily declined since 2011. What went wrong? Did these companies fail to anticipate the next "hipster trend?" Or has the hipster era finally come to an end, rendering these kinds of companies irrelevant?
"This is the conversation I have almost every weekend," says Katharine Brandes, the LA-based creative director of women’s fashion label BB Dakota. "My friends and I sometimes like to observe hipsters in their natural habitat, which around here is Silver Lake." Brandes, 28, grew up surrounded by the hipster-gone-mainstream aesthetic. Her theory is that these brands haven't been perceptive to how youth culture has changed over the last decade.
The earliest hipsters, she argues, generally rejected mainstream fashion and beliefs. In later years, however, hipster culture became more about a particular ethical lifestyle. "Urban Outfitters and American Apparel did a good job of commodifying the earliest kind of hipster—the Vice-reading, PBR-swilling, trucker-hat-wearing twentysomething," she says. "But they have not successfully evolved to meet the needs of the new wave that cares about authenticity and buys products from brands that have a strong ethical core."
by Elizabeth Segran, Racked | Read more:
Image: Getty
My Prescribed Life
When I was diagnosed with a mental illness at age eleven, my doctor—soft-spoken and straitlaced, with a thick moustache—explained that I was suffering from a chemical imbalance in my brain. At the time, the medical community believed that depressive disorders were primarily caused by a deficiency of monoamine neurotransmitters, which help regulate moods and general happiness. The new drugs were thought to stall reabsorption of serotonin into nerve cells and allow it to linger instead in the synapse between cells, where over time it may help transmit the “happy” message. It sounds logical, but even today scientists have unanswered questions. Since patients were seeing improvement, doctors figured, they must have been low in monoamines.
And I did improve. A week or so after I started taking the drugs, my family took our new dog, Chester, for a walk in the Beach neighbourhood of Toronto. As we strolled along the boardwalk, he snatched a peanut butter sandwich clean out of a little boy’s hand. “It was the first time we’d seen you smile in three months,” my dad told me recently. Over the next few months, my sadness lifted. My toxic thoughts became more manageable. The nausea dissipated, and I started eating again. I switched schools, made new friends, and slowly, cautiously, returned to normal life. The drugs buoyed me up from cataclysmic depression to relatively stable, low-boiling anxiety.
The drugs came with some obsessive-compulsive side effects. I picked the skin on my face and limbs like a crystal meth addict, burrowing beneath the flesh to create welts and sores. I also developed a facial tic, wherein I’d scrunch up my nose until it ached. (Even when I think about it now, the urge to scrunch is hard to resist.) My doctor prescribed even more drugs: clomipramine and imipramine, two remnants of the old tricyclic class of antidepressants.
My parents weighed the potential risks of this cocktail against what they could only imagine would happen if I continued along my destructive path. My doctor, meanwhile, hoped that by staving off anxiety and depression at an early age, my brain might lay down permanent pathways to combat patterns of dysfunctional thinking. Really, no one knew what to expect. “We were totally in the wilderness about child and adolescent psychiatry,” explains Dean Elbe, a clinical pharmacist who specializes in child psychiatry at the BC Children’s Hospital, in Vancouver. “All we could do was extrapolate from what we saw in adults.”
We still don’t really know anything about the long-term effects of antidepressants on adolescent development. There have been no long-term studies, partly because of logistics, and because the US Food and Drug Administration and Health Canada require pharmaceutical companies to prove only that their medications are better than placebos over the short term. One study found that extended exposure to fluoxetine (the generic form of Prozac) in some young mice led to anxiety-like behavior recurring when the mice were exposed again to the drug as adults.
But the most profound and pervasive fear—among adults, among parents of affected kids, and among those kids themselves—is that antidepressants will somehow alter the patient’s essential identity. In Is It Me or My Meds: Living with Antidepressants, Boston-based sociologist David A. Karp explains, “Psychotropic drugs have as their purpose the transformation of people’s moods, feelings, and perceptions. These drugs act on—perhaps even create—people’s consciousness and, therefore, have profound effects on the nature of their identities.”
This kind of thinking taps into one of the paramount tensions of mental illness: the blurred line between pathology and personality. How much of what we feel is the result of an illness? How much is our so-called identity? Though scientists still believe neurotransmitter deficiencies affect mental health, they’ve also implicated a whack of other factors, including the environment, stress, and physical health. Together with the mysterious chemical voodoo taking place in our bodies, those factors spin in an endless feedback loop that makes it impossible to source mental illness. It stands to reason that drugs meant to treat an imbalance in serotonin might bleed beyond their reach, altering who we are as long as we’re on them.
by Emily Landau, The Walrus | Read more:
Image: Adrian Forrow
Tuesday, March 3, 2015
Why We’re All Becoming Independent Contractors
[ed. FedEx? Really? I had no idea this is how they operated.]
FedEx calls its drivers independent contractors. Yet FedEx requires them to pay for the FedEx-branded trucks they drive, as well as the FedEx uniforms they wear and the FedEx scanners they use—along with insurance, fuel, tires, oil changes, meals on the road, maintenance, and workers’ compensation insurance. If they get sick or need a vacation, they have to hire their own replacements. They’re even required to groom themselves according to FedEx standards.
FedEx doesn’t tell its drivers what hours to work, but it tells them what packages to deliver and organizes their workloads to ensure they work between 9.5 and 11 hours every working day.
If this isn’t “employment,” I don’t know what the word means.
In 2005, thousands of FedEx drivers in California sued the company, alleging they were in fact employees and that FedEx owed them the money they shelled out, as well as wages for all the overtime work they put in.
Last summer, a federal appeals court agreed, finding that under California law—which looks at whether a company “controls” how a job is done along with a variety of other criteria to determine the real employment relationship—the FedEx drivers were indeed employees, not independent contractors.
by Robert Reich, Granta | Read more:
Image: WSJ
All You Have Eaten
How much of that will he remember, and for how long, and how well? You might be able to tell me, with some certainty, what your breakfast was, but that confidence most likely diminishes when I ask about two, three, four breakfasts ago—never mind this day last year. (...)
For breakfast on January 2, 2008, I ate oatmeal with pumpkin seeds and brown sugar and drank a cup of green tea. I know because it’s the first entry in a food log I still keep today. I began it as an experiment in food as a mnemonic device. The idea was this: I’d write something objective every day that would cue my memories into the future—they’d serve as compasses by which to remember moments.
Andy Warhol kept what he called a “smell collection,” switching perfumes every three months so he could reminisce more lucidly on those three months whenever he smelled that period’s particular scent. Food, I figured, took this even further. It involves multiple senses, and that’s why memories that surround food can come on so strong.
What I’d like to have is a perfect record of every day. I’ve long been obsessed with this impossibility, that every day be perfectly productive and perfectly remembered. What I remember from January 2, 2008 is that after eating the oatmeal I went to the post office, where an old woman was arguing with a postal worker about postage—she thought what she’d affixed to her envelope was enough and he didn’t. (...)
Last spring, as part of a NASA-funded study, a crew of three men and three women with “astronaut-like” characteristics spent four months in a geodesic dome in an abandoned quarry on the northern slope of Hawaii’s Mauna Loa volcano. For those four months, they lived and ate as though they were on Mars, only venturing outside to the surrounding Mars-like, volcanic terrain, in simulated space suits. The Hawaii Space Exploration Analog and Simulation (HI-SEAS) is a four-year project: a series of missions meant to simulate and study the challenges of long-term space travel, in anticipation of mankind’s eventual trip to Mars. This first mission’s focus was food.
Getting to Mars will take roughly six to nine months each way, depending on trajectory; the mission itself will likely span years. So the question becomes: How do you feed astronauts for so long? On “Mars,” the HI-SEAS crew alternated between two days of pre-prepared meals and two days of dome-cooked meals of shelf-stable ingredients. Researchers were interested in the answers to a number of behavioral issues: among them, the well-documented phenomenon of menu fatigue (when International Space Station astronauts grow weary of their packeted meals, they tend to lose weight). They wanted to see what patterns would evolve over time if a crew’s members were allowed dietary autonomy, and given the opportunity to cook for themselves (“an alternative approach to feeding crews of long term planetary outposts,” read the open call).
Everything was hyper-documented. Everything eaten was logged in painstaking detail: weighed, filmed, and evaluated. The crew filled in surveys before and after meals: queries into how hungry they were, their first impressions, their moods, how the food smelled, what its texture was, how it tasted. They documented their time spent cooking; their water usage; the quantity of leftovers, if any. The goal was to measure the effect of what they ate on their health and morale, along with other basic questions concerning resource use. How much water will it take to cook on Mars? How much water will it take to wash dishes? How much time is required; how much energy? How will everybody feel about it all? I followed news of the mission devoutly. (...)
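To give a sense of how granular that record-keeping was, here is a rough Python sketch of what a single logged meal could look like as a data record. The field names are guesses based on the categories the article lists (ingredient weights, cooking time, water use, leftovers, pre- and post-meal surveys), not the study's actual instrument:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class MealSurvey:
    """Answers one crew member gives before or after a meal (scales are illustrative)."""
    hunger: int                 # e.g. 1 (not hungry) to 9 (very hungry)
    mood: int
    first_impression: str = ""
    smell: int = 0
    texture: int = 0
    taste: int = 0

@dataclass
class MealRecord:
    """One meal, pre-prepared or dome-cooked, as the mission might have logged it."""
    mission_day: int
    meal_type: str                    # "pre-prepared" or "creative" (cooked by the crew)
    ingredients_g: Dict[str, float]   # everything weighed before cooking, in grams
    cooking_minutes: float
    water_used_l: float               # cooking plus dish-washing
    leftovers_g: float
    pre_surveys: List[MealSurvey] = field(default_factory=list)
    post_surveys: List[MealSurvey] = field(default_factory=list)

# An invented example entry, for illustration only:
example = MealRecord(
    mission_day=42,
    meal_type="creative",
    ingredients_g={"shelf-stable lasagna": 850.0, "dried basil": 4.0},
    cooking_minutes=55.0,
    water_used_l=6.5,
    leftovers_g=120.0,
    post_surveys=[MealSurvey(hunger=2, mood=7, first_impression="not terrible", taste=6)],
)
```

Structuring each meal this way is what would let the researchers compare resource use and morale across pre-prepared and crew-cooked days.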
Their crew of six was selected from a pool of 700 candidates. Kate is a science writer, open-water swimmer, and volleyball player. When I asked her what “astronaut-like” means and why she was picked, she says it’s some “combination of education, experience, and attitude”: a science background, leadership experience, an adventurous attitude. An interest in cooking was not among the requirements. The cooking duties were divided from the get-go; in the kitchen, crew members worked in pairs. On non-creative days they’d eat just-add-water, camping-type meals: pre-prepared lasagna, which surprised Kate by being not terrible; a thing called “kung fu chicken” that Angelo described as “slimy” and less tolerable; a raspberry crumble dessert that’s a favorite among backpackers (“That was really delicious,” Kate said, “but still you felt weird about thinking it was too delicious”). The crew didn’t eat much real astronaut food—astronaut ice cream, for example—because real astronaut food is expensive. (...)
When I look back on my meals from the past year, the food log does the job I intended more or less effectively. I can remember, with some clarity, the particulars of given days: who I was with, how I was feeling, the subjects discussed. There was the night in October I stress-scarfed a head of romaine and peanut butter packed onto old, hard bread; the somehow not-sobering bratwurst and fries I ate on day two of a two-day hangover, while trying to keep things light with somebody to whom, the two nights before, I had aired more than I meant to. There was the night in January I cooked “rice, chicken stirfry with bell pepper and mushrooms, tomato-y Chinese broccoli, 1 bottle IPA” with my oldest, best friend, and we ate the stirfry and drank our beers slowly while commiserating about the most recent conversations we’d had with our mothers.
Reading the entries from 2008, that first year, does something else to me: it suffuses me with the same mortification as if I’d written down my most private thoughts (that reaction is what keeps me from maintaining a more conventional journal). There’s nothing particularly incriminating about my diet, except maybe that I ate tortilla chips with unusual frequency, but the fact that it’s just food doesn’t spare me from the horror and head-shaking that comes with reading old diaries. Mentions of certain meals conjure specific memories, but mostly what I’m left with are the general feelings from that year. They weren’t happy ones. I was living in San Francisco at the time. A relationship was dissolving.
It seems to me that the success of a relationship depends on a shared trove of memories. Or not shared, necessarily, but not incompatible. That’s the trouble, I think, with parents and children: parents retain memories of their children that the children themselves don’t share. My father’s favorite meal is breakfast and his favorite breakfast restaurant is McDonald’s, and I remember—having just read Michael Pollan or watched Super Size Me—self-righteously not ordering my regular egg McMuffin one morning, and how that actually hurt him.
When a relationship goes south, it’s hard to pinpoint just where or how—especially after a prolonged period of it heading that direction. I was at a loss with this one. Going forward, I didn’t want not to be able to account for myself. If I could remember everything, I thought, I’d be better equipped; I’d be better able to make proper, comprehensive assessments—informed decisions. But my memory had proved itself unreliable, and I needed something better. Writing down food was a way to turn my life into facts: if I had all the facts, I could keep them straight. So the next time this happened I’d know exactly why—I’d have all the data at hand.
by Rachel Khong, Lucky Peach | Read more:
Image: Jason Polan
New Professor
[ed. Professor of Physics! Congratulations, Nate. I'm so proud of you for achieving your goal.
Love, Dad.]
Image via:
[See also: Maybe the hardest nut for a new scientist to crack: finding a job.]