Friday, April 10, 2015

Barely Keeping Up in TV’s New Golden Age

[ed. See also: Myths of the Golden Age]

Not long ago, a friend at work told me I absolutely, positively must watch “Broad City” on Comedy Central, describing it as slacker-infused hilarity.

My reaction? Oh no, not another one.

The vast wasteland of television has been replaced by an excess of excellence that is fundamentally altering my media diet and threatening to consume my waking life in the process. I am not alone. Even as alternatives proliferate and people cut the cord, they are continuing to spend ever more time in front of the TV without a trace of embarrassment.

I was never one of those snobby people who would claim to not own a television when the subject came up, but I was generally more a reader than a watcher. That was before the explosion in quality television tipped me over into a viewing frenzy.

Something tangible, and technical, is at work. The addition of ancillary devices onto what had been a dumb box has made us the programming masters of our own universes. Including the cable box — with its video on demand and digital video recorder — and Apple TV, Chromecast, PlayStation, Roku, Wii and Xbox, that universe is constantly expanding. Time-shifting allows not just greater flexibility, but increased consumption. According to Nielsen, Americans watched almost 15 hours of time-shifted television a month in 2013, two more hours a month than the year before.

And what a feast. Right now, I am on the second episode of Season 2 of “House of Cards” (Netflix), have caught up on “Girls” (HBO) and am reveling in every episode of “Justified” (FX). I may be a little behind on “The Walking Dead” (AMC) and “Nashville” (ABC) and have just started “The Americans” (FX), but I am pretty much in step with comedies like “Modern Family” (ABC) and “Archer” (FX) and, like everyone else I know, am dying to see how “True Detective” (HBO) ends. Oh, and the fourth season of “Game of Thrones” (HBO) starts next month.

Whew. Never mind being able to hold all these serials simultaneously in my head, how can there possibly be room for anything else? So far, the biggest losers in this fight for mind share are not my employer or loved ones, but other forms of media.

My once beloved magazines sit in a forlorn pile, patiently waiting for their turn in front of my eyes. Television now meets many of the needs that pile previously satisfied. I have yet to read the big heave on Amazon in The New Yorker, or the feature on the pathology of contemporary fraternities in the March issue of The Atlantic, and while I have an unhealthy love of street food, I haven’t cracked the spine on Lucky Peach’s survey of the same. Ditto for what looks like an amazing first-person account in Mother Jones from the young Americans who were kidnapped in Iran in 2009. I am a huge fan of the resurgent trade magazines like Adweek and The Hollywood Reporter, but watching the products they describe usually wins out over reading about them.

Magazines in general had a tough year, with newsstand sales down over 11 percent, John Harrington, an industry analyst who tracks circulation, said.

And then there are books. I have a hierarchy: books I’d like to read, books I should read, books I should read by friends of mine and books I should read by friends of mine whom I am likely to bump into. They all remain on standby. That tablets now contain all manner of brilliant stories that happen to be told in video, not print, may be partly why e-book sales leveled out last year. After a day of online reading that has me bathed in the information stream, when I have a little me-time, I mostly want to hit a few buttons on one of my three remotes — cable, Apple, Roku — and watch the splendors unfurl.

by David Carr, NY Times | Read more:
Image: Nathaniel Bell for Netflix

Return of the King

Mad Men still has a half-season to go, but Don Draper’s obituary has already been written. We don’t know exactly how it will end for Don, but the critical consensus is that his fate is sealed: for the past seven years, we’ve watched him follow the same downward trajectory his silhouetted likeness traces in the opening credits, so that all that’s left is for him to land. In a piece lamenting the “death of adulthood in American culture,” A. O. Scott says that Mad Men is one of several recent pop cultural narratives — among them The Sopranos and Breaking Bad — that chart the “final, exhausted collapse” of white men and their regimes, but I’m not convinced. Don has a way of bouncing back. Where one episode opens with him on an examination table, lying to his doctor about how much he drinks and smokes as if his bloodshot eyes and smoker’s cough didn’t give him away (even bets on cirrhosis and emphysema), another finds him swimming laps, cutting down on his drinking, and keeping a journal in an effort to “gain a modicum of control.” Over the course of the past six and a half seasons, Don has been on the brink of personal and professional destruction too many times to count, and yet when we last saw him at the conclusion of “Waterloo,” the final episode of the last half-season, which aired last May, he was fresh-faced and back on top. The truth is that Mad Men has something far more unsettling (and historically accurate) to tell us about the way that white male power works to protect its own interests, precisely by staging and restaging its own death.

In fact, a closer look at “Waterloo” in particular makes clear that the show does not chronicle the last gasp of the white male, as Scott would have it, but outlines the way that a wily old guard has followed the advice of E. Digby Baltzell (who coined the acronym WASP in 1964) by “absorbing talented and distinguished members of minority groups into its privileged ranks” in order to maintain its grip on power. After several episodes of unrelenting humiliation for Don, this installment was so thoroughly upbeat that it had critics wondering just whose Waterloo it was, anyway. Unlike Napoleon, Don doesn’t defiantly march into a futile, fatal battle to save his job, but instead surprises everyone by stepping graciously aside, handing a big pitch for Burger Chef to his protégé, Peggy Olson. (...)

It’s tempting to read both the ad and Peggy’s triumphant performance as harbingers of our own more enlightened, inclusive era, where women and people of color have a seat and a voice at the clean well-lit table that Peggy describes. There are plenty of indications that we are witnessing the small steps that will ultimately amount to real progress (not least of which, the moon landing that provides the episode’s symbolic framework). Remember at the beginning of this season (in “A Day’s Work”), when senior partner Bertram Cooper, a member of the old guard if ever there was one, insists that a black secretary be moved from her post as receptionist at the front of the office? (“I’m all for the national advancement of colored people,” he says, “but I don’t believe people should advance all the way to the front.”) Now Joan Holloway obliges by promoting her to office manager, and it is she — her name is Dawn, naturally — who is not just front but center at the end of “Waterloo” when she calls to order the meeting at which Cooper’s death and a fresh start for the agency are announced.

But as exhilarating as it is to watch Peggy nail the presentation, and to watch Dawn command the room if just for a moment, the big winner in this episode is the status quo, which puts a new face on the same old model. Peggy’s pitch for Burger Chef promises that everyone will get a seat at the table, but if we’ve learned anything over the course of six and a half seasons, it’s that it is actually an invitation-only affair for an exceptional few. Yes, Mad Men narrates the crisis of white masculinity, but as this episode makes clear, that crisis is not about who gets a piece of pie, but about who controls the pie; as Bert tautologically instructs his younger partner Roger Sterling, “Whoever is in control is in charge.”

by Kathy Knapp, LA Review of Books |  Read more:
Image: via:

International Louie Louie Day


Louie Louie was written by R&B singer Richard Berry in 1955. His band, “The Pharaohs”, recorded and released it in 1957. It got some airplay on the band’s home turf around Los Angeles, and became popular in the Pacific Northwest. It was covered by other garage bands and became a somewhat popular party tune in the western states.

In Berry’s original recording the lyric is quite clear: it’s a song about a sailor who spends three days traveling to Jamaica to see his girl. The story is told to a bartender named Louie. There is nothing even remotely obscene in that original version.

The version we all know and love was recorded by the Kingsmen on April 6, 1963, in Portland, Oregon. The cover was not of the original Richard Berry recording, but of a later version by Robin Roberts with his backing band, “The Wailers.” The Robin Roberts version was released in 1961 and became a local hit in Tacoma, Washington.

For reasons lost in the mists of time, the Kingsmen’s recording session cost $50, and consisted of a single take. Legend suggests they thought that take was a rehearsal, or maybe a demo tape.

A different version of Louie Louie was also recorded the same week, in the same recording studio, by Paul Revere and the Raiders. The Raiders version is considered much better musically, but the Kingsmen’s version got all the glory.

The Kingsmen’s lead singer on Louie Louie was Jack Ely, whose birthday is April 11. That date became the basis for the widely celebrated “International Louie Louie Day.” It was the only time Ely recorded with the Kingsmen as lead vocalist. He left the band shortly afterward, either to return to school or over a dispute about who would be lead vocalist; accounts vary. When the song became popular the band refused to take him back. The TV and concert performances the Kingsmen gave during the tune’s most popular years were lip-synced.

by Gene Baucom, Medium | Read more:
Video: YouTube

Adrian Tomine, Jennifer Davis
via:

Pàtric Marín, Wild in the City
via:

What the Deer Are Telling Us

In 1909, a United States Forest Service officer named Aldo Leopold shot a mother wolf from a perch of rimrock in the Apache National Forest in Arizona. It was a revelatory moment in the life of the young naturalist. “In those days we never heard of passing up a chance to kill a wolf,” Leopold wrote in an essay called “Thinking Like a Mountain,” later included in his Sand County Almanac, which was published after his death in 1948 and went on to sell several million copies. “We reached the old wolf in time to watch a fierce green fire dying in her eyes. I realized then, and have known ever since, that there was something new to me in those eyes—something known only to her and to the mountain.”

Leopold, who today is revered among ecologists, was among the earliest observers of the impact of wolves on deer abundance, and of the impact of too many deer on plant life. In “Thinking Like a Mountain,” he outlined for the first time the basic theory of trophic cascades, which states that top-down predators determine the health of an ecosystem. The theory as presented by Leopold held that the extirpation of wolves and cougars in Arizona, and elsewhere in the West, would result in a booming deer population that would browse unsustainably in the forests of the high country. “I now suspect that just as a deer herd lives in mortal fear of its wolves,” Leopold wrote, “so does a mountain live in mortal fear of its deer.”

One of the areas where Leopold studied deer irruptions was the Kaibab Plateau near the Grand Canyon. By 1924, the deer population on the Kaibab had peaked at 100,000. Then it crashed. During 1924-26, 60 percent of the deer perished due to starvation. Leopold believed this pattern of deer exceeding the carrying capacity of the land would repeat across the U.S. wherever predators had been eliminated as a trophic force. By 1920, wolves and cougars were gone from the ecosystems east of the Mississippi—shot, trapped, poisoned, as human settlement fragmented their habitat—and they were headed toward extirpation in most parts of the American West. Within two generations, the hunting of deer had been heavily regulated, conservationists’ calls for deer reintroduction throughout the eastern U.S. had been heeded, and swaths of state and federally managed forest had been protected from any kind of hunting.

Freed both of human and animal predation, however, deer did not follow the pattern predicted by Leopold. Instead of eating themselves out of house and home, they survived—they thrived—by altering their home range to their benefit. As recent studies have shown, certain kinds of grasses and sedges preferred by deer react to over-browsing the way the bluegrass on a suburban lawn reacts to a lawnmower. The grasses grow back faster and healthier, and provide more sustenance for more deer. In short, there has been enough food in our forests, mountains, and grasslands for white-tailed deer in the U.S. to reach unprecedented numbers, about 32 million, more than at any time since record-keeping began.

In 1968, Stanford biology professor Paul Ehrlich predicted that another widespread species would die out as a result of overpopulation. But he was spectacularly wrong. Like the deer, the steadily ingenious Homo sapiens altered its home range—most notably the arable land—to maximize its potential for survival. As Homo sapiens continues to thrive across the planet today, the species might take a moment to find its reflection in the rampant deer.

Conservation biologists who have followed the deer tend to make an unhappy assessment of its progress. They mutter dark thoughts about killing deer, and killing a lot of them. In fact, they already are. In 2011, in the name of conservation, the National Park Service and U.S. Department of Agriculture teamed up with hunters to “harvest” 3 million antlerless deer. I asked Thomas Rooney, one of the nation’s top deer irruption researchers, about the losses in forest ecosystems overrun by deer. “I’d say the word is ‘apocalypse,’ ” Rooney said.

On a warm fall day last year, I went to see Rooney, a professor of biology at Wright State University, in Dayton, Ohio. In his office, I noticed a well-thumbed copy of Ehrlich’s The Population Bomb, and I asked him if he thought a comparison might be drawn between human overpopulation and deer overpopulation. He looked at me as if the point was obvious. “Deer, like humans,” he said, “can come in and eliminate biodiversity, though not to their immediate detriment.” (...)

He told me about a study published last year in Conservation Biology that bemoaned “pandemic deer overabundance,” language suggesting the creature was a disease on the land. Ecosystem damage becomes apparent at roughly 15 deer per square mile, and the damage grows with density. Some areas of the northeast host as many as 100 deer per square mile. (The Wright State University reserve has a density of around 40 deer per square mile.) He noted a 2013 article co-authored by a group of Nature Conservancy scientists who warned that “no other threat to forested habitats is greater at this point in time—not lack of fire, not habitat conversion, not climate change.” (...)

I asked Rooney about the remarkable ability of deer to thrive in their home range—most of the U.S.—while producing ecosystem simplification and a biodiversity crash. In his own studies of deer habitats in Wisconsin, Rooney found that only a few types of grass thrive under a deer-dominant regime. The rest, amounting to around 80 percent of native Wisconsin plant species, had been eradicated. “The 80 percent represent the disappearance of 300 million years of evolutionary history,” he said. He looked deflated.

A turkey vulture pounded its wings through the canopy, and in the darkening sky a military cargo plane howled in descent toward nearby Wright-Patterson Air Force Base. Rooney and I emerged from the forest onto a campus parking lot where Homo sapiens held sway. The self-assured mammals crossed fields of exotic bluegrass under pruned hardwoods surrounded by a sea of concrete, tarmac, glass, and metal. There were no flowers except those managed in beds. There were no other animals to be seen except the occasional squirrel, and these were rat-like, worried, scurrying. The Homo sapiens got into cars that looked the same, on streets that looked the same, and they were headed to domiciles that looked more or less the same. This is home for us.

by Christopher Ketcham, Nautilus |  Read more:
Image: Chris Buzelli

The Wave That Swept the World

In the beginning was the wave. The blue and white tsunami, ascending from the left of the composition like a massive claw, descends pitilessly on Mount Fuji – the most august mountain in Japan, turned in Katsushika Hokusai’s vision into a small and vulnerable hillock. Under the Wave off Kanagawa, one of Hokusai’s Thirty-Six Views of Mount Fuji, has been an icon of Japan since the print was first struck in 1830–31, yet it forms part of a complex global network of art, commerce, and politics. Its intense blue comes from Hokusai’s pioneering use of Prussian Blue ink – a foreign pigment, imported, probably via China, from England or Germany. The wave, from the beginning, stretched beyond Japan. Soon, it would crash over Europe.

This week the Museum of Fine Arts in Boston, home to the greatest collection of Japanese art outside Japan, opens a giant retrospective of the art of Hokusai, showcasing his indispensable woodblock prints of the genre we call ukiyo-e, or ‘images of the floating world’. It’s the second Hokusai retrospective in under a year; last autumn, the wait to see the artist’s two-part mega-show at the Grand Palais in Paris stretched to two hours or more. American and French audiences adore Hokusai – and have for centuries. He is, after all, not only one of the great figures of Japanese art, but a father figure of much of Western modernism. Without Hokusai, there might have been no Impressionism – and the global art world we today take for granted might look very different indeed.

Fine print

Hokusai’s prints didn’t find their way to the West until after the artist’s death in 1849. During his lifetime Japan was still subject to sakoku, the longstanding policy that forbade foreigners from entering and Japanese from leaving, on penalty of death. But in the 1850s, with the arrival of the ‘black ships’ of the American navy under Matthew Perry, Japan gave up its isolationist policies – and officers and diplomats, then artists and collectors, discovered Japanese woodblock printing. In Japan, Hokusai was seen as vulgar, beneath the consideration of the imperial literati. In the West, his delineation of space with color and line, rather than via one-point perspective, would have revolutionary impact.

Both the style and the subject matter of ukiyo-e prints appealed to young artists like Félix Bracquemond, one of the first French artists to be seduced by Japan. Yet the Japanese prints traveling to the West in the first years after Perry were contemporary artworks, rather than the slightly earlier masterpieces of Hokusai, Hiroshige, and Utamaro. Many of the prints that arrived were used as wrapping paper for commercial goods. Everything changed on 1 April, 1867, when the Exposition Universelle opened on the Champ de Mars, the massive Paris marching grounds that now lie in the shadow of the Eiffel Tower. It featured, for the first time, a Japanese pavilion – and its showcase of ukiyo-e prints revealed the depth of Japanese printmaking to French artists.

by Jason Farago, BBC |  Read more:
Image: Katsushika Hokusai 

Thursday, April 9, 2015

Just Don't Call It a Panama Hat

There are many types of Panama hats but they all have one thing in common: they’re made in Ecuador. Some say it was the Americans who came up with the misleading name, after they saw photographs of Theodore Roosevelt wearing one as he inspected the construction of the Panama Canal. Legend has it that Roosevelt’s hat was actually a loan from Eloy Alfaro, the president of Ecuador and hero of the revolution of 1895. Others say the hats were named after the Isthmus of Panama, the point from which they have historically been exported to the rest of the world.

Yet the misnomer didn’t prevent the famous straw hat, more correctly referred to as the Montecristi hat, from being designated by UNESCO as Intangible Cultural Heritage in 2012; Ecuador has produced the hats since the early 17th century. It takes three months to make a superfino Montecristi hat (the best grade there is), and weavers can only work in the early and late hours of the day because the straw breaks when it’s exposed to high temperatures. According to tradition, hats are cleaned, finished and sold in the town of Montecristi, the Panama hat’s spiritual home in the province of Manabi.

In the small and remote village of Pile nearby, the craft is passed on through family. Manuel Lopez, 41, learned to weave with his father at the age of eight. He says he teaches his own children now, though making a Montecristi hat is becoming a lost art. A weaver makes only $700 to $1,200 for producing a superfino hat, which can fetch $25,000 abroad. And now that China has become the world’s top producer of straw hats (which they actually make from paper), Ecuador’s hat makers are unable to keep up with the decline in price and demand. With most young people looking for more lucrative opportunities elsewhere, experts say the last-ever traditionally made Montecristi hat will be woven within the next 15 years.

by Eduardo Leal, Roads & Kingdoms |  Read more:
Image: Eduardo Leal

Wednesday, April 8, 2015

Thursday, April 2, 2015


Paul Klee, The White Form, 1939.
via:

Our Land, Up for Grabs

A battle is looming over America’s public lands.

It’s difficult to understand why, given decades of consistent, strong support from voters of both parties for protecting land, water and the thousands of jobs and billions of dollars in economic benefits these resources make possible.

Last week, the United States Senate voted 51 to 49 to support an amendment to a nonbinding budget resolution to sell or give away all federal lands other than the national parks and monuments.

If the measure is ever implemented, hundreds of millions of acres of national forests, rangelands, wildlife refuges, wilderness areas and historic sites will revert to the states or local governments or be auctioned off. These lands constitute much of what’s left of the nation’s natural and historical heritage.

This was bad enough. But it followed a 228-to-119 vote in the House of Representatives approving another nonbinding resolution that said “the federal estate is far too large” and voiced support for reducing it and “giving states and localities more control over the resources within their boundaries.” Doing so, the resolution added, “will lead to increased resource production and allow states and localities to take advantage of the benefits of increased economic activity.”

The measures, supported only by the Republicans who control both houses, were symbolic. But they laid down a marker that America’s public lands, long held in trust by the government for its people, may soon be up for grabs.

We’ll get a better sense of Congress’s commitment to conservation this year when it decides whether to reauthorize the Land and Water Conservation Fund, created in 1965 and financed by fees paid by oil companies for offshore drilling. The program underwrites state and local park and recreation projects, conservation easements for ranches and farms, plus national parks, forests and wildlife refuges.

Nearly $17 billion has gone to those purposes over the years, including 41,000 state and local park and recreation projects, some of which my organization has helped put together. (Another $19 billion was diverted by Congress to other purposes.) The program expires Sept. 30 unless Congress keeps it alive.

Land protection has long been an issue for which voters of both parties have found common cause. Since 1988, some $71.7 billion has been authorized to conserve land in more than 1,800 state and local elections in 43 states. Last year, $13.2 billion was approved by voters in 35 initiatives around the country — the most in a single year in the 27 years my organization has tracked these initiatives and, in some cases, led them.

But this consensus is being ignored, and not just in the nation’s capital.

by Will Rogers, NY Times |  Read more:
Image: via:

The Common Man’s Crown

The 1903 World Series was the first of baseball’s modern era. Boston and Pittsburgh were adhering to newly codified rules of play — and also initiating a new code of dress, as no one could have known, least of all the men in the stands, uniformly obedient to the laws of Edwardian haberdashery. The spectators wore “derbies, boaters, checkered caps and porkpie hats,” wrote Beverly Chico in her book, “Hats and Headwear Around the World.” Each style signaled a distinct social identity. All are now regarded largely as museum pieces, having fallen away in favor of a hat that offers casual comfort and a comforting image of classlessness. Given our cult of youth, our populist preference for informality and our native inclination toward sportiness, its emergence as the common man’s crown was inevitable.

Frank Sinatra supposedly implored the fedora-wearers of his era to cock their brims: Angles are attitudes. Ballplayers have accepted this as truth since at least that first World Series, when Fred Clarke, Pittsburgh’s left fielder and manager, wore his visor insouciantly askew, and the general public has come to know the ground rules as well. Here’s a test of fluency in the sartorial vernacular of Americans: You can read the tilt of a bill like the cut of a jib. The way you wear your hat is essential to others’ memories of you, and the look of a ball cap’s brim communicates tribal identity more meaningfully than the symbols stitched across its front. Is the bill flatter than an AstroTurf outfield? Curved like the trajectory of a fly ball? Straightforwardly centered? Reversed like that of a catcher in his crouch or a loiterer on his corner? The cap conforms to most any cast of mind.

Watch people fiddling with their baseball caps as they sit at a stoplight or on a bar stool, primping and preening in what must be the most socially acceptable form of self-grooming. No one begrudges their fussiness, because everyone appreciates the attempt to express a point of view. The cap presents studies of plasticity in action and of the individual effort to stake out a singular place on the roster, and the meaning of the logo is as mutable as any other aspect. To wear a New York Yankees cap in the United States is to show support for the team, maybe, or to invest in the hegemony of an imperial city. To wear one abroad — the Yankees model is by far the best-selling Major League Baseball cap in Europe and Asia — is to invest in an idealized America, a phenomenon not unlike pulling on contraband bluejeans in the old Soviet Union. (...)

“Until the late 1970s, wearing a ball cap anywhere but on the baseball field carried with it a cultural stigma,” James Lilliefors writes in his book “Ball Cap Nation,” citing the Mets cap of the “Odd Couple” slob Oscar Madison as one example of its signaling mundane degeneracy. In Lilliefors’s reckoning, eight factors contributed to the cap’s increased legitimacy, including the explosion of television sports, the maturation of the first generation of Little League retirees and the relative suavity of the Detroit Tigers cap worn by Tom Selleck as the title character of “Magnum P.I.”: “It made sporting a ball cap seem cool rather than quirky; and it created an interest in authentic M.L.B. caps.” What had been merely juvenile came to seem attractively boyish, and New Era was poised to reap the rewards, having begun selling its wares to the general public, by way of a mail-order ad in the Sporting News, in 1979. (...)

Where the basic structure of a derby or a boater spoke of the wearer’s rank and region, the baseball cap is comparatively subtle. Angles are indeed accents, and a millimetric bend in the bill will inflect the article’s voice. The hip-hop habit is to wear the cap perfectly fresh and clean, as if it arrived on the head directly from the cash register, spotless except, perhaps, for the circle of the manufacturer’s label still stuck to it, alerting admirers that this is no counterfeit and that the cap is as new as the money that bought it. In tribute to this practice, New Era not long ago issued a limited-edition series of caps in the colors of its sticker, black and gold, as if the company were at once flattering its customers and further transforming them into advertisements for itself.

Peel the sticker away and bow the brim a bit: This is the simple start of asserting a further level of ownership. Taken to an extreme, the process can resemble a burlesque of the ancient ritual of breaking in the baseball mitts with which the cap’s contours rhyme. To speak to an undergraduate about a “dirty white baseball cap” is to evoke a fratboy lifestyle devoted to jam bands and domestic lager and possibly lacrosse. To spend time among the frat boys themselves is to learn the baroque techniques for accelerating wear and tear. Some wear them in the shower; still others undertake artificial rituals involving the hair dryer and the dishwasher and the kitchen sink, recalling the collegians of midcentury who, expressing the prep fetish for the shabby genteel, took sandpaper to the collars of their Oxford shirts to gain a frayed edge.

by Troy Patterson, NY Times |  Read more:
Image: Mauricio Alejo

The Most Popular Antidepressants Are Based On An Outdated Theory

One in ten Americans takes an antidepressant drug like Zoloft or Prozac. These drugs have been shown to work in some patients, but their design is based on a so-called "chemical imbalance" theory of depression that is incomplete, at best.

The number of people taking antidepressants has increased by over 400% since the early '90s. In a certain light, this could be perceived as a success for public health; it is clear, for example, that tens of millions of people have found antidepressants to be effective. What's less clear is why these medications work, but decades of research on the subject suggest that an explanation parroted in ad campaigns and physicians' offices alike – that depression can be chalked up to low levels of serotonin in the brain – is insufficient.

"Chemical imbalance is sort of last-century thinking. It's much more complicated than that,"Dr. Joseph Coyle, a professor of neuroscience at Harvard Medical School, told NPR in 2012. "It's really an outmoded way of thinking."

This is the story of how pharmaceutical companies and psychiatrists convinced the public that depression was the result of a simple chemical imbalance – and how scientists, patients, and psychiatrists are working to piece together the more complicated truth.

by Levi Gadye, io9 | Read more:
Image: uncredited

Bambi, Yummy Gaga
via:

Against Chill

The Great Chill Massacre of 2014 was not premeditated. When I woke up that morning, I had no idea that I’d end the day going from casually dating six men to formally and intentionally dating zero. But then two of the six men coincidentally sent texts admiring my “chill,” and it became clear that drastic and draconian measures would be required to set the record straight. It seems that my poker face is too perfect when men report a desire to “see what happens.” My willingness to call dates “hanging out” in perpetuity sometimes gives the impression that I am in possession of the amorphous and increasingly desirable characteristic of Chill. And so in a fit of shamelessness and glory, I sent some variation of the text, “I’m actually looking for something serious so I’m not planning to see you anymore” to all six of them. Incredulity and attempts to lure me back into my Chill with more empty promises that we could “see where it goes” were ignored or actively mocked. I killed what little Chill I actually had and I shed no tears for it.

To the uninitiated, having Chill and being cool are synonyms. They describe a person with a laid-back attitude, an absence of neurosis, and reasonably interesting tastes and passions. But the person with Chill is crucially missing these last ingredients because they are too far removed from anything that looks like intensity to have passions. They have discernible tastes and beliefs but they are unlikely to materialize as passionate. Passion is polarizing; being enthusiastic or worked up is downright obsessive. Excessive Chill is “You do you” taken to its most extreme conclusion, giving everyone’s opinions and interests equal value so long as they’re authentically ours.

In an infamous passage in Gone Girl, the elusive “Cool Girl” is described as a woman who declares, “I am a hot, brilliant, funny woman who adores football, poker, dirty jokes, and burping, who plays video games, drinks cheap beer, loves threesomes and anal sex, and jams hot dogs and hamburgers into her mouth like she’s hosting the world’s biggest culinary gang bang while somehow maintaining a size 2.” The “Cool Girl” is, of course, remarkably dull in her interests because they center almost exclusively on the man with whom she is so inexplicably enraptured. But the “Cool Girl” has no Chill. She likes him far too much and lets it show. Chill is different — it is agreeable because it is emotionally vacant. Chill is what Cool would look like with a lobotomy and no hobbies. And for a large subset of the population, Chill is one of the most desirable qualities in a romantic prospect.

I am originally from San Diego where Chill was as much a part of our culture as burritos and surfing and lifted Toyota Tacoma trucks. It was an insistence on going with the flow, rolling with the punches. It would have been about saying “C’est la vie!” to all the shitty shit that happened if more people there had taken French. The ever-reliable Urban Dictionary has 111 definitions of “chill,” the first of which appeared in June 2002. Most of these descriptions describe the act of chilling, which is either hanging out or smoking weed, and sometimes both. The others describe being chill, an adjective to describe being calm, laid back, or relaxed. The first instance of Chill as a noun appears in 2013 under the term “No Chill” and describes a range of people who are reckless or lacking rationality. These definitions are deceptively simple ways of asking people to have fewer strong emotions.

by Alana Massey, Matter | Read more:
Image: Ana Benaroya

Stuck in Seattle

The Aggravating Adventures of a Gigantic Tunnel Drill

About 20 workers wearing hard hats and reflective vests clump together on the edge of a chasm near Seattle’s waterfront, peering down a hole 120 feet deep and 83 feet wide. The last men have been craned out of the pit in a yellow metal cage. Gulls squawk. A TV news helicopter hovers overhead.

A dozen journalists stand nearby on the bed of a truck. We’re here to see Bertha, one of the world’s biggest tunneling machines. Or at least a piece of her. A 240-foot crane is about to haul a 540,000-pound steel shield out of the ground, 20 months after Bertha started digging a highway. Almost imperceptibly, the crane starts rising.

The event, on a Thursday in mid-March, is part of a massive rescue mission to fix the $80 million machine. She broke abruptly in December 2013 after boring through just 1,000 feet, one-ninth of her job. Her seals busted, and her teeth clogged with grit and pieces of an 8-inch steel pipe left over from old groundwater tests. She stopped entirely.

The tunnel, with a budget of $1.4 billion and originally scheduled to be finished in November 2015, is two years behind schedule. The state’s contractor, a joint venture called Seattle Tunnel Partners (STP), has spent months digging to reach Bertha and crane her to the surface, where a weary Seattle awaits.

Bertha’s job is to bury a highway that runs on a structurally unsound elevated road smack in the middle of an earthquake zone. The viaduct, as it’s called, follows the shoreline, effectively barricading downtown Seattle from what could be a beautiful waterfront. The tunnel will let most of the traffic travel deep underground; at street level an old freeway will be demolished, and in its place the city will build a boulevard and shoreline park created by the designers behind New York’s acclaimed High Line park. The $4.2 billion plan calls for the long-neglected waterfront to come to life; Seattleites can celebrate the glory of Puget Sound, where ferries dash across the bay and the jagged peaks of the Olympic Peninsula jut in the distance.

Everything about the project is gargantuan, starting with Bertha, who is as tall as a five-story building. She runs on a 25,000-horsepower motor and has a head weighing 1.7 million pounds, with 260 steel teeth designed specifically to chew through Seattle’s silty soil. She’s named after the city’s first and only female mayor, Bertha Knight Landes, who served in the 1920s. According to the machine’s official state biography, her role models include “whoever invented the shovel.” Bertha’s got 15,700 Twitter followers, has inspired Halloween costumes, and was once feted by thousands.

After Bertha got stuck, she couldn’t back up because she builds the concrete walls of the tunnel as she drills forward. That means the hole she leaves behind is narrower than she is. The contractor has devised a method—itself unprecedented—to repair Bertha by craning her in sections to the surface. After almost a decade of debating the tunnel’s merits and three more years of construction, more than a few Seattleites argue that Bertha should be buried where she is, her last rites read, and another plan pursued.

by Karen Weise, Bloomberg |  Read more:
Image: Ted S. Warren / AP Photo