Monday, June 23, 2014

Why Are We Importing Our Own Fish?

In 1982 a Chinese aquaculture scientist named Fusui Zhang journeyed to Martha’s Vineyard in search of scallops. The New England bay scallop had recently been domesticated, and Dr. Zhang thought the Vineyard-grown shellfish might do well in China. After a visit to Lagoon Pond in Tisbury, he boxed up 120 scallops and spirited them away to his lab in Qingdao. During the journey 94 died. But 26 thrived. Thanks to them, China now grows millions of dollars’ worth of New England bay scallops, a significant portion of which are exported back to the United States.

As go scallops, so goes the nation. According to the National Marine Fisheries Service, even though the United States controls more ocean than any other country, 86 percent of the seafood we consume is imported.

But it’s much fishier than that: While a majority of the seafood Americans eat is foreign, a third of what Americans catch is sold to foreigners.

The seafood industry, it turns out, is a great example of the swaps, delete-and-replace maneuvers and other mechanisms that define so much of the outsourced American economy; you can find similar, seemingly inefficient phenomena in everything from textiles to technology. The difference with seafood, though, is that we’re talking about the destruction and outsourcing of the very ecological infrastructure that underpins the health of our coasts. Let’s walk through these illogical arrangements, course by course.

Appetizers: Half Shells for Cocktails

Our most blatant seafood swap has been the abandonment of local American oysters for imported Asian shrimp. Once upon a time, most American Atlantic estuaries (including the estuary we now call the New York Bight) had vast reefs of wild oysters. Many of these we destroyed by the 1800s through overharvesting. But because oysters are so easy to cultivate (they live off wild microalgae that they filter from the water), a primitive form of oyster aquaculture arose up and down our Atlantic coast.

Until the 1920s the United States produced two billion pounds of oysters a year. The power of the oyster industry, however, was no match for the urban sewage and industrial dumps of various chemical stews that pummeled the coast at midcentury. Atlantic oyster culture fell to just 1 percent of its historical capacity by 1970.

Just as the half-shell appetizer was fading into obscurity, the shrimp cocktail rose to replace it, thanks to a Japanese scientist named Motosaku Fujinaga and the kuruma prawn. Kurumas were favored in a preparation known as “dancing shrimp,” a dish that involved the consumption of a wiggling wild shrimp dipped in sake. Dr. Fujinaga figured out how to domesticate this pricey animal. His graduate students then fanned out across Asia and tamed other varieties of shrimp.

Today shrimp, mostly farmed in Asia, is the most consumed seafood in the United States: Americans eat nearly as much of it as the next two most popular seafoods (canned tuna and salmon) combined. Notably, the amount of shrimp we now eat is equivalent to our per capita oyster consumption a century ago.

And the Asian aquaculture juggernaut didn’t stop with shrimp. In fact, shrimp was a doorway into another seafood swap, which leads to the next course.  (...)

Lox: Wild for Farmed

There was a time when “nova lox” was exactly that: wild Atlantic salmon (laks in Norwegian) caught off Nova Scotia or elsewhere in the North Atlantic. But most wild Atlantic salmon populations have been fished to commercial extinction, and today a majority of our lox comes from selectively bred farmed salmon, with Chile our largest supplier.

This is curious, given that salmon are not native to the Southern Hemisphere. But after Norwegian aquaculture companies took them there in the ’80s, they became so numerous as to be considered an invasive species.

The prevalence of imported farmed salmon on our bagels is doubly curious because the United States possesses all the wild salmon it could possibly need. Five species of Pacific salmon return to Alaskan rivers every year, generating several hundred million pounds of fish flesh. Where does it all go?

Again, abroad. Increasingly to Asia. Alaska, by far our biggest fish-producing state, exports around three-quarters of its salmon.

To make things triply strange, a portion of that salmon, after heading across the Pacific, returns to us: Because foreign labor is so cheap, many Alaskan salmon are caught in American waters, frozen, defrosted in Asia, filleted and boned, refrozen and sent back to us. Pollock also make this Asian round trip, as do squid — and who knows what else?

by Paul Greenberg, NY Times |  Read more:
Image: Hu Sheyou/Xinhua Press, via Corbis

Sunday, June 22, 2014

Yo


Yo.

It seems so simple. So mindless. It’s only slightly less boring than “Hey” or “Hi,” if only because of some perceived aggression or excitement attached to it. But Yo is anything but simple.

If you haven’t been following along on Twitter, Yo is the hottest new app that will leave you scratching your head. The entire premise of the app is to send other users a single word: Yo.

Yo currently has over 50,000 active users, after launching as a joke on April Fools’ Day. Users have sent over 4 million Yo’s to each other. Without ever having officially launched, co-founder and CEO Or Arbel managed to secure $1.2 million in funding from a list of investors, all unnamed except co-founder, angel investor, and Mobli CEO Moshe Hogeg, who participated in the round.

It might have started out as a joke, but the app has turned into something more universally enjoyable, and its brief popularity tells us something bigger about where the mobile social landscape is headed. We’re seeing the death of digital dualism play out before us, with apps focused on merging the physical and digital worlds. Snapchat has ephemerality. Whisper and Secret have anonymity.

And Yo has context.

Context > Content

Let’s back up for a second.

You’re at a bar with your best friend and a love interest. Both put a hand on your shoulder when they talk to you. From the outside, it all looks the same. But there’s a big difference between the comfortable touch of a close friend and the explorative graze of someone you may very well have sex with soon.

The next morning, your friend and your crush send you the exact same text. It says simply “Hey.” From your old pal, “hey” just means hey. But from your sexy friend, “hey” can mean anything from “last night was fun” to “I’m still thinking about you this morning.”

As with anything, a “Yo” can just be a yo. But you’ll feel a very real difference between a “Yo” you get in the morning from a friend and a “Yo” you get at 2 a.m. from a friend with benefits. Trust me.

And that’s… supposedly… the magic.

The context of the Yo says much more than two little letters. And this is more important than it sounds.

by Jordan Crook, TechCrunch |  Read more:
Image: Yo

Michelle Wie Wins First Major At U.S. Open

[ed. Finally! It'll be interesting to see how her career progresses after this. She's already the top money winner on the LPGA tour this year, and, at only 24, (... seems like she's been around forever) she could fulfill the potential that everyone has seen for years. See also: here and here]

Michelle Wie finally delivered a performance worthy of the hype that has been heaped on her since she was a teenager.

Wie bounced back from a late mistake at Pinehurst No. 2 to bury a 25-foot birdie putt on the 17th hole, sending the 24-year-old from Hawaii to her first major championship Sunday, a two-shot victory over Stacy Lewis in the U.S. Women's Open.

Wie closed with an even-par 70 and covered her mouth with her hand before thrusting both arms in the air.

Lewis, the No. 1 player in women's golf, made her work for it. She made eight birdies to match the best score of the tournament with a 66, and then was on the practice range preparing for a playoff when her caddie told her Wie had made the sharp-breaking birdie putt on the 17th.

Lewis returned to the 18th green to hug the winner after other players doused Wie with champagne.

What a journey for Wie, who now has four career victories -- all in North America, the first on the U.S. mainland -- and moved to the top of the LPGA money list after winning the biggest event in women's golf.

She has been one of the biggest stars in women's golf since she was 13 and played in the final group of a major. Her popularity soared along with criticism when she competed against the men on the PGA Tour while still in high school and talked about wanting to play in the Masters.

That seems like a lifetime ago. The 6-foot Wie is all grown up, a Stanford graduate, popular among pros of both genders and now a major champion.

"Oh my God, I can't believe this is happening," Wie said.

It almost didn't. Just like so much of her life, the path included a sharp twist no one saw coming. Wie started the final round tied with Amy Yang, took the lead when Yang made double bogey on No. 2 and didn't let anyone catch her the rest of the day.

In trouble on the tough fourth hole, she got up-and-down from 135 yards with a shot into 3 feet. Right when Lewis was making a big run, Wie answered by ripping a drive on the shortened par-5 10th and hitting a cut 8-iron into 10 feet for eagle and a four-shot lead.

She had not made a bogey since the first hole -- and then it all nearly unraveled.

From a fairway bunker on the 16th, holding a three-shot lead, she stayed aggressive and hit hybrid from the sand. After a three-minute search, the ball was found in a wiregrass bush, forcing her to take a penalty drop behind her in the fairway. She chipped on to about 35 feet and rapped her bogey putt 5 feet past the hole.

Miss it and she would be tied.

Bent over in that table-top putting stance, she poured it in to avoid her first three-putt of the week. Smiling as she left the green, even though her lead was down to one, Wie hit 8-iron safely on the 17th green and holed the tough birdie putt. She pumped her fist, then slammed it twice in succession, a determination rarely seen when she was contending for majors nearly a decade ago as a teen prodigy.

"Obviously, there are moments of doubt in there," Wie said. "But obviously, I had so many people surrounding me. They never lost faith in me. That's pushed me forward."

Wie finished at 2-under 278, the only player to beat par in the second week of championship golf at Pinehurst. Martin Kaymer won by eight shots last week at 9-under 271, the second-lowest score in U.S. Open history.

by Doug Ferguson, AP/Honolulu Star Advertiser |  Read more:
Image: via:

Saturday, June 21, 2014

Chill Bear


[ed. One reason why McNeil River State Game Sanctuary is so popular.]

New Open-source Router Firmware Opens Your Wi-Fi Network to Strangers

We’ve often heard security folks explain their belief that one of the best ways to protect Web privacy and security on one's home turf is to lock down one's private Wi-Fi network with a strong password. But a coalition of advocacy organizations is calling such conventional wisdom into question.

Members of the “Open Wireless Movement,” including the Electronic Frontier Foundation (EFF), Free Press, Mozilla, and Fight for the Future are advocating that we open up our Wi-Fi private networks (or at least a small slice of our available bandwidth) to strangers. They claim that such a random act of kindness can actually make us safer online while simultaneously facilitating a better allocation of finite broadband resources.

The OpenWireless.org website explains the group’s initiative. “We are aiming to build technologies that would make it easy for Internet subscribers to portion off their wireless networks for guests and the public while maintaining security, protecting privacy, and preserving quality of access," its mission statement reads. "And we are working to debunk myths (and confront truths) about open wireless while creating technologies and legal precedent to ensure it is safe, private, and legal to open your network.”

One such technology, which EFF plans to unveil at the Hackers on Planet Earth (HOPE X) conference next month, is an open-source router firmware called Open Wireless Router. This firmware would enable individuals to share a portion of their Wi-Fi networks with anyone nearby, password-free, as Adi Kamdar, an EFF activist, told Ars on Friday.

Home network sharing tools are not new, and the EFF has been touting the benefits of open-sourcing Web connections for years, but Kamdar believes this new tool marks the second phase in the open wireless initiative. Unlike previous tools, he claims, EFF’s software will be free for all, will not require any sort of registration, and will actually make surfing the Web safer and more efficient.

Open Wi-Fi initiative members have argued that the act of providing wireless networks to others is a form of “basic politeness… like providing heat and electricity, or a hot cup of tea” to a neighbor, as security expert Bruce Schneier described it.

Walled off

Kamdar said that the new firmware utilizes smart technologies that prioritize the network owner's traffic over others', so good samaritans won't have to wait for Netflix to load because of strangers using their home networks. What's more, he said, "every connection is walled off from all other connections," so as to decrease the risk of unwanted snooping.
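The article doesn’t say how EFF’s firmware implements that prioritization, but on Linux-based routers this kind of owner-over-guest queueing is commonly built with hierarchical token bucket (HTB) traffic control. A minimal sketch of the general approach, with a hypothetical interface name (`wlan0`), guest subnet (10.0.1.0/24), and made-up bandwidth figures:

```shell
# Hypothetical illustration of owner-over-guest prioritization using
# Linux HTB queueing; not EFF's actual firmware code.
# Assumes a wireless interface wlan0 with ~100 Mbit of capacity.

# Root qdisc: unclassified traffic falls into the guest class (1:20).
tc qdisc add dev wlan0 root handle 1: htb default 20

# Parent class representing the whole link.
tc class add dev wlan0 parent 1: classid 1:1 htb rate 100mbit

# Owner traffic: guaranteed most of the link, may borrow up to all of it.
tc class add dev wlan0 parent 1:1 classid 1:10 htb rate 80mbit ceil 100mbit prio 0

# Guest traffic: a small guaranteed slice, lowest priority when the link is busy.
tc class add dev wlan0 parent 1:1 classid 1:20 htb rate 20mbit ceil 100mbit prio 1

# Steer traffic from the (hypothetical) guest subnet into the guest class.
tc filter add dev wlan0 parent 1: protocol ip prio 1 u32 \
    match ip src 10.0.1.0/24 flowid 1:20
```

The “walled off” guest connections are typically a separate mechanism, such as per-client AP isolation or firewall rules separating the guest subnet from the home LAN.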

Additionally, EFF hopes that opening one’s Wi-Fi network will, in the long run, make it more difficult to tie an IP address to an individual.

“From a legal perspective, we have been trying to tackle this idea that law enforcement and certain bad plaintiffs have been pushing, that your IP address is tied to your identity. Your identity is not your IP address. You shouldn't be targeted by a copyright troll just because they know your IP address," said Kamdar.

by Joe Silver, Ars Technica |  Read more:
Image: uncredited

More Punk, Less Hell

When the ballots had been counted, the Prime Minister of Iceland declared the result a "shock."

The same sense of shock was felt by almost everyone. The old guard, because it had lost. And the new party, because it had won.

There had never been such a result – not in Iceland or anywhere else. Reykjavik had long been a bastion of the conservatives. That was now history. With 34.7% of the vote, the city had voted a new party into power: the anarcho-surrealists.

The leading candidate, Jón Gnarr, a comedian by profession, entered the riotous hall full of drunken anarchists looking rather circumspect. Almost shyly, he raised his fist and said: "Welcome to the revolution!" And: "Hurray for all kinds of things!"

Gnarr was now the mayor of Reykjavik. After the Prime Minister, he held the second-most important office in the land. A third of all Icelanders live in the capital and another third commute to work there. The city is the country’s largest employer and its mayor the boss of some 8,000 civil servants.

No wonder the result was such a shock. Reykjavik was beset by crises: the crash of the banking system had also brought everything else to the verge of bankruptcy – the country, the city, companies and inhabitants. And the anarcho-surrealist party – the self-appointed Best Party – was composed largely of rock stars, mainly former punks. Not one of them had ever been part of any political body. Their slogan for overcoming the crisis was simple: "More punk, less hell!"

What were the conservative voters of Reykjavik thinking? On May 27, 2010, they did something that people usually only talk about: they took power out of the hands of politicians and gave it to amateurs.

And so began a unique political experiment. How would the anti-politicians govern? Like punks? Like anarchists? In the midst of a crisis?

"It was group sex"

A glance at the most important campaign promises of the Best Party is more than enough to highlight the audacity of Reykjavik’s voters. They were promised free towels at swimming pools, a polar bear for the zoo, the import of Jews, "so that someone who understands something about economics finally comes to Iceland", a drug-free parliament by 2020, inaction ("we’ve worked hard all our lives and want to take a well-paid four-year break now"), Disneyland with free weekly passes for the unemployed ("where they can have themselves photographed with Goofy"), greater understanding for the rural population ("every Icelandic farmer should be able to take a sheep to a hotel for free"), free bus tickets. And all this with the caveat: "We can promise more than any other party because we will break every campaign promise."

by Constantin Seibt, Tages Anzeiger | Read more:
Image: Halldor Kolbeins

Uses of 'Namaste' at My Local Yoga Studio


“Namaste, everybody. ‘Namaste’ is a Sanskrit word that means ‘The divine in me recognizes the divine in you.’ ” —A benediction, delivered by yoga instructors at the end of practice. (...)

* * * 

Greetings, yogis! This e-mail is to inform you that in order to meet rising costs we will be raising our fee to $35 per class at the beginning of July. As a gentle reminder, we will continue to enforce our no-show and tardy policies. Yogis who fail to arrive at least five minutes prior to class will not be admitted and will be charged the full class fee. Cancellations must be made at least twenty-four hours in advance. Yogis cancelling less than twenty-four hours in advance will be charged the full class fee plus a five-dollar service charge. Yogis who fail to show up for a reserved class without making any cancellation will be charged the full class fee plus a ten-dollar service charge. Arriving more than five minutes late for a class will be counted as a no-show without a cancellation. Please let us know if you have any questions. Happy practice! Namaste! (...)

* * * 
Instructor: Let’s take a lotus or a half lotus or whatever is comfortable for you. Press your hands together at your heart center. Really plug those sit bones into the earth. And when you feel really centered you might turn to your neighbor and extend to them some of that energy from the heart center by offering them a Namaste. “Namaste” is a Sanskrit word that means “the divine in me recognizes the divine in you.” And when we offer our neighbor the Namaste we’re able to meet them in a place of peace that is free of ego. Namaste.
Male yogi: Namaste.
Female yogi: Namaste.
Male yogi: Namaste. What’s your name?
Female yogi: Natalie.
Male yogi: Namaste? Your name is Namaste? That’s crazy!
Female yogi: No, it’s Natalie.
Male yogi: Oh, wow. I totally thought you said Namaste. That would have been hilarious. But Natalie’s cool. What are you doing later, Natalie?
Female yogi: Probably going home.
Male yogi: No, don’t go home. You should come hang out with me.
Female yogi: Um, I don’t think I can.
Male yogi: That’s not true. You just said you were just going home. Come to my place. We can practice our headstands.
Female yogi: Yeah, I don’t think so. Sorry.
Male yogi: Come on. Why don’t you like me? I’ll make you a smoothie.
Female yogi: I think we need to be quiet now.
Male yogi: Alright. That’s fine, Natalie. Don’t you even want to know my name?
Female yogi: Fine, what’s your name?
Male yogi: Namaste.
Female yogi: What?
Male yogi: Just kidding. It’s Cody.

by Andrea Denhoed, New Yorker |  Read more:
Image: Bendik Kaltenborn.

Rosie Huntington-Whiteley. Photo by Greg Williams
via:

Friday, June 20, 2014

The End of Higher Education’s Golden Age

Interest in using the internet to slash the price of higher education is being driven in part by hope for new methods of teaching, but also by frustration with the existing system. The biggest threat those of us working in colleges and universities face isn’t video lectures or online tests. It’s the fact that we live in institutions perfectly adapted to an environment that no longer exists.

In the first half of the 20th century, higher education was a luxury and a rarity in the U.S. Only 5% or so of adults, overwhelmingly drawn from well-off families, had attended college. That changed with the end of WWII. Waves of discharged soldiers subsidized by the GI Bill, joined by the children of the expanding middle class, wanted or needed a college degree. From 1945 to 1975, the number of undergraduates increased five-fold, and graduate students nine-fold. PhDs graduating one year got jobs teaching the ever-larger cohort of freshman arriving the next.

This growth was enthusiastically subsidized. Between 1960 and 1975, states more than doubled their rate of appropriations for higher education, from four dollars per thousand in state revenue to ten. Post-secondary education extended its previous mission—liberal arts education for elites—to include both more basic research from faculty and more job-specific training for students. Federal research grants quadrupled; at the same time, a Bachelor’s degree became an entry-level certificate for an increasing number of jobs.

This expansion created tensions among the goals of open-ended exploration, training for the workplace, and research, but these tensions were masked by new income. Decades of rising revenue meant we could simultaneously become the research arm of government and industry, the training ground for a rapidly professionalizing workforce, and the preservers of the liberal arts tradition. Even better, we could do all of this while increasing faculty ranks and reducing the time senior professors spent in the classroom. This was the Golden Age of American academia.

As long as the income was incoming, we were happy to trade funding our institutions with our money (tuition and endowment) for funding them with other people’s money (loans and grants). And so long as college remained a source of cheap and effective job credentials, our new sources of support—students with loans, governments with research agendas—were happy to let us regard ourselves as priests instead of service workers.

Then the 1970s happened. The Vietnam war ended, removing “not getting shot at” as a reason to enroll. The draft ended too, reducing the ranks of future GIs, while the GI bill was altered to shift new costs onto former soldiers. During the oil shock and subsequent recession, demand for education shrank for the first time since 1945, and states began persistently reducing the proportion of tax dollars going to higher education, eventually cutting the previous increase in half. Rising costs and falling subsidies have driven average tuition up over 1000% since the 1970s.

Golden Age economics ended. Golden Age assumptions did not. For 30 wonderful years, we had been unusually flush, and we got used to it, re-designing our institutions to assume unending increases in subsidized demand. This did not happen. The year it started not happening was 1975. Every year since, we tweaked our finances, hiking tuition a bit, taking in a few more students, making large lectures a little larger, hiring a few more adjuncts.

Each of these changes looked small and reversible at the time. Over the decades, though, we’ve behaved like an embezzler who starts by taking only what he means to replace, but ends up extracting so much that embezzlement becomes the system. There is no longer enough income to support a full-time faculty and provide students a reasonably priced education of acceptable quality at most colleges or universities in this country.

Our current difficulties are not the result of current problems. They are the bill coming due for 40 years of trying to preserve a set of practices that have outlived the economics that made them possible.

by Clay Shirky |  Read more:
Image: via:

The Great Seattle Pizza Smackdown

[ed. I don't usually do product endorsements, and I'll admit, I've never tried Serious Pie, but I have tried Flying Squirrel and it's the best pizza I've ever had in the state of Washington (not to mention the hot green olive appetizer!)]

Get a pizzaiolo talking about his craft and you’re going to get an earful about correct proofing times and proper firing temps. Some are so bound to the strictures of their tradition they even get their pies certified (see New World Neapolitan).

Then there are those who favor a more innovative approach. “We went in with no preconceptions besides making really good pizza,” recalls Eric Tanaka, who as executive chef and partner at Tom Douglas Restaurants was one of the visionaries consulted when Douglas decided to open Serious Pie in 2007.

“For four months we drove our baker insane,” Tanaka chuckles. Tanaka and Douglas asked Dahlia Bakery’s Gwen LeBlanc to come up with a more breadlike pizza crust than they were seeing around town; she produced three versions and the chefs nixed them all. “We wanted crispier, with a little bit of crumb to it,” he explains. So she lightened the dough with a softer flour. Too cakey. She tossed in semolina for texture and wound up with too gritty a crunch.

They went back to the original three—and through trial and error (“and a lot of Gwen shouting at us to get our act together!”) they discovered that the meaningful variable was fermentation. Too little, and the dough would lose flavor; too much, and it would smell too yeasty. The formula had to change whenever the weather did. “Crust is much more art than science,” Tanaka says.

They tinkered with their huge 1,000-degree Wood Stone oven, finally settling on six minutes at 650 degrees, with potatoes going on at the beginning, cheese in the last two minutes (“scorched cheese equals greasy pizza,” says Tanaka), and lighter charcuterie closer to the end. They made investigative pilgrimages to the country’s best pizzerias, from Oakland’s Pizzaiolo (“where we learned to finish pies with salt”) to the legendary Pizzeria Bianco in Phoenix.

Serious Pie, the stylishly dim and perpetually packed little joint downtown, is Seattle’s Pizzeria Bianco. All that bakery back-and-forth shows: Crusts are golden and toothsome, chewy within and crispy without, burnished with delectable bits of char. On top go A-list ingredients dictated by flavor and seasonality rather than tradition—hence our category name, the Seattle-style pie. One favorite is shingled with thin-sliced Yukon Golds, fragrant with rosemary and Pecorino Romano; another is dotted with sweet fennel sausage and cherry bomb peppers. Much attention is paid to cheese—which Douglas’s chefs intended to make themselves but learned, about four slammed hours into their first slammed day, that doing so would be improbable at best. Like all toppings here, from purslane to chanterelles to delicata squash, cheeses are ferociously seasonal—perhaps an Italian truffle variant, perhaps a tart sheep’s milk. The result is simply a masterpiece.

That fiercely local identity marks South End newcomer Flying Squirrel as a Seattle-style innovator too, crafting its own exuberant combos from mostly organic toppings. One pie features local asparagus, goat cheese, and pine nuts; another—the Washington—stars ham from local charcuterie Zoe’s Meats, with caramelized onions and Granny Smiths. Owner Bill Coury is as irreverent about the rules as Douglas, claiming that he wasn’t setting out to be authentically anything; he just wanted to make a classic American “everybody pizza” with the best-tasting stuff on top. And—judging from the crowds of hipsters and families that throng the friendly, Mexican coke–and–Olympia Beer sort of Seward Park storefront every night—that he has done.

BOTTOM LINE: Flying Squirrel offers pristine toppings on a bumpy landscape of highly flavorful crust, which nevertheless lacks the moisture and chewy satisfaction of Serious Pie’s serious triumph.

by Calise Cardenas, Christopher Werner, Jessica Voelker, Kathryn Robinson, Laura Cassidy, Matthew Halverson | Read more:
Image: Lindsay Borden

Tobias Kruse, Into the Sun, 2010

Thursday, June 19, 2014

The End of Sleep?

Work, friendships, exercise, parenting, eating, reading — there just aren’t enough hours in the day. To live fully, many of us carve those extra hours out of our sleep time. Then we pay for it the next day. A thirst for life leads many to pine for a drastic reduction, if not elimination, of the human need for sleep. Little wonder: if there were a widespread disease that similarly deprived people of a third of their conscious lives, the search for a cure would be lavishly funded. It’s the Holy Grail of sleep researchers, and they might be closing in.

As with most human behaviours, it’s hard to tease out our biological need for sleep from the cultural practices that interpret it. The practice of sleeping for eight hours on a soft, raised platform, alone or in pairs, is actually atypical for humans. Many traditional societies sleep more sporadically, and social activity carries on throughout the night. Group members get up when something interesting is going on, and sometimes they fall asleep in the middle of a conversation as a polite way of exiting an argument. Sleeping is universal, but there is glorious diversity in the ways we accomplish it.

Different species also seem to vary widely in their sleeping behaviours. Herbivores sleep far less than carnivores — four hours for an elephant, compared with almost 20 hours for a lion — presumably because it takes them longer to feed themselves, and vigilance is selected for. As omnivores, humans fall between the two sleep orientations. Circadian rhythms, the body’s master clock, allow us to anticipate daily environmental cycles and arrange our organs’ functions along a timeline so that they do not interfere with one another.

Our internal clock is based on a chemical oscillation, a feedback loop on the cellular level that takes 24 hours to complete and is overseen by a clump of brain cells behind our eyes (near the meeting point of our optic nerves). Even deep in a cave with no access to light or clocks, our bodies keep an internal schedule of almost exactly 24 hours. This isolated state is called ‘free-running’, and we know it’s driven from within because our body clock runs just a bit slow. When there is no light to reset it, we wake up a few minutes later each day. It’s a deeply engrained cycle found in every known multi-cellular organism, as inevitable as the rotation of the Earth — and the corresponding day-night cycles — that shaped it.
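The arithmetic of that slow drift is easy to sketch. The article doesn’t give an exact free-running period, so the 10-minutes-slow figure below is purely an assumed illustration of how “a few minutes later each day” compounds in isolation:

```python
# Illustration only: assumes a hypothetical free-running period of
# 24 hours 10 minutes (the real figure varies person to person).

FREE_RUNNING_PERIOD_MIN = 24 * 60 + 10  # the body clock's own cycle, in minutes
SOLAR_DAY_MIN = 24 * 60                 # the external day it no longer tracks


def wake_time_drift(days: int) -> int:
    """Cumulative minutes the wake time slips later after `days` without light cues."""
    return days * (FREE_RUNNING_PERIOD_MIN - SOLAR_DAY_MIN)


for d in (1, 7, 30):
    print(f"day {d}: waking {wake_time_drift(d)} minutes later")
```

Under that assumption the slippage is small day to day but adds up fast: ten minutes after one day, over an hour after a week, five hours after a month in the cave.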

Human sleep comprises several 90-minute cycles of brain activity. In a person who is awake, electroencephalogram (EEG) readings are very complex, but as sleep sets in, the brain waves get slower, descending through Stage 1 (relaxation) and Stage 2 (light sleep) down to Stage 3 and slow-wave deep sleep. After this restorative phase, the brain has a spurt of rapid eye movement (REM) sleep, which in many ways resembles the waking brain. Woken from this phase, sleepers are likely to report dreaming.

One of the most valuable outcomes of work on sleep deprivation is the emergence of clear individual differences — groups of people who reliably perform better after sleepless nights, as well as those who suffer disproportionately. The division is quite stark and seems based on a few gene variants that code for neurotransmitter receptors, opening the possibility that it will soon be possible to tailor stimulant variety and dosage to genetic type.

Around the turn of this millennium, the biological imperative to sleep for a third of every 24-hour period began to seem quaint and unnecessary. Just as the birth control pill had uncoupled sex from reproduction, designer stimulants seemed poised to remove us yet further from the archaic requirements of the animal kingdom.

Any remedy for sleepiness must target the brain’s prefrontal cortex. The executive functions of the brain are particularly vulnerable to sleep deprivation, and people who are sleep-deprived are both more likely to take risks, and less likely to be able to make novel or imaginative decisions, or to plan a course of action. Designer stimulants such as modafinil and armodafinil (marketed as Provigil and Nuvigil) bring these areas back online and are highly effective at countering the negative effects of sleep loss. Over the course of 60 hours awake, a 400 mg dose of modafinil every eight hours reinstates rested performance levels in everything from stamina for boring tasks to originality for complex ones. It staves off the risk propensity that accompanies sleepiness and brings both declarative memory (facts or personal experiences) and non-declarative memory (learned skills or unconscious associations) back up to snuff.

It’s impressive, but also roughly identical to the restorative effects of 20 mg of dextroamphetamine or 600 mg of caffeine (the equivalent of around six cups of coffee). Though caffeine has a shorter half-life and has to be taken every four hours or so, it enjoys the advantages of being ubiquitous and cheap.

For any college student who has pulled an all-nighter guzzling energy drinks to finish an essay, it should come as no surprise that designer stimulants enable extended, focused work. A more challenging test, for a person wired on amphetamines, would be to successfully navigate a phone call from his or her grandmother. It is very difficult to design a stimulant that offers focus without tunnelling – that is, without losing the ability to relate well to one's wider environment and therefore make socially nuanced decisions. Irritability and impatience grate on team dynamics and social skills, but such nuances are usually missed in drug studies, where they are treated as unreliable self-reported data. These problems were largely ignored in the early enthusiasm for drug-based ways to reduce sleep. (...)

One reason why stimulants have proved a disappointment in reducing sleep is that we still don’t really understand enough about why we sleep in the first place. More than a hundred years of sleep deprivation studies have confirmed the truism that sleep deprivation makes people sleepy. Slow reaction times, reduced information processing capacity, and failures of sustained attention are all part of sleepiness, but the most reliable indicator is shortened sleep latency, or the tendency to fall asleep faster when lying in a dark room. The exasperatingly recursive conclusion is that sleep’s primary function is to maintain our wakefulness during the day. (...)

The Somneo mask is only one of many attempts to maintain clarity in the mind of a soldier. Another initiative involves dietary supplements. Omega-3 fatty acids, such as those found in fish oils, sustain performance over 48 hours without sleep — as well as boosting attention and learning — and Marines can expect to see more of the nutritional supplement making its way into rations. The question remains whether measures that block short-term sleep deprivation symptoms will also protect against its long-term effects. A scan of the literature warns us that years of sleep deficit will make us fat, sick and stupid. A growing list of ailments has been linked to circadian disturbance as a risk factor.

Both the Somneo mask and the supplements — in other words, darkness and diet — are ways of practising ‘sleep hygiene’, or a suite of behaviours to optimise a healthy slumber. These can bring the effect of a truncated night’s rest up to the expected norm — eight hours of satisfying shut-eye. But proponents of human enhancement aren’t satisfied with normal. Always pushing the boundaries, some techno-pioneers will go to radical lengths to shrug off the need for sleep altogether.

by Jessa Gamble, Aeon |  Read more:
Image: Carlos Barria/Reuters

Wednesday, June 18, 2014


Ryan McGinley

What Is Literature?

There’s a new definition of literature in town. It has been slouching toward us for some time now but may have arrived officially in 2009, with the publication of Greil Marcus and Werner Sollors’s A New Literary History of America. Alongside essays on Twain, Fitzgerald, Frost, and Henry James, there are pieces about Jackson Pollock, Chuck Berry, the telephone, the Winchester rifle, and Linda Lovelace. Apparently, “literary means not only what is written but what is voiced, what is expressed, what is invented, in whatever form” — in which case maps, sermons, comic strips, cartoons, speeches, photographs, movies, war memorials, and music all huddle beneath the literary umbrella. Books continue to matter, of course, but not in the way that earlier generations took for granted. In 2004, “the most influential cultural figure now alive,” according to Newsweek, wasn’t a novelist or historian; it was Bob Dylan. Not incidentally, the index to A New Literary History contains more references to Dylan than to Stephen Crane and Hart Crane combined. Dylan may have described himself as “a song-and-dance man,” but Marcus and Sollors and such critics as Christopher Ricks beg to differ. Dylan, they contend, is one of the greatest poets this nation has ever produced (in point of fact, he has been nominated for a Nobel Prize in Literature every year since 1996).

The idea that literature contains multitudes is not new. For the greater part of its history, lit(t)eratura referred to any writing formed with letters. Up until the eighteenth century, the only true makers of creative work were poets, and what they aspired to was not literature but poesy. A piece of writing was “literary” only if enough learned readers spoke well of it; but as Thomas Rymer observed in 1674, “till of late years England was as free from Criticks, as it is from Wolves.”

So when did literature in the modern sense begin? According to Trevor Ross’s The Making of the English Literary Canon, that would have been on February 22, 1774. Ross is citing with theatrical flair the case of Donaldson v. Beckett, which did away with the notion of “perpetual copyright” and, as one contemporary onlooker put it, allowed “the Works of Shakespeare, of Addison, Pope, Swift, Gay, and many other excellent Authors of the present Century . . . to be the Property of any Person.” It was at this point, Ross claims, that “the canon became a set of commodities to be consumed. It became literature rather than poetry.” What Ross and other historians of literature credibly maintain is that the literary canon was largely an Augustan invention evolving from la querelle des Anciens et des Modernes, which pitted cutting-edge seventeenth-century authors against the Greek and Latin poets. Because a canon of vastly superior ancient writers — Homer, Virgil, Cicero — already existed, a modern canon had been slow to develop. One way around this dilemma was to create new ancients closer to one’s own time, which is precisely what John Dryden did in 1700, when he translated Chaucer into Modern English. Dryden not only made Chaucer’s work a classic; he helped canonize English literature itself.

The word canon, from the Greek, originally meant “measuring stick” or “rule” and was used by early Christian theologians to differentiate the genuine, or canonical, books of the Bible from the apocryphal ones. Canonization, of course, also referred to the Catholic practice of designating saints, but the term was not applied to secular writings until 1768, when the Dutch classicist David Ruhnken spoke of a canon of ancient orators and poets.

The usage may have been novel, but the idea of a literary canon was already in the air, as evidenced by a Cambridge don’s proposal in 1595 that universities “take the course to canonize [their] owne writers, that not every bold ballader . . . may pass current with a Poet’s name.” A similar nod toward hierarchies appeared in Daniel Defoe’s A Vindication of the Press (1718) and Joseph Spence’s plan for a dictionary of British poets. Writing in 1730, Spence suggested that the “known marks for ye different magnitudes of the Stars” could be used to establish rankings such as “great Genius & fine writer,” “fine writer,” “middling Poet,” and “one never to be read.” In 1756, Joseph Warton’s essay on Pope designated “four different classes and degrees” of poets, with Spenser, Shakespeare, and Milton comfortably leading the field. By 1781, Samuel Johnson’s Lives of the English Poets had confirmed the canon’s constituents — fifty-two of them — but also fine-tuned standards of literary merit so that the common reader, “uncorrupted with literary prejudice,” would know what to look for.

In effect, the canon formalized modern literature as a select body of imaginative writings that could stand up to the Greek and Latin texts. Although exclusionary by nature, it was originally intended to impart a sense of unity; critics hoped that a tradition of great writers would help create a national literature. What was the apotheosis of Shakespeare and Milton if not an attempt to show the world that England and not France — especially not France — had produced such geniuses? The canon anointed the worthy and, by implication, the unworthy, functioning as a set of commandments that saved people the trouble of deciding what to read.

The canon — later the canon of Great Books — endured without real opposition for nearly two centuries before antinomian forces concluded that enough was enough. I refer, of course, to that mixed bag of politicized professors and theory-happy revisionists of the 1970s and 1980s — feminists, ethnicists, Marxists, semioticians, deconstructionists, new historicists, and cultural materialists — all of whom took exception to the canon while not necessarily seeing eye to eye about much else. Essentially, the postmodernists were against — well, essentialism. While books were conceived in private, they reflected the ideological makeup of their host culture; and the criticism that gave them legitimacy served only to justify the prevailing social order. The implication could not be plainer: If books simply reinforced the cultural values that helped shape them, then any old book or any new book was worthy of consideration. Literature with a capital L was nothing more than a bossy construct, and the canon, instead of being genuine and beneficial, was unreal and oppressive.

Traditionalists, naturally, were aghast. The canon, they argued, represented the best that had been thought and said, and its contents were an expression of the human condition: the joy of love, the sorrow of death, the pain of duty, the horror of war, and the recognition of self and soul. Some canonical writers conveyed this with linguistic brio, others through a sensitive and nuanced portrayal of experience; and their books were part of an ongoing conversation, whose changing sum was nothing less than the history of ideas. To mess with the canon was to mess with civilization itself.

Although it’s pretty to think that great books arise because great writers are driven to write exactly what they want to write, canon formation was, in truth, a result of the middle class’s desire to see its own values reflected in art. As such, the canon was tied to the advance of literacy, the surging book trade, the growing appeal of novels, the spread of coffee shops and clubs, the rise of reviews and magazines, the creation of private circulating libraries, the popularity of serialization and three-decker novels, and, finally, the eventual takeover of literature by institutions of higher learning.

by Arthur Krystal, Harpers |  Read more:
Image: “Two Tall Books,” by Abelardo Morell. Courtesy the artist and Edwynn Houk Gallery, New York City

What if Quality Journalism Isn't?


By now you have probably already read the leaked Innovation Report from The New York Times. And if you haven't, you should. It provides a great overview of the challenges and thinking in the industry, not just at The New York Times but at every newspaper and magazine.

To very quickly summarize it, The New York Times has had a ton of success with its digital subscriptions, but despite that, is facing a continual decline in digital traffic.


And like all other media companies, they blame this on the transformation of formats and a failure to engage digital readers.

They say the solution to this is to develop more digitally focused 'growth' tactics: asking all journalists to submit tweets with every article, being smarter about how content is presented and featured, and generally optimizing the format for digital. (...)

So why would I subscribe to a newspaper whose product has such little relevance to me as a person?

But wait a minute, I hear you say, this is just in relation to you. If we look at the market as a whole (the mass-market approach), each article is relevant to a percentage of the audience. And you are right: each article is relevant to a percentage of the whole, but to the individual reader you are not relevant at all.

And this is why newspapers fail. Your business model only makes sense for a mass market, not for the individual. This is not a winning strategy. Yes, it used to work in the old days of media, but that was a result of scarcity.

Think about this in relation to the world of retail. What type of brand are newspapers really like?

Are newspapers a brand like Nike, Starbucks, Ford, Tesla, GoPro, Converse, or Apple? Or are they more like Walmart, Tesco, or Aldi?

Well, companies like Nike, Starbucks, Tesla and GoPro are extremely focused brands targeting people with a very specific customer need, within a very narrow niche. This is the exact opposite of the traditional newspaper model. Each one of Nike's products, for example, is highly valuable to its niche, but not that relevant outside it.

Whereas Walmart and Tesco are mass-market brands that offer a lot of everything in the hope that people might decide to buy something. They trade off relevance for size and convenience.

In other words, newspapers are the supermarkets of journalism. You are not the brands. Each article (your product) has almost zero value, but as a whole, there is always a small percentage of your offering that people need.

That doesn't necessarily sound like a bad thing, but people don't connect with Walmart or Tesco. They don't really care about them, nor are they inspired by what they do.

No matter how hard they try, supermarkets with a mass-market/low-relevancy appeal will never appear on a list of the most 'engaging brands', or on a list of brands that people love.

And this is the essence of the trouble newspapers are facing today. It's not that we now live in a digital world, and that we are behaving in a different way. It's that your editorial focus is to be the supermarket of news.

The New York Times is publishing 300 new articles every single day, and in their Innovation Report they discuss how to surface even more from their archives. This is the Walmart business model.

The problem with this model is that supermarkets only work when visiting the individual brands is too hard. That's why we go to supermarkets. In the physical world, visiting 40 different stores just to get your groceries would take forever, so we prefer to go to one place, the supermarket, where we can get everything... even if most of the products there aren't what we need.

It's the same with how print newspapers used to work. We needed this one place to go because it was too hard to get news from multiple sources.

But on the internet, we have solved this problem. You can follow as many sources as you want, and it's as easy to visit 1000 different sites as it is to just visit one. Everything is just one click away. In fact, that's how people use social media. It's all about the links.

Imagine what would happen to real-world supermarkets, if every brand was just one step away, regardless of what you wanted. Would you still go to a supermarket, knowing that 85% of the products you see would be of no interest to you? Or would you instead turn directly to each brand that you care about?

This is what is happening to the world of news. You are trying to be the supermarket of news, not realizing that this editorial focus is exactly why people are turning away from you.

by Thomas Baekdal, Baekdal.com | Read more:
Image: uncredited