Saturday, June 21, 2014

Chill Bear


[ed. One reason why McNeil River State Game Sanctuary is so popular.]

New Open-source Router Firmware Opens Your Wi-Fi Network to Strangers

We’ve often heard security folks explain their belief that one of the best ways to protect Web privacy and security on one's home turf is to lock down one's private Wi-Fi network with a strong password. But a coalition of advocacy organizations is calling such conventional wisdom into question.

Members of the “Open Wireless Movement,” including the Electronic Frontier Foundation (EFF), Free Press, Mozilla, and Fight for the Future are advocating that we open up our Wi-Fi private networks (or at least a small slice of our available bandwidth) to strangers. They claim that such a random act of kindness can actually make us safer online while simultaneously facilitating a better allocation of finite broadband resources.

The OpenWireless.org website explains the group’s initiative. “We are aiming to build technologies that would make it easy for Internet subscribers to portion off their wireless networks for guests and the public while maintaining security, protecting privacy, and preserving quality of access," its mission statement reads. "And we are working to debunk myths (and confront truths) about open wireless while creating technologies and legal precedent to ensure it is safe, private, and legal to open your network.”

One such technology, which EFF plans to unveil at the Hackers on Planet Earth (HOPE X) conference next month, is open-source router firmware called Open Wireless Router. This firmware would enable individuals to share a portion of their Wi-Fi networks with anyone nearby, password-free, as Adi Kamdar, an EFF activist, told Ars on Friday.

Home network sharing tools are not new, and the EFF has been touting the benefits of open-sourcing Web connections for years, but Kamdar believes this new tool marks the second phase in the open wireless initiative. Unlike previous tools, he claims, EFF’s software will be free for all, will not require any sort of registration, and will actually make surfing the Web safer and more efficient.

Open Wi-Fi initiative members have argued that the act of providing wireless networks to others is a form of “basic politeness… like providing heat and electricity, or a hot cup of tea” to a neighbor, as security expert Bruce Schneier described it.

Walled off

Kamdar said that the new firmware utilizes smart technologies that prioritize the network owner's traffic over others', so good Samaritans won't have to wait for Netflix to load because of strangers using their home networks. What's more, he said, "every connection is walled off from all other connections," so as to decrease the risk of unwanted snooping.
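
Ars doesn't detail how the firmware accomplishes this, but the core idea behind owner-first prioritization is easy to sketch. Below is a toy strict-priority scheduler in Python (my own illustration of the concept, not EFF's actual code), in which guest packets are served only when the owner's queue is empty:

```python
from collections import deque

class OwnerFirstScheduler:
    """Toy strict-priority scheduler: the owner's queue is always
    drained before the guest queue, so guests only ever consume
    bandwidth the owner isn't using."""

    def __init__(self):
        self.owner_q = deque()
        self.guest_q = deque()

    def enqueue(self, packet, is_owner):
        (self.owner_q if is_owner else self.guest_q).append(packet)

    def dequeue(self):
        # Owner packets always win; guest traffic waits for idle capacity.
        if self.owner_q:
            return self.owner_q.popleft()
        if self.guest_q:
            return self.guest_q.popleft()
        return None

sched = OwnerFirstScheduler()
sched.enqueue("guest: web page", is_owner=False)
sched.enqueue("owner: Netflix chunk", is_owner=True)
assert sched.dequeue() == "owner: Netflix chunk"  # owner served first
assert sched.dequeue() == "guest: web page"       # guest gets the leftovers
```

A real router would do this with kernel-level traffic shaping rather than application code, and the "walled off" guarantee would come from client isolation at the link layer, but the scheduling logic is the same in spirit.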

Additionally, EFF hopes that opening one’s Wi-Fi network will, in the long run, make it more difficult to tie an IP address to an individual.

“From a legal perspective, we have been trying to tackle this idea that law enforcement and certain bad plaintiffs have been pushing, that your IP address is tied to your identity. Your identity is not your IP address. You shouldn't be targeted by a copyright troll just because they know your IP address," said Kamdar.

by Joe Silver, Ars Technica |  Read more:
Image: uncredited

More Punk, Less Hell

When the ballots had been counted, the Prime Minister of Iceland declared the result a "shock."

The same sense of shock was felt by almost everyone. The old guard, because it had lost. And the new party, because it had won.

There had never been such a result – not in Iceland or anywhere else. Reykjavik had long been a bastion of the conservatives. That was now history. With 34.7% of the vote, the city had voted a new party into power: the anarcho-surrealists.

The leading candidate, Jón Gnarr, a comedian by profession, entered the riotous hall full of drunken anarchists looking rather circumspect. Almost shyly, he raised his fist and said: "Welcome to the revolution!" And: "Hurray for all kinds of things!"

Gnarr was now the mayor of Reykjavik. After the Prime Minister, he held the second-most important office in the land. A third of all Icelanders live in the capital and another third commute to work there. The city is the country’s largest employer and its mayor the boss of some 8,000 civil servants.

No wonder the result was such a shock. Reykjavik was beset by crises: the crash of the banking system had also brought everything else to the verge of bankruptcy – the country, the city, companies and inhabitants. And the anarcho-surrealist party – the self-appointed Best Party – was composed largely of rock stars, mainly former punks. Not one of them had ever been part of any political body. Their slogan for overcoming the crisis was simple: "More punk, less hell!"

What were the conservative voters of Reykjavik thinking? On May 27, 2010, they did something that people usually only talk about: they took power out of the hands of politicians and gave it to amateurs.

And so began a unique political experiment. How would the anti-politicians govern? Like punks? Like anarchists? In the midst of a crisis?

"It was group sex"

A glance at the most important campaign promises of the Best Party is more than enough to highlight the audacity of Reykjavik’s voters. They were promised free towels at swimming pools, a polar bear for the zoo, the import of Jews, "so that someone who understands something about economics finally comes to Iceland", a drug-free parliament by 2020, inaction ("we’ve worked hard all our lives and want to take a well-paid four-year break now"), Disneyland with free weekly passes for the unemployed ("where they can have themselves photographed with Goofy"), greater understanding for the rural population ("every Icelandic farmer should be able to take a sheep to a hotel for free"), free bus tickets. And all this with the caveat: "We can promise more than any other party because we will break every campaign promise."

by Constantin Seibt, Tages Anzeiger | Read more:
Image: Halldor Kolbeins

Uses of 'Namaste' at My Local Yoga Studio


“Namaste, everybody. ‘Namaste’ is a Sanskrit word that means ‘The divine in me recognizes the divine in you.’ ” —A benediction, delivered by yoga instructors at the end of practice. (...)

* * * 

Greetings, yogis! This e-mail is to inform you that in order to meet rising costs we will be raising our fee to $35 per class at the beginning of July. As a gentle reminder, we will continue to enforce our no-show and tardy policies. Yogis who fail to arrive at least five minutes prior to class will not be admitted and will be charged the full class fee. Cancellations must be made at least twenty-four hours in advance. Yogis cancelling less than twenty-four hours in advance will be charged the full class fee plus a five-dollar service charge. Yogis who fail to show up for a reserved class without making any cancellation will be charged the full class fee plus a ten-dollar service charge. Arriving more than five minutes late for a class will be counted as a no-show without a cancellation. Please let us know if you have any questions. Happy practice! Namaste! (...)

* * * 

Instructor: Let’s take a lotus or a half lotus or whatever is comfortable for you. Press your hands together at your heart center. Really plug those sit bones into the earth. And when you feel really centered you might turn to your neighbor and extend to them some of that energy from the heart center by offering them a Namaste. “Namaste” is a Sanskrit word that means “the divine in me recognizes the divine in you.” And when we offer our neighbor the Namaste we’re able to meet them in a place of peace that is free of ego. Namaste.
Male yogi: Namaste.
Female yogi: Namaste.
Male yogi: Namaste. What’s your name?
Female yogi: Natalie.
Male yogi: Namaste? Your name is Namaste? That’s crazy!
Female yogi: No, it’s Natalie.
Male yogi: Oh, wow. I totally thought you said Namaste. That would have been hilarious. But Natalie’s cool. What are you doing later, Natalie?
Female yogi: Probably going home.
Male yogi: No, don’t go home. You should come hang out with me.
Female yogi: Um, I don’t think I can.
Male yogi: That’s not true. You just said you were just going home. Come to my place. We can practice our headstands.
Female yogi: Yeah, I don’t think so. Sorry.
Male yogi: Come on. Why don’t you like me? I’ll make you a smoothie.
Female yogi: I think we need to be quiet now.
Male yogi: Alright. That’s fine, Natalie. Don’t you even want to know my name?
Female yogi: Fine, what’s your name?
Male yogi: Namaste.
Female yogi: What?
Male yogi: Just kidding. It’s Cody.

by Andrea Denhoed, New Yorker |  Read more:
Image: Bendik Kaltenborn.

Rosie Huntington-Whiteley. Photo by Greg Williams
via:

Friday, June 20, 2014

The End of Higher Education’s Golden Age

Interest in using the internet to slash the price of higher education is being driven in part by hope for new methods of teaching, but also by frustration with the existing system. The biggest threat those of us working in colleges and universities face isn’t video lectures or online tests. It’s the fact that we live in institutions perfectly adapted to an environment that no longer exists.

In the first half of the 20th century, higher education was a luxury and a rarity in the U.S. Only 5% or so of adults, overwhelmingly drawn from well-off families, had attended college. That changed with the end of WWII. Waves of discharged soldiers subsidized by the GI Bill, joined by the children of the expanding middle class, wanted or needed a college degree. From 1945 to 1975, the number of undergraduates increased five-fold, and graduate students nine-fold. PhDs graduating one year got jobs teaching the ever-larger cohort of freshmen arriving the next.

This growth was enthusiastically subsidized. Between 1960 and 1975, states more than doubled their rate of appropriations for higher education, from four dollars per thousand in state revenue to ten. Post-secondary education extended its previous mission—liberal arts education for elites—to include both more basic research from faculty and more job-specific training for students. Federal research grants quadrupled; at the same time, a Bachelor’s degree became an entry-level certificate for an increasing number of jobs.

This expansion created tensions among the goals of open-ended exploration, training for the workplace, and research, but these tensions were masked by new income. Decades of rising revenue meant we could simultaneously become the research arm of government and industry, the training ground for a rapidly professionalizing workforce, and the preservers of the liberal arts tradition. Even better, we could do all of this while increasing faculty ranks and reducing the time senior professors spent in the classroom. This was the Golden Age of American academia.

As long as the income was incoming, we were happy to trade funding our institutions with our money (tuition and endowment) for funding them with other people’s money (loans and grants). And so long as college remained a source of cheap and effective job credentials, our new sources of support—students with loans, governments with research agendas—were happy to let us regard ourselves as priests instead of service workers.

Then the 1970s happened. The Vietnam war ended, removing “not getting shot at” as a reason to enroll. The draft ended too, reducing the ranks of future GIs, while the GI bill was altered to shift new costs onto former soldiers. During the oil shock and subsequent recession, demand for education shrank for the first time since 1945, and states began persistently reducing the proportion of tax dollars going to higher education, eventually cutting the previous increase in half. Rising costs and falling subsidies have driven average tuition up over 1000% since the 1970s.

Golden Age economics ended. Golden Age assumptions did not. For 30 wonderful years, we had been unusually flush, and we got used to it, re-designing our institutions to assume unending increases in subsidized demand. This did not happen. The year it started not happening was 1975. Every year since, we tweaked our finances, hiking tuition a bit, taking in a few more students, making large lectures a little larger, hiring a few more adjuncts.

Each of these changes looked small and reversible at the time. Over the decades, though, we’ve behaved like an embezzler who starts by taking only what he means to replace, but ends up extracting so much that embezzlement becomes the system. There is no longer enough income to support a full-time faculty and provide students a reasonably priced education of acceptable quality at most colleges or universities in this country.

Our current difficulties are not the result of current problems. They are the bill coming due for 40 years of trying to preserve a set of practices that have outlived the economics that made them possible.

by Clay Shirky |  Read more:
Image: via:

The Great Seattle Pizza Smackdown

[ed. I don't usually do product endorsements, and I'll admit, I've never tried Serious Pie, but I have tried Flying Squirrel and it's the best pizza I've ever had in the state of Washington (not to mention the hot green olive appetizer!).]

Get a pizzaiolo talking about his craft and you’re going to get an earful about correct proofing times and proper firing temps. Some are so bound to the strictures of their tradition they even get their pies certified (see New World Neapolitan).

Then there are those who favor a more innovative approach. “We went in with no preconceptions besides making really good pizza,” recalls Eric Tanaka, who as executive chef and partner at Tom Douglas Restaurants was one of the visionaries consulted when Douglas decided to open Serious Pie in 2007.

“For four months we drove our baker insane,” Tanaka chuckles. Tanaka and Douglas asked Dahlia Bakery’s Gwen LeBlanc to come up with a more breadlike pizza crust than they were seeing around town; she produced three versions and the chefs nixed them all. “We wanted crispier, with a little bit of crumb to it,” he explains. So she lightened the dough with a softer flour. Too cakey. She tossed in semolina for texture and wound up with too gritty a crunch.

They went back to the original three—and through trial and error (“and a lot of Gwen shouting at us to get our act together!”) they discovered that the meaningful variable was fermentation. Too little, and the dough would lose flavor; too much, and it would smell too yeasty. The formula had to change whenever the weather did. “Crust is much more art than science,” Tanaka says.

They tinkered with their huge 1,000-degree Wood Stone oven, finally settling on six minutes at 650 degrees, with potatoes going on at the beginning, cheese in the last two minutes (“scorched cheese equals greasy pizza,” says Tanaka), and lighter charcuterie closer to the end. They made investigative pilgrimages to the country’s best pizzerias, from Oakland’s Pizzaiolo (“where we learned to finish pies with salt”) to the legendary Pizzeria Bianco in Phoenix.

Serious Pie, the stylishly dim and perpetually packed little joint downtown, is Seattle’s Pizzeria Bianco. All that bakery back-and-forth shows: Crusts are golden and toothsome, chewy within and crispy without, burnished with delectable bits of char. On top go A-list ingredients dictated by flavor and seasonality rather than tradition—hence our category name, the Seattle-style pie. One favorite is shingled with thin-sliced Yukon Golds, fragrant with rosemary and Pecorino Romano; another is dotted with sweet fennel sausage and cherry bomb peppers. Much attention is paid to cheese—which Douglas’s chefs intended to make themselves but learned, at about the fourth slammed hour of their first slammed day, that doing so would be improbable at best. Like all toppings here, from purslane to chanterelles to delicata squash, cheeses are ferociously seasonal—perhaps an Italian truffle variant, perhaps a tart sheep’s milk. The result is simply a masterpiece.

That fiercely local identity marks South End newcomer Flying Squirrel as a Seattle-style innovator too, crafting its own exuberant combos from mostly organic toppings. One pie features local asparagus, goat cheese, and pine nuts; another—the Washington—stars ham from local charcuterie Zoe’s Meats, with caramelized onions and Granny Smiths. Owner Bill Coury is as irreverent about the rules as Douglas, claiming that he wasn’t setting out to be authentically anything; he just wanted to make a classic American “everybody pizza” with the best-tasting stuff on top. And—judging from the crowds of hipsters and families that throng the friendly, Mexican Coke–and–Olympia Beer sort of Seward Park storefront every night—that he has done.

BOTTOM LINE: Flying Squirrel offers pristine toppings on a bumpy landscape of highly flavorful crust, which nevertheless lacks the moisture and chewy satisfaction of Serious Pie’s serious triumph.

by Calise Cardenas, Christopher Werner, Jessica Voelker, Kathryn Robinson, Laura Cassidy, Matthew Halverson | Read more:
Image: Lindsay Borden

Tobias Kruse, Into the Sun, 2010

Thursday, June 19, 2014

The End of Sleep?

Work, friendships, exercise, parenting, eating, reading — there just aren’t enough hours in the day. To live fully, many of us carve those extra hours out of our sleep time. Then we pay for it the next day. A thirst for life leads many to pine for a drastic reduction, if not elimination, of the human need for sleep. Little wonder: if there were a widespread disease that similarly deprived people of a third of their conscious lives, the search for a cure would be lavishly funded. It’s the Holy Grail of sleep researchers, and they might be closing in.

As with most human behaviours, it’s hard to tease out our biological need for sleep from the cultural practices that interpret it. The practice of sleeping for eight hours on a soft, raised platform, alone or in pairs, is actually atypical for humans. Many traditional societies sleep more sporadically, and social activity carries on throughout the night. Group members get up when something interesting is going on, and sometimes they fall asleep in the middle of a conversation as a polite way of exiting an argument. Sleeping is universal, but there is glorious diversity in the ways we accomplish it.

Different species also seem to vary widely in their sleeping behaviours. Herbivores sleep far less than carnivores — four hours for an elephant, compared with almost 20 hours for a lion — presumably because it takes them longer to feed themselves, and vigilance is selected for. As omnivores, humans fall between the two sleep orientations. Circadian rhythms, the body’s master clock, allow us to anticipate daily environmental cycles and arrange our organs’ functions along a timeline so that they do not interfere with one another.

Our internal clock is based on a chemical oscillation, a feedback loop on the cellular level that takes 24 hours to complete and is overseen by a clump of brain cells behind our eyes (near the meeting point of our optic nerves). Even deep in a cave with no access to light or clocks, our bodies keep an internal schedule of almost exactly 24 hours. This isolated state is called ‘free-running’, and we know it’s driven from within because our body clock runs just a bit slow. When there is no light to reset it, we wake up a few minutes later each day. It’s a deeply engrained cycle found in every known multi-cellular organism, as inevitable as the rotation of the Earth — and the corresponding day-night cycles — that shaped it.
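
That slow drift compounds predictably. A back-of-the-envelope sketch, assuming an intrinsic free-running period of about 24.2 hours (a commonly cited average; the figure is my assumption, not the article's, and it varies from person to person):

```python
# How fast does a free-running sleeper's schedule drift with no light cues?
# Assumes a 24.2-hour intrinsic period (commonly cited average; varies).
intrinsic_period_h = 24.2
drift_min_per_day = (intrinsic_period_h - 24.0) * 60  # 12 minutes per day

# Days until wake time has drifted a full 12 hours,
# i.e. day and night have effectively traded places:
days_to_invert = (12 * 60) / drift_min_per_day
print(f"{drift_min_per_day:.0f} min/day drift; "
      f"schedule inverts in about {days_to_invert:.0f} days")
# -> 12 min/day drift; schedule inverts in about 60 days
```

Waking a few minutes later each day sounds trivial, but with nothing to reset the clock, a cave dweller's subjective morning would slowly rotate all the way around the clock.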

Human sleep comprises several 90-minute cycles of brain activity. In a person who is awake, electroencephalogram (EEG) readings are very complex, but as sleep sets in, the brain waves get slower, descending through Stage 1 (relaxation) and Stage 2 (light sleep) down to Stage 3 and slow-wave deep sleep. After this restorative phase, the brain has a spurt of rapid eye movement (REM) sleep, which in many ways resembles the waking brain. Woken from this phase, sleepers are likely to report dreaming.

One of the most valuable outcomes of work on sleep deprivation is the emergence of clear individual differences — groups of people who reliably perform better after sleepless nights, as well as those who suffer disproportionately. The division is quite stark and seems based on a few gene variants that code for neurotransmitter receptors, opening the possibility that it will soon be possible to tailor stimulant variety and dosage to genetic type.

Around the turn of this millennium, the biological imperative to sleep for a third of every 24-hour period began to seem quaint and unnecessary. Just as the birth control pill had uncoupled sex from reproduction, designer stimulants seemed poised to remove us yet further from the archaic requirements of the animal kingdom.

Any remedy for sleepiness must target the brain’s prefrontal cortex. The executive functions of the brain are particularly vulnerable to sleep deprivation, and people who are sleep-deprived are both more likely to take risks, and less likely to be able to make novel or imaginative decisions, or to plan a course of action. Designer stimulants such as modafinil and armodafinil (marketed as Provigil and Nuvigil) bring these areas back online and are highly effective at countering the negative effects of sleep loss. Over the course of 60 hours awake, a 400mg dose of modafinil every eight hours reinstates rested performance levels in everything from stamina for boring tasks to originality for complex ones. It staves off the risk propensity that accompanies sleepiness and brings both declarative memory (facts or personal experiences) and non-declarative memory (learned skills or unconscious associations) back up to snuff.

It’s impressive, but also roughly identical to the restorative effects of 20 mg of dextroamphetamine or 600 mg of caffeine (the equivalent of around six coffee cups). Though caffeine has a shorter half-life and has to be taken every four hours or so, it enjoys the advantages of being ubiquitous and cheap.
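
That four-hour cadence falls out of the pharmacokinetics. A rough sketch assuming simple first-order elimination, with half-life values that are my assumptions rather than the article's (roughly five hours for caffeine, around 13 for modafinil; both vary widely between individuals):

```python
def remaining_fraction(hours_since_dose, half_life_h):
    """Fraction of a dose still circulating, assuming first-order decay."""
    return 0.5 ** (hours_since_dose / half_life_h)

# Rough, commonly cited half-lives; real values vary widely per person.
CAFFEINE_T_HALF = 5.0    # hours
MODAFINIL_T_HALF = 13.0  # hours

for t in (4, 8):
    print(f"after {t} h: caffeine {remaining_fraction(t, CAFFEINE_T_HALF):.0%} left, "
          f"modafinil {remaining_fraction(t, MODAFINIL_T_HALF):.0%} left")
# after 4 h: caffeine 57% left, modafinil 81% left
# after 8 h: caffeine 33% left, modafinil 65% left
```

By hour eight, two-thirds of a caffeine dose is gone while roughly two-thirds of a modafinil dose is still working, which is why one gets re-dosed every four hours and the other every eight.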

For any college student who has pulled an all-nighter guzzling energy drinks to finish an essay, it should come as no surprise that designer stimulants enable extended, focused work. A more challenging test, for a person wired on amphetamines, would be to successfully navigate a phone call from his or her grandmother. It is very difficult to design a stimulant that offers focus without tunnelling – that is, without losing the ability to relate well to one's wider environment and therefore make socially nuanced decisions. Irritability and impatience grate on team dynamics and social skills, but such nuances are usually missed in drug studies, where they tend to be dismissed as unreliable self-reported data. These problems were largely ignored in the early enthusiasm for drug-based ways to reduce sleep. (...)

One reason why stimulants have proved a disappointment in reducing sleep is that we still don’t really understand enough about why we sleep in the first place. More than a hundred years of sleep deprivation studies have confirmed the truism that sleep deprivation makes people sleepy. Slow reaction times, reduced information processing capacity, and failures of sustained attention are all part of sleepiness, but the most reliable indicator is shortened sleep latency, or the tendency to fall asleep faster when lying in a dark room. An exasperatingly recursive conclusion remains that sleep’s primary function is to maintain our wakefulness during the day. (...)

The Somneo mask is only one of many attempts to maintain clarity in the mind of a soldier. Another initiative involves dietary supplements. Omega-3 fatty acids, such as those found in fish oils, sustain performance over 48 hours without sleep — as well as boosting attention and learning — and Marines can expect to see more of the nutritional supplement making its way into rations. The question remains whether measures that block short-term sleep deprivation symptoms will also protect against its long-term effects. A scan of the literature warns us that years of sleep deficit will make us fat, sick and stupid. A growing list of ailments has been linked to circadian disturbance as a risk factor.

Both the Somneo mask and the supplements — in other words, darkness and diet — are ways of practising ‘sleep hygiene’, or a suite of behaviours to optimise a healthy slumber. These can bring the effect of a truncated night’s rest up to the expected norm — eight hours of satisfying shut-eye. But proponents of human enhancement aren’t satisfied with normal. Always pushing the boundaries, some techno-pioneers will go to radical lengths to shrug off the need for sleep altogether.

by Jessa Gamble, Aeon |  Read more:
Image: Carlos Barria/Reuters

Wednesday, June 18, 2014


Ryan McGinley
via:

What Is Literature?

There’s a new definition of literature in town. It has been slouching toward us for some time now but may have arrived officially in 2009, with the publication of Greil Marcus and Werner Sollors’s A New Literary History of America. Alongside essays on Twain, Fitzgerald, Frost, and Henry James, there are pieces about Jackson Pollock, Chuck Berry, the telephone, the Winchester rifle, and Linda Lovelace. Apparently, “literary means not only what is written but what is voiced, what is expressed, what is invented, in whatever form” — in which case maps, sermons, comic strips, cartoons, speeches, photographs, movies, war memorials, and music all huddle beneath the literary umbrella. Books continue to matter, of course, but not in the way that earlier generations took for granted. In 2004, “the most influential cultural figure now alive,” according to Newsweek, wasn’t a novelist or historian; it was Bob Dylan. Not incidentally, the index to A New Literary History contains more references to Dylan than to Stephen Crane and Hart Crane combined. Dylan may have described himself as “a song-and-dance man,” but Marcus and Sollors and such critics as Christopher Ricks beg to differ. Dylan, they contend, is one of the greatest poets this nation has ever produced (in point of fact, he has been nominated for a Nobel Prize in Literature every year since 1996).

The idea that literature contains multitudes is not new. For the greater part of its history, lit(t)eratura referred to any writing formed with letters. Up until the eighteenth century, the only true makers of creative work were poets, and what they aspired to was not literature but poesy. A piece of writing was “literary” only if enough learned readers spoke well of it; but as Thomas Rymer observed in 1674, “till of late years England was as free from Criticks, as it is from Wolves.”

So when did literature in the modern sense begin? According to Trevor Ross’s The Making of the English Literary Canon, that would have been on February 22, 1774. Ross is citing with theatrical flair the case of Donaldson v. Beckett, which did away with the notion of “perpetual copyright” and, as one contemporary onlooker put it, allowed “the Works of Shakespeare, of Addison, Pope, Swift, Gay, and many other excellent Authors of the present Century . . . to be the Property of any Person.” It was at this point, Ross claims, that “the canon became a set of commodities to be consumed. It became literature rather than poetry.” What Ross and other historians of literature credibly maintain is that the literary canon was largely an Augustan invention evolving from la querelle des Anciens et des Modernes, which pitted cutting-edge seventeenth-century authors against the Greek and Latin poets. Because a canon of vastly superior ancient writers — Homer, Virgil, Cicero — already existed, a modern canon had been slow to develop. One way around this dilemma was to create new ancients closer to one’s own time, which is precisely what John Dryden did in 1700, when he translated Chaucer into Modern English. Dryden not only made Chaucer’s work a classic; he helped canonize English literature itself.

The word canon, from the Greek, originally meant “measuring stick” or “rule” and was used by early Christian theologians to differentiate the genuine, or canonical, books of the Bible from the apocryphal ones. Canonization, of course, also referred to the Catholic practice of designating saints, but the term was not applied to secular writings until 1768, when the Dutch classicist David Ruhnken spoke of a canon of ancient orators and poets.

The usage may have been novel, but the idea of a literary canon was already in the air, as evidenced by a Cambridge don’s proposal in 1595 that universities “take the course to canonize [their] owne writers, that not every bold ballader . . . may pass current with a Poet’s name.” A similar nod toward hierarchies appeared in Daniel Defoe’s A Vindication of the Press (1718) and Joseph Spence’s plan for a dictionary of British poets. Writing in 1730, Spence suggested that the “known marks for ye different magnitudes of the Stars” could be used to establish rankings such as “great Genius & fine writer,” “fine writer,” “middling Poet,” and “one never to be read.” In 1756, Joseph Warton’s essay on Pope designated “four different classes and degrees” of poets, with Spenser, Shakespeare, and Milton comfortably leading the field. By 1781, Samuel Johnson’s Lives of the English Poets had confirmed the canon’s constituents — fifty-two of them — but also fine-tuned standards of literary merit so that the common reader, “uncorrupted with literary prejudice,” would know what to look for.

In effect, the canon formalized modern literature as a select body of imaginative writings that could stand up to the Greek and Latin texts. Although exclusionary by nature, it was originally intended to impart a sense of unity; critics hoped that a tradition of great writers would help create a national literature. What was the apotheosis of Shakespeare and Milton if not an attempt to show the world that England and not France — especially not France — had produced such geniuses? The canon anointed the worthy and, by implication, the unworthy, functioning as a set of commandments that saved people the trouble of deciding what to read.

The canon — later the canon of Great Books — endured without real opposition for nearly two centuries before antinomian forces concluded that enough was enough. I refer, of course, to that mixed bag of politicized professors and theory-happy revisionists of the 1970s and 1980s — feminists, ethnicists, Marxists, semioticians, deconstructionists, new historicists, and cultural materialists — all of whom took exception to the canon while not necessarily seeing eye to eye about much else. Essentially, the postmodernists were against — well, essentialism. While books were conceived in private, they reflected the ideological makeup of their host culture; and the criticism that gave them legitimacy served only to justify the prevailing social order. The implication could not be plainer: If books simply reinforced the cultural values that helped shape them, then any old book or any new book was worthy of consideration. Literature with a capital L was nothing more than a bossy construct, and the canon, instead of being genuine and beneficial, was unreal and oppressive.

Traditionalists, naturally, were aghast. The canon, they argued, represented the best that had been thought and said, and its contents were an expression of the human condition: the joy of love, the sorrow of death, the pain of duty, the horror of war, and the recognition of self and soul. Some canonical writers conveyed this with linguistic brio, others through a sensitive and nuanced portrayal of experience; and their books were part of an ongoing conversation, whose changing sum was nothing less than the history of ideas. To mess with the canon was to mess with civilization itself.

Although it’s pretty to think that great books arise because great writers are driven to write exactly what they want to write, canon formation was, in truth, a result of the middle class’s desire to see its own values reflected in art. As such, the canon was tied to the advance of literacy, the surging book trade, the growing appeal of novels, the spread of coffee shops and clubs, the rise of reviews and magazines, the creation of private circulating libraries, the popularity of serialization and three-decker novels, and, finally, the eventual takeover of literature by institutions of higher learning.

by Arthur Krystal, Harpers |  Read more:
Image: “Two Tall Books,” by Abelardo Morell. Courtesy the artist and Edwynn Houk Gallery, New York City

What if Quality Journalism Isn't?


By now you have probably already read the leaked Innovation Report from The New York Times. And if you haven't, you should. It provides a great overview of the challenges and thinking that are happening in the industry, not just for The New York Times, but for every newspaper and magazine.

To very quickly summarize it, The New York Times has had a ton of success with its digital subscriptions, but despite that, is facing a continual decline in digital traffic.


And like all other media companies, they blame this on the transformation of formats and a failure to engage digital readers.

They say the solution to this is to develop more digitally focused 'growth' tactics, like asking all journalists to submit tweets with every article, being smarter about how the content is presented and featured, and generally focusing on optimizing the format for digital. (...)

So why would I subscribe to a newspaper whose product has such little relevance to me as a person?

But wait a minute, I hear you say, this is just in relation to you. If we look at the market as a whole (the mass-market approach), each article is relevant to a percentage of the audience. And you are right. Each article is relevant to some percentage of the whole, but to the individual, you are not relevant at all.

And this is why newspapers fail. You are based on a business model that only makes sense to a mass market, but not to the individual. This is not a winning strategy. Yes, it used to work in the old days of media, but that was as a result of scarcity.

Think about this in relation to the world of retail. What type of brand are newspapers really like?

Are newspapers a brand like Nike, Starbucks, Ford, Tesla, GoPro, Converse, or Apple? Or are they more like Walmart, Tesco, or Aldi?

Well, companies like Nike, Starbucks, Tesla and GoPro are extremely focused brands targeting people with a very specific customer need, within a very narrow niche. This is the exact opposite of the traditional newspaper model. Each one of Nike's products, for example, is highly valuable to just its niche, but not that relevant outside it.

Whereas Walmart and Tesco are mass-market brands that offer a lot of everything in the hope that people might decide to buy something. They trade off relevance for size and convenience.

In other words, newspapers are the supermarkets of journalism. You are not the brands. Each article (your product) has almost zero value, but as a whole, there is always a small percentage of your offering that people need.

That doesn't necessarily sound like a bad thing, but people don't connect with Walmart or Tesco. They don't really care about them, nor are they inspired by what they do.

No matter how hard they try, supermarkets with a mass-market/low-relevancy appeal will never appear on a list of the most 'engaging brands', or on a list of brands that people love.

And this is the essence of the trouble newspapers are facing today. It's not that we now live in a digital world, and that we are behaving in a different way. It's that your editorial focus is to be the supermarket of news.

The New York Times is publishing 300 new articles every single day, and in their Innovation Report they discuss how to surface even more from their archives. This is the Walmart business model.

The problem with this model is that supermarkets only work when visiting the individual brands is too hard to do. That's why we go to supermarkets. In the physical world, visiting 40 different stores just to get your groceries would take forever, so we prefer to only go to one place, the supermarket, where we can get everything... even if most of the other products there aren't what we need.

It's the same with how print newspapers used to work. We needed this one place to go because it was too hard to get news from multiple sources.

But on the internet, we have solved this problem. You can follow as many sources as you want, and it's as easy to visit 1000 different sites as it is to just visit one. Everything is just one click away. In fact, that's how people use social media. It's all about the links.

Imagine what would happen to real-world supermarkets, if every brand was just one step away, regardless of what you wanted. Would you still go to a supermarket, knowing that 85% of the products you see would be of no interest to you? Or would you instead turn directly to each brand that you care about?

This is what is happening to the world of news. You are trying to be the supermarket of news, not realizing that this editorial focus is exactly why people are turning away from you.

by Thomas Baekdal, Baekdal.com | Read more:
Image: uncredited

Flowers From Alaska

[ed. Timing is everything (and location, location, location!)]

Peonies—those gorgeous, pastel flowers that can bloom as big as dinner plates—are grown all over the world, but there’s only one place where they open up in July. That’s in Alaska, and ever since a horticulturalist discovered this bit of peony trivia, growers here have been planting the flowers as quickly as they can.

While speaking at a conference in the late 1990s, Pat Holloway, a horticulturalist at University of Alaska Fairbanks and manager of the Georgeson Botanical Garden, casually mentioned that peonies, which are wildly popular with brides, were among the many flowers that grew in Alaska. After her talk, a flower grower from Oregon found her in the crowd. “He said, ‘You have something no one else in the world has,’” she recalls. “‘You have peonies blooming in July.’”

Realizing the implications of his insight, Holloway planted a test plot at the botanical garden in 2001. “The first year, they just grew beautifully and they looked gorgeous,” says Holloway. She wrote about her blooms in a report and posted it online. To her surprise, a flower broker from England found the reports of her trials and called to order 100,000 peonies a week. Holloway laughed, informing him that she only had a few dozen plants. But she told a few growers around the state, and that was enough to convince several to plant peonies of their own. “And once they started advertising them, they found out—you can sell these,” says Holloway.

It helps that peonies not only survive, but thrive in Alaska. “Up here, the peonies go from breaking through the soil to flowering within four weeks,” says Aaron Stierle, a peony farmer at Solitude Springs Farm in Fairbanks. “That’s half the time it takes anywhere else in the world.” Blooms from Alaska are unusually big, up to eight inches across, from the long hours of sunshine. The state’s harsh climate staves off most diseases and insects. Even moose—one of the state’s most common garden pests—aren’t a threat, as they hate the taste of peonies.

But their biggest advantage, which that Oregon grower was so keen to point out, is in filling a seasonal gap in the global market that could elevate peonies to the status of roses, in an elite club of cut flowers that are available all year long. Flower markets from England to Taiwan are eager to place orders for Alaska’s midsummer beauties and so are brokers from coast to coast here in the states, where Alaska's peonies bloom just in time for late summer weddings. But first, the peony growers in Alaska must endure the early pains of starting a new industry.

by Amy Nordrum, The Atlantic |  Read more:
Image: Elizabeth Beks/North Pole Peonies

Michelangelo Pistoletto, The Experts (1960s) Art Basel
via:

What’s Up With That: Building Bigger Roads Actually Makes Traffic Worse


I grew up in Los Angeles, the city by the freeway by the sea. And if there’s one thing I’ve known ever since I could sit up in my car seat, it’s that you should expect to run into traffic at any point of the day. Yes, commute hours are the worst, but I’ve run into dead-stop bumper-to-bumper cars on the 405 at 2 a.m.

As a kid, I used to ask my parents why they couldn’t just build more lanes on the freeway. Maybe transform them all into double-decker highways with cars zooming on the upper and lower levels. Except, as it turns out, that wouldn’t work. Because if there’s anything that traffic engineers have discovered in the last few decades it’s that you can’t build your way out of congestion. It’s the roads themselves that cause traffic.

The concept is called induced demand, which is economist-speak for when increasing the supply of something (like roads) makes people want that thing even more. Though some traffic engineers made note of this phenomenon at least as early as the 1960s, it is only in recent years that social scientists have collected enough data to show how this happens pretty much every time we build new roads. These findings imply that the ways we traditionally go about trying to mitigate jams are essentially fruitless, and that we’d all be spending a lot less time in traffic if we could just be a little more rational.

But before we get to the solutions, we have to take a closer look at the problem. In 2009, two economists—Matthew Turner of the University of Toronto and Gilles Duranton of the University of Pennsylvania—decided to compare the amount of new roads and highways built in different U.S. cities between 1980 and 2000, and the total number of miles driven in those cities over the same period.

“We found that there’s this perfect one-to-one relationship,” said Turner.

If a city had increased its road capacity by 10 percent between 1980 and 1990, then the amount of driving in that city went up by 10 percent. If the amount of roads in the same city then went up by 11 percent between 1990 and 2000, the total number of miles driven also went up by 11 percent. It’s like the two figures were moving in perfect lockstep, changing at the same exact rate.

Now, correlation doesn’t mean causation. Maybe traffic engineers in U.S. cities happen to know exactly the right amount of roads to build to satisfy driving demand. But Turner and Duranton think that’s unlikely. The modern interstate network mostly follows the plan originally conceived by the federal government in 1947, and it seems incredibly coincidental that road engineers at the time could have successfully predicted driving demand more than half a century in the future.

A more likely explanation, Turner and Duranton argue, is what they call the fundamental law of road congestion: New roads will create new drivers, resulting in the intensity of traffic staying the same.
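
Stated as arithmetic, the law says the elasticity of driving with respect to road capacity is one: build x percent more road and you get x percent more vehicle-miles, so miles driven per lane-mile never budges. A minimal sketch (the 1.0 elasticity is Turner and Duranton's finding; the city figures are invented for illustration):

```python
def congestion_after_expansion(base_vmt, base_lane_miles,
                               capacity_growth, elasticity=1.0):
    """Traffic intensity (vehicle-miles per lane-mile) after a road
    expansion. With an elasticity of 1.0, intensity never changes."""
    new_lane_miles = base_lane_miles * (1 + capacity_growth)
    new_vmt = base_vmt * (1 + elasticity * capacity_growth)
    return new_vmt / new_lane_miles

# Hypothetical city: 1,000,000 daily vehicle-miles on 100 lane-miles.
before = 1_000_000 / 100
after = congestion_after_expansion(1_000_000, 100, capacity_growth=0.10)
print(before, after)  # 10000.0 10000.0 -> 10% more road, same congestion
```

Only an elasticity below 1.0 would let new lanes thin out traffic, and that is exactly what the data failed to show.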

Intuitively, I would expect the opposite: that expanding a road network works like replacing a small pipe with a bigger one, allowing the water (or cars) to flow better. Instead, it’s like the larger pipe is drawing more water into itself. The first thing you wonder here is where all these extra drivers are coming from. I mean, are they just popping out of the asphalt as engineers lay down new roads?

The answer has to do with what roads allow people to do: move around. As it turns out, we humans love moving around. And if you expand people’s ability to travel, they will do it more, living farther away from where they work and therefore being forced to drive into town. Making driving easier also means that people take more trips in the car than they otherwise would. Finally, businesses that rely on roads will swoop into cities with many of them, bringing trucking and shipments. The problem is that all these things together erode any extra capacity you’ve built into your street network, meaning traffic levels stay pretty much constant. As long as driving on the roads remains easy and cheap, people have an almost unlimited desire to use them.

You might think that increasing investment in public transit could ease this mess. Many railway and bus projects are sold on this basis, with politicians promising that traffic will decrease once ridership grows. But the data showed that even in cities that expanded public transit, road congestion stayed exactly the same. Add a new subway line and some drivers will switch to transit. But new drivers replace them. It’s the same effect as adding a new lane to the highway: congestion remains constant. (That’s not to say that public transit doesn’t do good; it also allows more people to move around. These projects just shouldn’t be hyped up as traffic decongestants, say Turner and Duranton.)

by Adam Mann, Wired |  Read more:
Image: USGS

Richard Hamilton: Release (1972) Screenprint
via: