Thursday, June 19, 2014
The End of Sleep?
Work, friendships, exercise, parenting, eating, reading — there just aren’t enough hours in the day. To live fully, many of us carve those extra hours out of our sleep time. Then we pay for it the next day. A thirst for life leads many to pine for a drastic reduction, if not elimination, of the human need for sleep. Little wonder: if there were a widespread disease that similarly deprived people of a third of their conscious lives, the search for a cure would be lavishly funded. It’s the Holy Grail of sleep researchers, and they might be closing in.
As with most human behaviours, it’s hard to tease out our biological need for sleep from the cultural practices that interpret it. The practice of sleeping for eight hours on a soft, raised platform, alone or in pairs, is actually atypical for humans. Many traditional societies sleep more sporadically, and social activity carries on throughout the night. Group members get up when something interesting is going on, and sometimes they fall asleep in the middle of a conversation as a polite way of exiting an argument. Sleeping is universal, but there is glorious diversity in the ways we accomplish it.
Different species also seem to vary widely in their sleeping behaviours. Herbivores sleep far less than carnivores — four hours for an elephant, compared with almost 20 hours for a lion — presumably because it takes them longer to feed themselves, and vigilance is selected for. As omnivores, humans fall between the two sleep orientations. Circadian rhythms, the body’s master clock, allow us to anticipate daily environmental cycles and arrange our organs’ functions along a timeline so that they do not interfere with one another.
Our internal clock is based on a chemical oscillation, a feedback loop on the cellular level that takes 24 hours to complete and is overseen by a clump of brain cells behind our eyes (near the meeting point of our optic nerves). Even deep in a cave with no access to light or clocks, our bodies keep an internal schedule of almost exactly 24 hours. This isolated state is called ‘free-running’, and we know it’s driven from within because our body clock runs just a bit slow. When there is no light to reset it, we wake up a few minutes later each day. It’s a deeply engrained cycle found in every known multi-cellular organism, as inevitable as the rotation of the Earth — and the corresponding day-night cycles — that shaped it.
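[ed. A back-of-the-envelope sketch of that free-running drift, in Python. The internal period of roughly 24.2 hours is an assumption on my part (a commonly cited average); the article itself doesn't give a figure.]

FREE_RUNNING_PERIOD_H = 24.2  # assumed internal clock period, in hours (illustrative)
EXTERNAL_DAY_H = 24.0         # the day-night cycle that normally resets the clock

drift_per_day_min = (FREE_RUNNING_PERIOD_H - EXTERNAL_DAY_H) * 60

# With no light to reset the clock, wake time slips a little later each day.
for day in range(1, 8):
    print(f"Day {day}: waking about {day * drift_per_day_min:.0f} minutes later than usual")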
Human sleep comprises several 90-minute cycles of brain activity. In a person who is awake, electroencephalogram (EEG) readings are very complex, but as sleep sets in, the brain waves get slower, descending through Stage 1 (relaxation) and Stage 2 (light sleep) down to Stage 3 and slow-wave deep sleep. After this restorative phase, the brain has a spurt of rapid eye movement (REM) sleep, which in many ways resembles the waking brain. Woken from this phase, sleepers are likely to report dreaming.
One of the most valuable outcomes of work on sleep deprivation is the emergence of clear individual differences — groups of people who reliably perform better after sleepless nights, as well as those who suffer disproportionately. The division is quite stark and seems based on a few gene variants that code for neurotransmitter receptors, opening the possibility that stimulant variety and dosage will soon be tailored to genetic type.
Around the turn of this millennium, the biological imperative to sleep for a third of every 24-hour period began to seem quaint and unnecessary. Just as the birth control pill had uncoupled sex from reproduction, designer stimulants seemed poised to remove us yet further from the archaic requirements of the animal kingdom.
Any remedy for sleepiness must target the brain’s prefrontal cortex. The executive functions of the brain are particularly vulnerable to sleep deprivation, and people who are sleep-deprived are both more likely to take risks, and less likely to be able to make novel or imaginative decisions, or to plan a course of action. Designer stimulants such as modafinil and armodafinil (marketed as Provigil and Nuvigil) bring these areas back online and are highly effective at countering the negative effects of sleep loss. Over the course of 60 hours awake, a 400 mg dose of modafinil every eight hours reinstates rested performance levels in everything from stamina for boring tasks to originality for complex ones. It staves off the risk propensity that accompanies sleepiness and brings both declarative memory (facts or personal experiences) and non-declarative memory (learned skills or unconscious associations) back up to snuff.
It’s impressive, but also roughly identical to the restorative effects of 20 mg of dextroamphetamine or 600 mg of caffeine (the equivalent of around six coffee cups). Though caffeine has a shorter half-life and has to be taken every four hours or so, it enjoys the advantages of being ubiquitous and cheap.
For any college student who has pulled an all-nighter guzzling energy drinks to finish an essay, it should come as no surprise that designer stimulants enable extended, focused work. A more challenging test, for a person wired on amphetamines, would be to successfully navigate a phone call from his or her grandmother. It is very difficult to design a stimulant that offers focus without tunnelling – that is, without losing the ability to relate well to one's wider environment and therefore make socially nuanced decisions. Irritability and impatience grate on team dynamics and social skills, but such nuances are usually missed in drug studies, where they are treated as unreliable self-reported data. These problems were largely ignored in the early enthusiasm for drug-based ways to reduce sleep. (...)
One reason why stimulants have proved a disappointment in reducing sleep is that we still don’t understand enough about why we sleep in the first place. More than a hundred years of sleep deprivation studies have confirmed the truism that sleep deprivation makes people sleepy. Slow reaction times, reduced information processing capacity, and failures of sustained attention are all part of sleepiness, but the most reliable indicator is shortened sleep latency, or the tendency to fall asleep faster when lying in a dark room. The exasperatingly recursive conclusion remains that sleep’s primary function is to maintain our wakefulness during the day. (...)
The Somneo mask is only one of many attempts to maintain clarity in the mind of a soldier. Another initiative involves dietary supplements. Omega-3 fatty acids, such as those found in fish oils, sustain performance over 48 hours without sleep — as well as boosting attention and learning — and Marines can expect to see more of the nutritional supplement making its way into rations. The question remains whether measures that block short-term sleep deprivation symptoms will also protect against its long-term effects. A scan of the literature warns us that years of sleep deficit will make us fat, sick and stupid. A growing list of ailments has been linked to circadian disturbance as a risk factor.
Both the Somneo mask and the supplements — in other words, darkness and diet — are ways of practising ‘sleep hygiene’, or a suite of behaviours to optimise a healthy slumber. These can bring the effect of a truncated night’s rest up to the expected norm — eight hours of satisfying shut-eye. But proponents of human enhancement aren’t satisfied with normal. Always pushing the boundaries, some techno-pioneers will go to radical lengths to shrug off the need for sleep altogether.

by Jessa Gamble, Aeon | Read more:
Image: Carlos Barria/Reuters
Wednesday, June 18, 2014
What Is Literature?
There’s a new definition of literature in town. It has been slouching toward us for some time now but may have arrived officially in 2009, with the publication of Greil Marcus and Werner Sollors’s A New Literary History of America. Alongside essays on Twain, Fitzgerald, Frost, and Henry James, there are pieces about Jackson Pollock, Chuck Berry, the telephone, the Winchester rifle, and Linda Lovelace. Apparently, “literary means not only what is written but what is voiced, what is expressed, what is invented, in whatever form” — in which case maps, sermons, comic strips, cartoons, speeches, photographs, movies, war memorials, and music all huddle beneath the literary umbrella. Books continue to matter, of course, but not in the way that earlier generations took for granted. In 2004, “the most influential cultural figure now alive,” according to Newsweek, wasn’t a novelist or historian; it was Bob Dylan. Not incidentally, the index to A New Literary History contains more references to Dylan than to Stephen Crane and Hart Crane combined. Dylan may have described himself as “a song-and-dance man,” but Marcus and Sollors and such critics as Christopher Ricks beg to differ. Dylan, they contend, is one of the greatest poets this nation has ever produced (in point of fact, he has been nominated for a Nobel Prize in Literature every year since 1996).
The idea that literature contains multitudes is not new. For the greater part of its history, lit(t)eratura referred to any writing formed with letters. Up until the eighteenth century, the only true makers of creative work were poets, and what they aspired to was not literature but poesy. A piece of writing was “literary” only if enough learned readers spoke well of it; but as Thomas Rymer observed in 1674, “till of late years England was as free from Criticks, as it is from Wolves.”
So when did literature in the modern sense begin? According to Trevor Ross’s The Making of the English Literary Canon, that would have been on February 22, 1774. Ross is citing with theatrical flair the case of Donaldson v. Beckett, which did away with the notion of “perpetual copyright” and, as one contemporary onlooker put it, allowed “the Works of Shakespeare, of Addison, Pope, Swift, Gay, and many other excellent Authors of the present Century . . . to be the Property of any Person.” It was at this point, Ross claims, that “the canon became a set of commodities to be consumed. It became literature rather than poetry.” What Ross and other historians of literature credibly maintain is that the literary canon was largely an Augustan invention evolving from la querelle des Anciens et des Modernes, which pitted cutting-edge seventeenth-century authors against the Greek and Latin poets. Because a canon of vastly superior ancient writers — Homer, Virgil, Cicero — already existed, a modern canon had been slow to develop. One way around this dilemma was to create new ancients closer to one’s own time, which is precisely what John Dryden did in 1700, when he translated Chaucer into Modern English. Dryden not only made Chaucer’s work a classic; he helped canonize English literature itself.
The word canon, from the Greek, originally meant “measuring stick” or “rule” and was used by early Christian theologians to differentiate the genuine, or canonical, books of the Bible from the apocryphal ones. Canonization, of course, also referred to the Catholic practice of designating saints, but the term was not applied to secular writings until 1768, when the Dutch classicist David Ruhnken spoke of a canon of ancient orators and poets.
The usage may have been novel, but the idea of a literary canon was already in the air, as evidenced by a Cambridge don’s proposal in 1595 that universities “take the course to canonize [their] owne writers, that not every bold ballader . . . may pass current with a Poet’s name.” A similar nod toward hierarchies appeared in Daniel Defoe’s A Vindication of the Press (1718) and Joseph Spence’s plan for a dictionary of British poets. Writing in 1730, Spence suggested that the “known marks for ye different magnitudes of the Stars” could be used to establish rankings such as “great Genius & fine writer,” “fine writer,” “middling Poet,” and “one never to be read.” In 1756, Joseph Warton’s essay on Pope designated “four different classes and degrees” of poets, with Spenser, Shakespeare, and Milton comfortably leading the field. By 1781, Samuel Johnson’s Lives of the English Poets had confirmed the canon’s constituents — fifty-two of them — but also fine-tuned standards of literary merit so that the common reader, “uncorrupted with literary prejudice,” would know what to look for.
In effect, the canon formalized modern literature as a select body of imaginative writings that could stand up to the Greek and Latin texts. Although exclusionary by nature, it was originally intended to impart a sense of unity; critics hoped that a tradition of great writers would help create a national literature. What was the apotheosis of Shakespeare and Milton if not an attempt to show the world that England and not France — especially not France — had produced such geniuses? The canon anointed the worthy and, by implication, the unworthy, functioning as a set of commandments that saved people the trouble of deciding what to read.
The canon — later the canon of Great Books — endured without real opposition for nearly two centuries before antinomian forces concluded that enough was enough. I refer, of course, to that mixed bag of politicized professors and theory-happy revisionists of the 1970s and 1980s — feminists, ethnicists, Marxists, semioticians, deconstructionists, new historicists, and cultural materialists — all of whom took exception to the canon while not necessarily seeing eye to eye about much else. Essentially, the postmodernists were against — well, essentialism. While books were conceived in private, they reflected the ideological makeup of their host culture; and the criticism that gave them legitimacy served only to justify the prevailing social order. The implication could not be plainer: If books simply reinforced the cultural values that helped shape them, then any old book or any new book was worthy of consideration. Literature with a capital L was nothing more than a bossy construct, and the canon, instead of being genuine and beneficial, was unreal and oppressive.
Traditionalists, naturally, were aghast. The canon, they argued, represented the best that had been thought and said, and its contents were an expression of the human condition: the joy of love, the sorrow of death, the pain of duty, the horror of war, and the recognition of self and soul. Some canonical writers conveyed this with linguistic brio, others through a sensitive and nuanced portrayal of experience; and their books were part of an ongoing conversation, whose changing sum was nothing less than the history of ideas. To mess with the canon was to mess with civilization itself.
Although it’s pretty to think that great books arise because great writers are driven to write exactly what they want to write, canon formation was, in truth, a result of the middle class’s desire to see its own values reflected in art. As such, the canon was tied to the advance of literacy, the surging book trade, the growing appeal of novels, the spread of coffee shops and clubs, the rise of reviews and magazines, the creation of private circulating libraries, the popularity of serialization and three-decker novels, and, finally, the eventual takeover of literature by institutions of higher learning.

by Arthur Krystal, Harpers | Read more:
Image: “Two Tall Books,” by Abelardo Morell. Courtesy the artist and Edwynn Houk Gallery, New York City
What if Quality Journalism Isn’t?
To summarize it very quickly: The New York Times has had a ton of success with its digital subscriptions, but despite that, it is facing a continual decline in digital traffic.
And like all other media companies, they blame this on the transformation of formats and a failure to engage digital readers.
They say the solution to this is to develop more digitally focused 'growth' tactics: asking all journalists to submit tweets with every article, being smarter about how the content is presented and featured, and generally optimizing the format for digital. (...)
So why would I subscribe to a newspaper whose product has such little relevance to me as a person?
But wait a minute, I hear you say, this is just in relation to you. If we look at the market as a whole (the mass-market approach), each article is relevant to some percentage of the audience. And you are right. Each article is relevant to a percentage of the whole, but to the individual reader you are not relevant at all.
And this is why newspapers fail. You are built on a business model that only makes sense for a mass market, not for the individual. This is not a winning strategy. Yes, it used to work in the old days of media, but that was a result of scarcity.
Think about this in relation to the world of retail. What type of brand are newspapers really like?
Are newspapers a brand like Nike, Starbucks, Ford, Tesla, GoPro, Converse, or Apple? Or are they more like Walmart, Tesco, or Aldi?
Well, companies like Nike, Starbucks, Tesla and GoPro are extremely focused brands, targeting people with a very specific customer need within a very narrow niche. This is the exact opposite of the traditional newspaper model. Each of Nike's products, for example, is highly valuable to its niche, but not that relevant outside it.
Whereas Walmart and Tesco are mass-market brands that offer a lot of everything in the hope that people might decide to buy something. They trade off relevance for size and convenience.
In other words, newspapers are the supermarkets of journalism. You are not the brands. Each article (your product) has almost zero value, but as a whole, there is always a small percentage of your offering that people need.
That doesn't necessarily sound like a bad thing, but people don't connect with Walmart or Tesco. They don't really care about them, nor are they inspired by what they do.
No matter how hard they try, supermarkets with a mass-market/low-relevancy appeal will never appear on a list of the most 'engaging brands', or on a list of brands that people love.
And this is the essence of the trouble newspapers are facing today. It's not that we now live in a digital world, and that we are behaving in a different way. It's that your editorial focus is to be the supermarket of news.
The New York Times is publishing 300 new articles every single day, and in their Innovation Report they discuss how to surface even more from their archives. This is the Walmart business model.
The problem with this model is that supermarkets only work when visiting the individual brands is too hard to do. That's why we go to supermarkets. In the physical world, visiting 40 different stores just to get your groceries would take forever, so we prefer to only go to one place, the supermarket, where we can get everything... even if most of the other products there aren't what we need.
It's the same with how print newspapers used to work. We needed this one place to go because it was too hard to get news from multiple sources.
But on the internet, we have solved this problem. You can follow as many sources as you want, and it's as easy to visit 1000 different sites as it is to just visit one. Everything is just one click away. In fact, that's how people use social media. It's all about the links.
Imagine what would happen to real-world supermarkets, if every brand was just one step away, regardless of what you wanted. Would you still go to a supermarket, knowing that 85% of the products you see would be of no interest to you? Or would you instead turn directly to each brand that you care about?
This is what is happening to the world of news. You are trying to be the supermarket of news, not realizing that this editorial focus is exactly why people are turning away from you.
by Thomas Baekdal, Baekdal.com | Read more:
Image: uncredited
Flowers From Alaska
[ed. Timing is everything (and location, location, location!)]
Peonies—those gorgeous, pastel flowers that can bloom as big as dinner plates—are grown all over the world, but there’s only one place where they open up in July. That’s in Alaska, and ever since a horticulturalist discovered this bit of peony trivia, growers here have been planting the flowers as quickly as they can.
While speaking at a conference in the late 1990s, Pat Holloway, a horticulturalist at the University of Alaska Fairbanks and manager of the Georgeson Botanical Garden, casually mentioned that peonies, which are wildly popular with brides, were among the many flowers that grew in Alaska. After her talk, a flower grower from Oregon found her in the crowd. “He said, ‘You have something no one else in the world has,’” she recalls. “‘You have peonies blooming in July.’”
Realizing the implications of his insight, Holloway planted a test plot at the botanical garden in 2001. “The first year, they just grew beautifully and they looked gorgeous,” says Holloway. She wrote about her blooms in a report and posted it online. To her surprise, a flower broker from England found the reports of her trials and called to order 100,000 peonies a week. Holloway laughed, informing him that she only had a few dozen plants. But she told a few growers around the state, and that was enough to convince several to plant peonies of their own. “And once they started advertising them, they found out—you can sell these,” says Holloway.
It helps that peonies not only survive, but thrive in Alaska. “Up here, the peonies go from breaking through the soil to flowering within four weeks,” says Aaron Stierle, a peony farmer at Solitude Springs Farm in Fairbanks. “That’s half the time it takes anywhere else in the world.” Blooms from Alaska are unusually big, up to eight inches across, from the long hours of sunshine. The state’s harsh climate staves off most diseases and insects. Even moose—one of the state’s most common garden pests—aren’t a threat, as they hate the taste of peonies.
But their biggest advantage, which that Oregon grower was so keen to point out, is in filling a seasonal gap in the global market that could elevate peonies to the status of roses, in an elite club of cut flowers that are available all year long. Flower markets from England to Taiwan are eager to place orders for Alaska’s midsummer beauties and so are brokers from coast to coast here in the states, where Alaska's peonies bloom just in time for late summer weddings. But first, the peony growers in Alaska must endure the early pains of starting a new industry.

by Amy Nordrum, The Atlantic | Read more:
Image: Elizabeth Beks/North Pole Peonies
What’s Up With That: Building Bigger Roads Actually Makes Traffic Worse
As a kid, I used to ask my parents why they couldn’t just build more lanes on the freeway. Maybe transform them all into double-decker highways with cars zooming on the upper and lower levels. Except, as it turns out, that wouldn’t work. Because if there’s anything that traffic engineers have discovered in the last few decades it’s that you can’t build your way out of congestion. It’s the roads themselves that cause traffic.
The concept is called induced demand, which is economist-speak for when increasing the supply of something (like roads) makes people want that thing even more. Though some traffic engineers made note of this phenomenon at least as early as the 1960s, it is only in recent years that social scientists have collected enough data to show how this happens pretty much every time we build new roads. These findings imply that the ways we traditionally go about trying to mitigate jams are essentially fruitless, and that we’d all be spending a lot less time in traffic if we could just be a little more rational.
But before we get to the solutions, we have to take a closer look at the problem. In 2009, two economists—Matthew Turner of the University of Toronto and Gilles Duranton of the University of Pennsylvania—decided to compare the amount of new roads and highways built in different U.S. cities between 1980 and 2000, and the total number of miles driven in those cities over the same period.
“We found that there’s this perfect one-to-one relationship,” said Turner.
If a city had increased its road capacity by 10 percent between 1980 and 1990, then the amount of driving in that city went up by 10 percent. If the amount of roads in the same city then went up by 11 percent between 1990 and 2000, the total number of miles driven also went up by 11 percent. It’s like the two figures were moving in perfect lockstep, changing at the same exact rate.
Now, correlation doesn’t mean causation. Maybe traffic engineers in U.S. cities happen to know exactly the right amount of roads to build to satisfy driving demand. But Turner and Duranton think that’s unlikely. The modern interstate network mostly follows the plan originally conceived by the federal government in 1947, and it seems incredibly coincidental that road engineers at the time could have successfully predicted driving demand more than half a century in the future.
A more likely explanation, Turner and Duranton argue, is what they call the fundamental law of road congestion: New roads will create new drivers, resulting in the intensity of traffic staying the same.
Intuitively, I would expect the opposite: that expanding a road network works like replacing a small pipe with a bigger one, allowing the water (or cars) to flow better. Instead, it’s like the larger pipe is drawing more water into itself. The first thing you wonder here is where all these extra drivers are coming from. I mean, are they just popping out of the asphalt as engineers lay down new roads?
The answer has to do with what roads allow people to do: move around. As it turns out, we humans love moving around. And if you expand people’s ability to travel, they will do it more, living farther away from where they work and therefore being forced to drive into town. Making driving easier also means that people take more trips in the car than they otherwise would. Finally, businesses that rely on roads will swoop into cities with many of them, bringing trucking and shipments. The problem is that all these things together erode any extra capacity you’ve built into your street network, meaning traffic levels stay pretty much constant. As long as driving on the roads remains easy and cheap, people have an almost unlimited desire to use them.
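[ed. A minimal numerical sketch of that one-to-one relationship, in Python. All of the figures below are invented for illustration; only the proportional scaling reflects Turner and Duranton's finding.]

def congestion(miles_driven, lane_miles):
    # Crude congestion proxy: driving demand per unit of road capacity.
    return miles_driven / lane_miles

lane_miles = 1_000.0         # hypothetical road capacity for a city
miles_driven = 5_000_000.0   # hypothetical daily vehicle-miles traveled

for decade in (1980, 1990, 2000):
    index = congestion(miles_driven, lane_miles)
    print(f"{decade}: capacity {lane_miles:,.0f}, miles driven {miles_driven:,.0f}, congestion index {index:,.0f}")
    # Expand roads by 10 percent; driving grows by the same 10 percent,
    # so the congestion index never improves.
    lane_miles *= 1.10
    miles_driven *= 1.10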
You might think that increasing investment in public transit could ease this mess. Many railway and bus projects are sold on this basis, with politicians promising that traffic will decrease once ridership grows. But the data showed that even in cities that expanded public transit, road congestion stayed exactly the same. Add a new subway line and some drivers will switch to transit. But new drivers replace them. It’s the same effect as adding a new lane to the highway: congestion remains constant. (That’s not to say that public transit doesn’t do good; it also allows more people to move around. These projects just shouldn’t be hyped up as traffic decongestants, say Turner and Duranton.)
by Adam Mann, Wired | Read more:
Image: USGS
Tuesday, June 17, 2014
The Dark Side Of Facebook
On Feb. 10, Jason Fyk received a strange Facebook message.
“Bro.”
The message had been sent by someone who wasn’t his friend on the social network, someone using the alias “Anthony.*” It was a name Fyk had come to know and dread.
Minutes later, the traffic on his website, FunnierPics.net, nosedived. Google Analytics showed the number of active readers drop from 3,000 to zero instantly.
When Fyk, known online as Jason Michaels, clicked over to his company’s Facebook page, WTF Magazine, he found another message from Anthony.
“Site’s down :(.”
Fyk’s business was under attack, and not for the first time. He’d spent the past few years locked in ferocious virtual combat over his Facebook pages, battling a shadowy group of adversaries that he and his friends call Script Kiddies, on the assumption that they're young hackers who exploit low-level vulnerabilities on others' sites.
Fyk said he received this Facebook wall post right as his site was crashing.
Anthony prefers the name the Community, and he readily admits — albeit communicating only under a pseudonym — that the group’s activities include hijacking valuable Facebook pages for fun and viral fame. (Meanwhile, Anthony and his cohorts refer to the WTF team as the Neckbeards.)
One of Fyk’s employees quickly determined that FunnierPics.net was under a distributed denial-of-service (DDoS) reflection attack. When Fyk’s team contacted the host, GoDaddy, they learned an estimated 70,000 servers had gone dead, resulting in more than 1 million customers losing web service. Fyk’s IP address, GoDaddy confirmed, was the attackers’ target. The others were collateral damage.
“Imagine the World Wide Web is like a six-lane highway, and each exit is its own server,” Fyk said. “And one of the exits is my server.” The attack sent so much traffic up the road to Fyk’s exit that every exit preceding it became jammed as well.
And the waves of bots were still coming.
Within 16 hours, Fyk’s team got his site working again, but not before they’d lost $15,000 in ad revenue. Since then, his company has been subjected to a number of similar attacks, and one of Fyk’s most valuable Facebook pages, an MTV fan page with 1.3 million fans, has been hijacked, stolen by a user who exploited a security glitch.
Fyk, 40, is a self-made millionaire who’s built his fortune almost entirely on Facebook. It’s a rewarding business but not without its challenges. Not only must he play a constant game of cat and mouse with hackers and digital thieves but he must do so on a field of battle that is constantly shifting because of Facebook’s habit of routinely — and mysteriously — tweaking its algorithm.
“It’s legitimately a cyber war,” said Fyk, who describes his archenemies as tech-savvy teens who are motivated by boredom. “I make almost a quarter-million dollars a month, so I have to protect what I’m doing. That means if I have to play their kiddie game, I play. I don’t have a choice.”
They may be kiddie games, but they are hardly trivial, having led to physical threats, out-and-out swindling, and run-ins with police.
And while Facebook security monitors for suspicious behavior, digital theft seems to be running rampant.

by Alyson Shontell, Business Insider | Read more:
Image: Business Insider
More Corporations Using Tag And Release Programs To Study American Consumers
In an effort to more closely observe the group’s buying habits and personal behaviors, a growing number of corporations are turning to tag and release programs to study American consumers, sources confirmed Friday.
According to reports, multinationals such as Kraft, General Electric, Goodyear, and Apple have embraced the technique of tracking down potential customers in their natural habitats of department stores and supermarkets, forcibly tranquilizing them as they shop, and then fitting them with electronic tracking devices that allow marketing departments to keep a detailed record of individuals’ every movement and purchasing decision.
“In recent weeks, we have employed our tag and release initiative to sedate and earmark consumers in several Costco parking lots and Best Buy television aisles, which has already yielded valuable data from numerous middle-class family units,” said Sony market researcher Nathan McElroy, whose team gathers data on the consumer population by attaching radio-transponder collars to specimens across all age groups and income levels. “Today we subdued and chipped a beautiful white male earning $60,000 annually whose subsequent actions—where he eats, where he works, whether he purchases extended warranties on electronic devices—will give us important insights into his demographic.”
“We’re really starting to get a clear idea of just what sales promotions and big-ticket expenditures make these fascinating creatures tick,” he continued.
Representatives from several Fortune 500 companies described to reporters a delicate process in which marketing associates journey to such varied field sites as Marshalls, OfficeMax, and Bed Bath & Beyond, where they lie in wait behind a row of shopping carts or a promotional cardboard cutout. Once a desirable target moves into view, a member of the marketing team reportedly attempts to immobilize it by firing a tranquilizer dart into its neck or haunches before it can panic and skitter off into another aisle. The unconscious consumer is then fitted with a small, subdermal acoustic tag that is synced to the subject’s credit cards, allowing marketers to both physically and financially track their quarries.
Claiming that every effort is taken to employ humane handling procedures and inflict minimal trauma, marketing associates stressed that consumers always wake up in the same clothing department or mini mall in which they were found, and most obliviously resume their browsing of store shelves within 30 minutes of being sedated.

by The Onion | Read more:
Image: uncredited
The Disruption Machine
Every age has a theory of rising and falling, of growth and decay, of bloom and wilt: a theory of nature. Every age also has a theory about the past and the present, of what was and what is, a notion of time: a theory of history. Theories of history used to be supernatural: the divine ruled time; the hand of God, a special providence, lay behind the fall of each sparrow. If the present differed from the past, it was usually worse: supernatural theories of history tend to involve decline, a fall from grace, the loss of God’s favor, corruption. Beginning in the eighteenth century, as the intellectual historian Dorothy Ross once pointed out, theories of history became secular; then they started something new—historicism, the idea “that all events in historical time can be explained by prior events in historical time.” Things began looking up. First, there was that, then there was this, and this is better than that. The eighteenth century embraced the idea of progress; the nineteenth century had evolution; the twentieth century had growth and then innovation. Our era has disruption, which, despite its futurism, is atavistic. It’s a theory of history founded on a profound anxiety about financial collapse, an apocalyptic fear of global devastation, and shaky evidence.
Most big ideas have loud critics. Not disruption. Disruptive innovation as the explanation for how change happens has been subject to little serious criticism, partly because it’s headlong, while critical inquiry is unhurried; partly because disrupters ridicule doubters by charging them with fogyism, as if to criticize a theory of change were identical to decrying change; and partly because, in its modern usage, innovation is the idea of progress jammed into a criticism-proof jack-in-the-box.
The idea of progress—the notion that human history is the history of human betterment—dominated the world view of the West between the Enlightenment and the First World War. It had critics from the start, and, in the last century, even people who cherish the idea of progress, and point to improvements like the eradication of contagious diseases and the education of girls, have been hard-pressed to hold on to it while reckoning with two World Wars, the Holocaust and Hiroshima, genocide and global warming. Replacing “progress” with “innovation” skirts the question of whether a novelty is an improvement: the world may not be getting better and better but our devices are getting newer and newer.
The word “innovate”—to make new—used to have chiefly negative connotations: it signified excessive novelty, without purpose or end. Edmund Burke called the French Revolution a “revolt of innovation”; Federalists declared themselves to be “enemies to innovation.” George Washington, on his deathbed, was said to have uttered these words: “Beware of innovation in politics.” Noah Webster warned in his dictionary, in 1828, “It is often dangerous to innovate on the customs of a nation.”
The redemption of innovation began in 1939, when the economist Joseph Schumpeter, in his landmark study of business cycles, used the word to mean bringing new products to market, a usage that spread slowly, and only in the specialized literatures of economics and business. (In 1942, Schumpeter theorized about “creative destruction”; Christensen, retrofitting, believes that Schumpeter was really describing disruptive innovation.) “Innovation” began to seep beyond specialized literatures in the nineteen-nineties, and gained ubiquity only after 9/11. One measure: between 2011 and 2014, Time, the Times Magazine, The New Yorker, Forbes, and even Better Homes and Gardens published special “innovation” issues—the modern equivalents of what, a century ago, were known as “sketches of men of progress.”
The idea of innovation is the idea of progress stripped of the aspirations of the Enlightenment, scrubbed clean of the horrors of the twentieth century, and relieved of its critics. Disruptive innovation goes further, holding out the hope of salvation against the very damnation it describes: disrupt, and you will be saved.
Disruptive innovation as a theory of change is meant to serve both as a chronicle of the past (this has happened) and as a model for the future (it will keep happening). The strength of a prediction made from a model depends on the quality of the historical evidence and on the reliability of the methods used to gather and interpret it. Historical analysis proceeds from certain conditions regarding proof. None of these conditions have been met.

by Jill Lepore, New Yorker | Read more:
Image: Brian Stauffer
Monday, June 16, 2014
The Scent(s) of a Woman
[ed. I've been partial to Oh! de London, ever since high school (for good reasons!). I hear there are re-manufactured versions out now, but nothing like the original.]
Long before I began to learn about it, I was attracted to the idea of perfume. Unlike lipstick, scent changes in contact with each individual, so finding the right one represents a real feat. This might be why people adopt a “signature scent”—it’s so much effort to find one that works with your body. (Michelle Obama apparently smells like cherries. Virginia Woolf is supposed to have smelled like woodsmoke and apples.) And unlike a pair of high heels, perfume doesn’t hobble the newbie (unless scent gives you migraines). Perfume seemed part and parcel of womanhood—its nature, invisible but sweet, sums up the expectations for women’s behavior through most of history—but the existence of cologne and aftershave blurs gender lines. It isn’t just women who want to smell good. It’s people.
But while perfume was especially enticing, it was also particularly confusing. Sephora sells nearly 500 perfume varietals, while sites like The Perfumed Court stock thousands, an overwhelming array of choice. Niche stores like New York’s Bond No. 9—with fewer than fifty scents—weed out the objectively bad ones: celebrity scents made to smell like Jennifer Aniston’s childhood or Jennifer Lopez’s last love affair, or largely reviled fragrances like Clinique Aromatics Elixir, described by one reviewer as smelling of “cats, mothballs, and fruitcakes.” But such selective stores tend to be wildly expensive and intimidating for the novitiate. You have to know something about perfume to even know they exist.
Needing a push, I mentioned my interest in perfume to one of my bosses, a stylish but intellectual woman whom I respect. It was awkward to talk about, but when trying new things, in the words of Grace Paley, “it’s as though you have to be artificial at first.”
My boss encouraged me to look into it, supplying links to a few perfume websites. I thanked her but told her I wouldn’t know where to begin: everything had too many reviews, all of which seemed conflicting, most written in a language I didn’t understand. What were top notes? What were bergamot and chypre? How was I supposed to know what constituted a long life, perfume-wise?
Eventually, that same boss sent me an enormous book called Perfumes: The Guide, by scent experts Luca Turin (also a biophysicist) and Tania Sanchez. Their prose is acerbic and witty and damn good as they tour perfume history and basic terminology, reviewing almost 1,500 scents. A book like this was the ideal solution, allaying my fear that wanting some of the trappings of womanhood (sounding too much, to my nervously feminist ear, like “the trap” of womanhood) was a shallow, regressive goal. I read it on the train—surrounded by the far less pleasant scents of the subway—and felt saved: I was attending a womanhood seminar of one.
by Autumn Whitefield-Madrano, TNI | Read more:
Image: uncredited
Sunday, June 15, 2014
The End
[ed. Another perspective on suicidal ideation.]

Or so the line of thinking goes.
In Édouard LevĂ©’s short novel Suicide, the suicide comes first.
One Saturday in the month of August, you leave your home wearing your tennis gear, accompanied by your wife. In the middle of the garden, you point out to her that you’ve forgotten your racket in the house. You go back to look for it, but instead of making your way to the cupboard in the entryway where you normally keep it, you head down into the basement. Your wife doesn’t notice this. She stays outside. The weather is fine. She’s making the most of the sun. A few moments later she hears a gunshot. She rushes into the house, cries out your name, notices that the door to the stairway leading down to the basement is open, goes down and finds you there. You’ve put a bullet in your head with the rifle you had carefully prepared. On the table, you left a comic book open to a double page spread. In the heat of the moment, your wife leans on the table; the book falls closed before she realizes that this was your final message.

Suicide is written mostly in the second person. Sometimes, though, the narrator refers to himself, and Suicide toggles back and forth between these two pronouns: the “I” of the narrator and “you,” the friend who committed suicide. This makes it feel like a letter, a letter from one childhood friend to another, regarding the latter’s suicide at the age of 25, twenty years ago. The separation between “I” and “you” often blurs. Each friend becomes a double, is defined by the other and, in turn, reflects the other. We learn that “you” died young. You studied economics; your childhood home was a chateau. You took photographs and read the dictionary. You were a virtuoso on the drums, playing solos in your basement for hours. You felt yourself ill adapted to the world, surprised that the world had produced a being who lives in it as a foreigner. You traveled to “taste the pleasures of being a stranger in a strange town.” You liked to be anonymous, a silent listener, a mobile voyeur. Eventually, you stopped traveling, preferring to be at home.
You were fascinated by the destitute and the morbidly old. Perhaps this is what you feared — to become the living dead, to commit suicide in slow motion. “You were a perfectionist,” the narrator writes.

You were such a perfectionist that you wanted to perfect perfecting. But how can one judge whether perfection has been attained? … Your taste for the perfect bordered on madness…

“What was difficult for you,” writes the narrator, “wasn’t beginning or continuing but finishing… sometimes, weary of perfecting perfections, you would abandon your work without destroying or finishing it… instead of finishing the works you undertook, you finished yourself.”
Perfectionism and the fear of finishing go hand in hand. Perfectionism is a form of possession. When something is finished, it cannot be possessed; it no longer belongs to us. In the constant struggle to achieve the unachievable, whether in work or in life, the perfectionist is without definitions or limits. Suicide is the infinite preservation of this state of freedom. Thus, “your” suicide and your perfectionism go hand in hand as well. This is not to say that perfectionism is a death impulse; it is the opposite. Perfectionism is the attempt to keep death at bay, to keep everything unfinished. But a lifetime of disappointment, of accomplishing nothing, of suffocating under the burden of possessing yourself with the panicked grip of a drowning man, made your life feel, ironically, foreign. You thought that your perfectionism would allow you to control your life. Instead, life controlled you. In suicide, you could be free again.
by Stefany Anne Golberg, The Smart Set | Read more:
Image: Édouard Manet via Wikipedia