Friday, June 30, 2017
Solving the Heroin Overdose Mystery
How small doses can kill.
Heroin, like other opiates, depresses activity in the brain centre that controls breathing. Sometimes, this effect is so profound that the drug user dies, and becomes yet another overdose casualty. Some of these victims die because they took too much of the drug. Others die following self-administration of a dose that appears much too small to be lethal, but why? This is the heroin overdose mystery, and it has been known for more than half a century.
There was a heroin crisis in New York City in the 1960s, with overdose deaths increasing each year of the decade. There were almost 1,000 overdose victims in New York City in 1969, about as many as in 2015. The then chief medical examiner of New York, Milton Helpern, together with his deputy chief, Michael Baden, investigated these deaths. They discovered that many victims had not died of a true pharmacological overdose: often, on the day prior to death, the victim had administered a comparable dose with no ill effects. Helpern, Baden and colleagues noted that, while it is common for several users to take drugs from the same batch, only rarely does more than one user suffer a life-threatening reaction. They examined heroin packages and used syringes found near dead addicts, and tissue surrounding the sites of fatal injections, and found that victims typically self-administered a normal, usually non-fatal dose of heroin. In 1972, Helpern concluded that ‘there does not appear to be a quantitative correlation between the acute fulminating lethal effect and the amount of heroin taken’.
It was a science journalist, Edward Brecher, who first applied the term ‘overdose mystery’ when he evaluated Helpern’s data for Consumer Reports. Brecher concluded that ‘overdose’ was a misnomer. ‘These deaths are, if anything, associated with “underdose” rather than overdose,’ he wrote.
Subsequently, independent evaluations of heroin overdoses in New York City, Washington, DC, Detroit, and various cities in Germany and Hungary all confirmed the phenomenon – addicts often die after self-administering an amount of heroin that should not kill them.
Most scholarly articles concerning heroin overdose don’t mention the mystery; it is simply assumed that the victim died because he or she administered too much opiate. Even when the mystery is addressed, the explanations are wanting. For example, some have suggested that deaths seen after self-administration of a usually non-lethal dose of heroin result from an allergic-type reaction to additives, such as quinine, sometimes used to bulk up its street package. This interpretation has been discredited.
Others have noted that the effect of a small dose of heroin is greatly enhanced if the addict administers other depressant drugs (such as alcohol) with heroin. Although some cases of overdose can result from such drug interactions, many cases do not.
Some have suggested that the addict might overdose following a period of abstinence, either self-initiated or caused by imprisonment. Thus, tolerance that accumulated during a prolonged period of drug use, and which would be expected to protect the addict from the lethal effect of the drug, could dissipate during the drug-free period. If the addict goes back to his or her usual, pre-abstinence routine, the formerly well-tolerated dose could now be lethal.
But there are many demonstrations that opiate tolerance typically does not substantially dissipate merely with the passage of time. One piece of evidence comes from the addict’s hair, which carries a record of drug use. Many drugs, and drug metabolites, diffuse from the bloodstream into the growing hair shaft; thus, researchers can reconstruct this pharmacological record, including periods of abstinence, using ‘segmental hair analysis’. In a study that analysed the hair of 28 recently deceased heroin-overdose victims in Stockholm, there was no evidence that they had been abstinent prior to death.
A surprising solution to the overdose mystery has been provided by the testimony of addicts who overdosed, then survived to tell the tale. (Overdose is survivable if the antidote, an opiate antagonist, such as naloxone, is administered in a timely manner.) What do these survivors say was special about their experience? In independent studies, in New Jersey and in Spain, most overdose survivors said that they’d administered heroin in a novel or unusual environment – a place where they had not previously administered heroin.
by Shepard Siegel, Aeon | Read more:
Image: Bill Eppridge
Disrupt the Citizen
The ouster of Travis Kalanick last week brings to an end nearly a year of accumulating scandal at Uber. The company—its specious claims to being a world-beating disruptor significantly weakened—now joins Amazon as one of the more frightening entities of our time, with Kalanick taking his place among Elizabeth Holmes, Jeff Bezos, Martin Shkreli, and the late Steve Jobs in the burgeoning pantheon of tech sociopaths. Few moments in history have been so crowded with narcissists: incapable of acknowledging the existence of others, unwilling to permit state and civil society—with their strange, confusing, downright offensive cult of taxes, regulations and public services—to impede their quest for monopolizing the mind, muscles, heart rate, and blood of every breathing person on earth. The Mormons, with their registries of the unsaved, have beaten Silicon Valley to the hosts of the dead—but it’s safe to assume that this, too, will not last. (...)
In the same vein, the proliferating but ever meaningless distinctions between the “bad” Uber and the “good” Lyft have obscured how destructive the rise of ride-sharing has been for workers and the cities they live in. The predatory lawlessness that prevails inside Valley workplaces scales up and out. Both companies entered their markets illegally, without regard to prevailing wages, regulations, or taxes. As with Amazon, which found a way to sell books without sales tax, this turned out to be one of the many boons of lawbreaking. (...)
But lying and rule-breaking to gain a monopoly are old news in liberal capitalism. What ride-sharing companies had to do, in the old spirit of Standard Oil, was secure a foothold in politics, and subject politics to the will of “the consumer.” In a telling example of our times, Uber hired former Obama campaign head David Plouffe to work the political angles. And Plouffe has succeeded wildly, since—as Washingtonians and New Yorkers are experiencing with their subways—municipal and state liberals are only nominally committed to the standards that regulate transport. Never mind that traffic is something that cities need to control, and that transportation should be a public good. Ride-sharing companies—which explode traffic and undermine public transportation—can trim the balance sheets of cities by privatizing both. The choice we make should be between unchecked ride-sharing and fully funded mass transit. Instead, the success of ride-sharing means that we choose between Uber and Lyft.
What Plouffe and the ride-sharing companies understand is that, under capitalism, when markets are pitted against the state, the figure of the consumer can be invoked against the figure of the citizen. Consumption has in fact come to replace our original ideas of citizenship. As the sociologist Wolfgang Streeck has argued in his exceptional 2012 essay, “Citizens as Customers,” the government encouragement of consumer choice in the 1960s and ’70s “radiated” into the public sphere, making government seem shabby in comparison with the endlessly attractive world of consumer society. Political goods began to get judged by the same standards as commodities, and were often found wanting.
The result is that, in Streeck’s prediction, the “middle classes, who command enough purchasing power to rely on commercial rather than political means to get what they want, will lose interest in the complexities of collective preference-setting and decision-making, and find the sacrifices of individual utility required by participation in traditional politics no longer worthwhile.” The affluent, bored by goods formerly subject to collective provision, such as public transportation, cease to pay for them and support private options instead. Consumer choice then stands in for political choice. When Ohio governor John Kasich proposed last year that he would “Uber-ize” the state’s government, he was appealing to this sense that politics should more closely resemble the latest trends in consumption.
by Nikil Saval, N+1 | Read more:
Image: uncredited
Thursday, June 29, 2017
Greetings, E.T. (Please Don’t Murder Us.)
On Nov. 16, 1974, a few hundred astronomers, government officials and other dignitaries gathered in the tropical forests of Puerto Rico’s northwest interior, a four-hour drive from San Juan. The occasion was a rechristening of the Arecibo Observatory, at the time the largest radio telescope in the world. The mammoth structure — an immense concrete-and-aluminum saucer as wide as the Eiffel Tower is tall, planted implausibly inside a limestone sinkhole in the middle of a mountainous jungle — had been upgraded to ensure its ability to survive the volatile hurricane season and to increase its precision tenfold.
To celebrate the reopening, the astronomers who maintained the observatory decided to take the most sensitive device yet constructed for listening to the cosmos and transform it, briefly, into a machine for talking back. After a series of speeches, the assembled crowd sat in silence at the edge of the telescope while the public-address system blasted nearly three minutes of two-tone noise through the muggy afternoon heat. To the listeners, the pattern was indecipherable, but somehow the experience of hearing those two notes oscillating in the air moved many in the crowd to tears.
That 168 seconds of noise, now known as the Arecibo message, was the brainchild of the astronomer Frank Drake, then the director of the organization that oversaw the Arecibo facility. The broadcast marked the first time a human being had intentionally transmitted a message targeting another solar system. The engineers had translated the missive into sound, so that the assembled group would have something to experience during the transmission. But its true medium was the silent, invisible pulse of radio waves, traveling at the speed of light.
It seemed to most of the onlookers to be a hopeful act, if a largely symbolic one: a message in a bottle tossed into the sea of deep space. But within days, England’s Astronomer Royal, Martin Ryle, released a thunderous condemnation of Drake’s stunt. By alerting the cosmos to our existence, Ryle wrote, we were risking catastrophe. Arguing that ‘‘any creatures out there [might be] malevolent or hungry,’’ Ryle demanded that the International Astronomical Union denounce Drake’s message and explicitly forbid any further communications. It was irresponsible, Ryle fumed, to tinker with interstellar outreach when such gestures, however noble their intentions, might lead to the destruction of all life on earth.
Today, more than four decades later, we still do not know if Ryle’s fears were warranted, because the Arecibo message is still eons away from its intended recipient, a cluster of roughly 300,000 stars known as M13. If you find yourself in the Northern Hemisphere this summer on a clear night, locate the Hercules constellation in the sky, 21 stars that form the image of a man, arms outstretched, perhaps kneeling. Imagine hurtling 250 trillion miles toward those stars. Though you would have traveled far outside our solar system, you would only be a tiny fraction of the way to M13. But if you were somehow able to turn on a ham radio receiver and tune it to 2,380 MHz, you might catch the message in flight: a long series of rhythmic pulses, 1,679 of them to be exact, with a clear, repetitive structure that would make them immediately detectable as a product of intelligent life. (...)
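[ed. One detail worth unpacking: accounts of the Arecibo message widely report that the pulse count was no accident. 1,679 is semiprime, factoring only as 23 × 73, so a recipient who lays the pulses out in a 73-by-23 grid recovers a crude bitmap. A minimal sketch of that arithmetic (the grid-decoding rationale is background knowledge, not spelled out in the excerpt above):

def factor_pairs(n):
    # Return all (rows, cols) with rows * cols == n and 1 < rows <= cols.
    return [(r, n // r) for r in range(2, int(n ** 0.5) + 1) if n % r == 0]

pulses = 1679
print(factor_pairs(pulses))  # [(23, 73)] -- only one non-trivial grid fits]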
Now this taciturn phase may be coming to an end, if a growing multidisciplinary group of scientists and amateur space enthusiasts have their way. A newly formed group known as METI (Messaging Extra Terrestrial Intelligence), led by the former SETI scientist Douglas Vakoch, is planning an ongoing series of messages to begin in 2018. And Milner’s Breakthrough Listen endeavor has also promised to support a ‘‘Breakthrough Message’’ companion project, including an open competition to design the messages that we will transmit to the stars. But as messaging schemes proliferate, they have been met with resistance. The intellectual descendants of Martin Ryle include luminaries like Elon Musk and Stephen Hawking, and they caution that an assumption of interstellar friendship is the wrong way to approach the question of extraterrestrial life. They argue that an advanced alien civilization might well respond to our interstellar greetings with the same graciousness that Cortés showed the Aztecs, making silence the more prudent option. (...)
Before Doug Vakoch had even filed the papers to form the METI nonprofit organization in July 2015, a dozen or so science-and-tech luminaries, including SpaceX’s Elon Musk, signed a statement categorically opposing the project, at least absent extensive further discussion on a planetary scale. ‘‘Intentionally signaling other civilizations in the Milky Way Galaxy,’’ the statement argued, ‘‘raises concerns from all the people of Earth, about both the message and the consequences of contact. A worldwide scientific, political and humanitarian discussion must occur before any message is sent.’’
One signatory to that statement was the astronomer and science-fiction author David Brin, who has been carrying on a spirited but collegial series of debates with Vakoch over the wisdom of his project. ‘‘I just don’t think anybody should give our children a fait accompli based on blithe assumptions and assertions that have been untested and not subjected to critical peer review,’’ he told me over a Skype call from his home office in Southern California. ‘‘If you are going to do something that is going to change some of the fundamental observable parameters of our solar system, then how about an environmental-impact statement?’’
The anti-METI movement is predicated on a grim statistical likelihood: If we do ever manage to make contact with another intelligent life-form, then almost by definition, our new pen pals will be far more advanced than we are. The best way to understand this is to consider, on a percentage basis, just how young our own high-tech civilization actually is. We have been sending structured radio signals from Earth for only the last 100 years. If the universe were exactly 14 billion years old, then it would have taken 13,999,999,900 years for radio communication to be harnessed on our planet. The odds that our message would reach a society that had been tinkering with radio for a shorter, or even similar, period of time would be staggeringly long. Imagine another planet that deviates from our timetable by just a tenth of 1 percent: If they are more advanced than us, then they will have been using radio (and successor technologies) for 14 million years. Of course, depending on where they live in the universe, their signals might take millions of years to reach us. But even if you factor in that transmission lag, if we pick up a signal from another galaxy, we will almost certainly find ourselves in conversation with a more advanced civilization.
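[ed. The back-of-envelope numbers in that paragraph check out; a quick sketch:

# Checking the article's figures: a civilization whose timing deviates from
# ours by a tenth of 1 percent has a 14-million-year head start.
universe_age = 14_000_000_000    # years, the article's round figure
radio_age = 100                  # years humans have used structured radio
print(universe_age - radio_age)  # 13999999900, the wait before radio appeared
print(universe_age * 0.001)      # 14000000.0 -> 14 million years of deviation]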
It is this asymmetry that has convinced so many future-minded thinkers that METI is a bad idea. The history of colonialism here on Earth weighs particularly heavily on the imaginations of the METI critics. Stephen Hawking, for instance, made this observation in a 2010 documentary series: ‘‘If aliens visit us, the outcome would be much as when Columbus landed in America, which didn’t turn out well for the Native Americans.’’ David Brin echoes the Hawking critique: ‘‘Every single case we know of a more technologically advanced culture contacting a less technologically advanced culture resulted at least in pain.’’
by Steven Johnson, NY Times Magazine | Read more:
Image: Paul Sahre
[ed. If you find this topic interesting, I'd also suggest The Three-Body Problem by Liu Cixin.]
The Botanists’ Last Stand
Steve Perlman doesn’t take Prozac, like some of the other rare-plant botanists he knows. Instead, he writes poetry.
Either way, you have to do something when a plant you’ve long known goes extinct. Let’s say for 20 years you’ve been observing a tree on a fern-covered crag thousands of feet above sea level on an island in the Pacific. Then one day you hike up to check on the plant and find it dying. You know it’s the last one of its species, and that you’re the only witness to the end of hundreds of thousands of years of evolution, the snuffing out of a line of completely unique genetic material. You might have to sit down and write a poem. Or at least bring a bit of the dead plant to a bar and raise a beer to its life. (Perlman has done both.) You might even need an antidepressant.
“I’ve already witnessed about 20 species go extinct in the wild,” Perlman says. “It can be like you’re dealing with your friends or your family, and then they die.”
Perlman tells me this as we drive up a winding road on the northwestern edge of Kauai, the geologically oldest Hawaiian island. Perlman is 69, with a sturdy build and white hair, and that constitution has been enough to last him 45 years and counting on the knife’s edge of extreme botany.
The stakes are always high: As the top botanist at Hawaii’s Plant Extinction Prevention Program (PEPP), Perlman deals exclusively in plants with 50 or fewer individuals left—in many cases, much fewer, maybe two or three. Of the 238 species currently on that list, 82 are on Kauai; Perlman literally hangs off cliffs and jumps from helicopters to reach them.
Without him, rare Hawaiian plants die out forever. With him, they at least have a shot. Though now, due to forces beyond Perlman’s control, even that slim hope of survival is in jeopardy. Looming budget cuts threaten to make this the final chapter not only in the history of many native Hawaiian species, but in the program designed to keep them alive.
The silver lining: even if a species does go extinct in the wild, chances are Perlman has already collected enough seeds and genetic material before the last plant disappeared to grow others in a greenhouse. Extra seeds are shipped to a seed bank, where they sit, dehydrated and chilled, awaiting a more hospitable future. There may not be a viable habitat for that plant now, but what about in 50 years? Or 150? “Part of it is saving all that genetic information,” he says. “If your house is on fire, you run in and grab the kid.”
Most people probably wouldn’t speak about obscure threatened plants with this much regard. But we don’t necessarily know what we’re losing when we let a plant species die, Perlman says. Could it have been a source of medicine? Could it be supporting a food chain that will come tumbling down in its stead? Our foresight on this kind of thing has been abominable so far; one only has to look at what happened when wolves were driven out of Yellowstone National Park, only to cause a massive boom in the newly predator-free elk population, which in turn ate every plant and baby tree in sight, starving bears of their berry supply, birds of their nest sites, and bees of flowers to feed on.
Everything was beautiful, and nothing hurt
Every native plant on Kauai is an insane stroke of luck and chance. Each species arrived on the island as a single seed floating at sea or flying in a bird’s belly from thousands of miles away—2,000 miles of open ocean sit between Kauai and the nearest continent. “We think…probably one or two seeds made it every 1,000 years,” says botanist Ken Wood, Perlman’s longtime field partner.
Once a seed took root, the plant would evolve into a completely new species, or several, all of which came to be “endemic,” or found exclusively on the island. Any defenses the plant’s predecessors may have had—thorns, or poison, or repellent scents—were completely dropped. No large mammals or other potential predators made the journey from mainland to the remote island chain. From the plant’s perspective, there was no reason to spend energy on defenses when there were no predators to fend off. So stinging nettles no longer stung. Mint lost its mint oil. Scientists ominously refer to this process as species becoming “naive.”
The same was true for animals like birds and insects when they began to arrive. Famously, when a species of duck made it to the Hawaiian islands, it evolved to drop the concept of flying altogether. Its wings became little nubs. After all, there were no large mammals around to fly away from. The bird grew very large; “gigantism” is an evolutionary phenomenon common to islands. Predictably, this huge, flightless duck, known as the “moa-nalo,” went extinct once humans, who likely found it an easy-to-catch source of meat, showed up.
Fatal naiveté
When plants are allowed to evolve without fear, they get really, really specific. Take the Hibiscadelphus, for example. Found only in Hawaii, members of this genus of plant have flowers custom-shaped to fit the hooked beak of the honeycreeper, the specific bird that pollinates them. “They’re extremely rare. There were only about seven species described ever, and six were already extinct when I found a new one,” says Perlman. He published the discovery in 2014—it was his 50th new plant species discovery.
Almost 15% of the plants of Hawaii evolved to have separate male and female populations—a very high percentage, says Wood, compared to mainland plants. Under normal circumstances, that trait is good for island plants: it forces them to cross-pollinate, keeping the gene pool relatively diverse even if the population is small. But by “small,” evolutionary forces were probably thinking at least 200 individuals—not four or five. When you can count the number of individual plants on one hand, it’s almost certain that the few remaining males and females won’t be anywhere near each other. In those cases, Perlman and Wood painstakingly gather pollen from the males and bring it to the females.
They have to time this just right—or at least try. There is no perfect math to predict what day an individual plant will decide to flower. “And often you need to dangle off helicopters to get to them,” Wood adds. So missing the mark by a day or two and arriving to a flower that is still closed can mean having leapt from a helicopter and rappelled off a cliff and possibly camped for a day or two for naught.
“That’s what Ken doesn’t like—he likes to go in and go out,” Perlman tells me later. He proudly points to a photo on his laptop screen. It shows him collecting seeds from the last-known member of the endemic fan palm species Pritchardia munroi. The palm was clinging to a slope 2,000 feet up in the air on the tiny Hawaiian island of Molokai. “I had to go there three times to get the seed when it’s ripe,” Perlman says.
by Zoë Schlanger, Quartz | Read more:
Image: Steve Perlman
How to Stop Worrying and Learn to Love the Internet
I suppose earlier generations had to sit through all this huffing and puffing with the invention of television, the phone, cinema, radio, the car, the bicycle, printing, the wheel and so on, but you would think we would learn the way these things work, which is this:
1) everything that’s already in the world when you’re born is just normal;
2) anything that gets invented between then and before you turn thirty is incredibly exciting and creative and with any luck you can make a career out of it;
3) anything that gets invented after you’re thirty is against the natural order of things and the beginning of the end of civilisation as we know it until it’s been around for about ten years when it gradually turns out to be alright really.
Apply this list to movies, rock music, word processors and mobile phones to work out how old you are. (...)
Because the Internet is so new we still don’t really understand what it is. We mistake it for a type of publishing or broadcasting, because that’s what we’re used to. So people complain that there’s a lot of rubbish online, or that it’s dominated by Americans, or that you can’t necessarily trust what you read on the web. Imagine trying to apply any of those criticisms to what you hear on the telephone. Of course you can’t ‘trust’ what people tell you on the web any more than you can ‘trust’ what people tell you on megaphones, postcards or in restaurants. Working out the social politics of who you can trust and why is, quite literally, what a very large part of our brain has evolved to do. For some batty reason we turn off this natural scepticism when we see things in any medium which require a lot of work or resources to work in, or in which we can’t easily answer back – like newspapers, television or granite. Hence ‘carved in stone.’ What should concern us is not that we can’t take what we read on the internet on trust – of course you can’t, it’s just people talking – but that we ever got into the dangerous habit of believing what we read in the newspapers or saw on the TV – a mistake that no one who has met an actual journalist would ever make. One of the most important things you learn from the internet is that there is no ‘them’ out there. It’s just an awful lot of ‘us’.
Douglas Adams, How to Stop Worrying and Learn to Love the Internet, written in 1999 | Read more:
Wednesday, June 28, 2017
The Bespoke High
I didn’t know I’d ever want a vape. It seemed like getting into magic or CrossFit—a whole production and the mandatory acceptance of an accompanying ethos. But at the time I was susceptible to marketing and there was a display with samples and nifty disposable rubber nubbins that went over the mouth end to keep it hygienic.
I often get overwhelmed purchasing marijuana. Like when you go to Ikea without a game plan. I waffle endlessly. There’s just too much to look at. I understand that top-shelf stuff commands flaunting. (How else to show off the bushiness of the cured flower and clusters of trichomes—those hairy crystalline sprinkles of cannabinoid?) But it’s like explaining music by smell or flavor by dance. I want to know how I’ll feel.
The vapes I bought are made by a company called hmbldt. There are six hmbldt formulations on the market and they’re labeled according to what they do. I got Sleep, the one for sleep, and Calm, for in case my rush-hour Lyft driver was chatty (in L.A. they’re always chatty!). They’re disposable, which might be appalling given their staggeringly, demoralizingly expensive price tag of $100 a pop. It means that you’ll need a separate pen for each ailment, but it also means you don’t have to fiddle with cartridges or even flower. I don’t consume cannabis fast enough to keep any denomination of actual buds from becoming petrified and uninviting, and hmbldts have 200 doses, so you can hang on to them for a while.
White, slender with a rounded tip—they’re the vape version of smoking Capri cigarettes and they’re about as long as one but wider. They look, to be honest, as if Muji made a tampon. They take their name (in a very web 2.0-y way) from Humboldt County in Northern California, which evokes marine layer, redwoods and (for those in the know) very good weed from 1996 onwards, when Proposition 215 made growing medical marijuana legal in the Golden State. And probably illegally since before.
Part of my decision was the brevity of the buying experience. No faffing with specials or personal suggestions (which I sometimes love but not always) but mostly it was that these days I’m scared of weed.
The thing is, at my age (mid-30s) a joint is produced with reliable frequency—barbecues, outdoor shows, birthday parties, and even a few picnic-situations where babies are present (provided they’re upwind). Basically any occasion that calls for rosé.
And I like weed. A lot. Enough that I wish I could smoke every vehicle for marijuana that crosses my path. But the last time I took a wee toke of a smoldering cone passed to me by a trusted friend in the spirit of conviviality it took me out of commission for the rest of the day. I couldn’t even speak. I watched my hand lift the joint towards my face and then it was tomorrow.
It’s not news that we’re living in a golden age of legalized marijuana, if golden is to be defined by weed so mighty it renders you catatonic. Two years ago a 19-year-old in Colorado leapt to his death upon eating a pot cookie. Louis C.K. has a bit about how he “didn’t know they’d been working on this shit like it’s the cure for cancer.”
It’s true. Weed is virtually unrecognizable. It’s incredible to think pot’s changed this much. It used to feel low-rent like Boone’s Farm or Whip-Its. But now it’s the recreational drug version of the kid who was a nothing in middle school who becomes God-hot over summer break. To a genetically—celestially—engineered degree that could irradiate you. Weed, frankly, had evolved past my enjoyment of it. Especially if I have a job where one of the requirements is that I show up.
It’s for these reasons that I understand when people aren’t into it. It seems somehow both sleazy and intimidating. On one hand it’s a drug that’s illegal in most parts of the country and on the other, you’ve got luxury brands that are touted as the “Hermès of Marijuana,” and the Beverly Hills Cannabis Club that sells buds that cost as much as their weight in white truffles.
Plus, people who know too much about weed are annoying. Most invitations to smoke are accompanied by a story that serves as a kind (ha) of tax about Sativas or Indicas and how hybrids are the sweet spot and OG Kush or Girl Scout Cookies or else how Alaskan Thunderfuck is a magical journey. It’s like how Pappy Van Winkle bourbon doesn’t become interesting until someone threatens to pour you some. The really inviting thing about hmbldts (and perhaps this is true of most vapes), is that there’s less pressure to share.
The pens are aesthetically pleasing—certainly more so than a handblown glass bong resembling a dragon or those cumbersome oblongs known as box vapes. Each three-second pull doles out exactly 2.25 milligrams, just under 2 milligrams of which is cannabinoids. The vape vibrates to let you know when you’re done. Comparatively, a puff of a joint deploys around 3 milligrams of cannabinoids. (...)
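[ed. Taken with the $100 price and 200-dose capacity mentioned earlier, the dosing numbers imply some simple per-pen economics; a sketch, assuming every rated dose actually gets dispensed:

price_usd = 100.0       # per pen
doses = 200             # rated doses per pen
mg_per_dose = 2.25      # one three-second pull
mg_cannabinoids = 2.0   # "just under 2 milligrams," rounded for illustration
print(price_usd / doses)        # 0.5 -> fifty cents per dose
print(doses * mg_cannabinoids)  # 400.0 -> roughly 400 mg cannabinoids per pen]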
I can report that Sleep is good at sleep. Inducing it and then keeping you under. I did have a wicked weed hangover the next morning (that grogginess of not being quite finished sleeping but running out of time) but eight consecutive hours was a profound relief.
The Bliss pen was pleasant: an all-purpose high, familiar as a Sativa-dominant strain or a “morning weed,” the way Indicas are soporific and considered better at night.
Hmbldt also sells Relief for pain management, Arouse to promote intimacy and Passion for seismic culminations of aforementioned intimacy. If it seems as though it’s overkill or gimmicky that we’d need Arouse and Passion, I’d say I agreed with you. That is until I tried them.
The medicinal properties of marijuana are well known—that it’s effective for alleviating physical discomfort and insomnia, or how CBD (cannabidiol), the lesser-known, non-intoxicating cannabinoid behind the psychoactive THC (tetrahydrocannabinol), is an effective treatment for seizures (cannabinoids are the active agents in marijuana)—but I’m a recreational user. We’re so used to seeing drugs in binary terms—sober or altered—and while intensities differ (nursing a beer vs. any time you think shots are a good idea), we rarely administer a white wine spritzer for headaches or a Long Island Iced Tea for anxiety. Usually it’s blunt-force drinking. A holistic approach to anesthetizing.
But there are benefits to customized formulations that I hadn’t before considered. Because Calm skews heavily toward CBD, you get a body high without any of the mind-altering effects of THC.
“THC activates a system in our own bodies called the endocannabinoid system,” says Igor Grant, the director of The University of California Center for Medicinal Cannabis Research (CMCR) and the chair of the department of psychiatry at the University of California San Diego. The CMCR studies the effects of cannabis on HIV neuropathic pain and how it impairs your driving skills. “[They’re] signaling molecules that have to do with functions as basic as appetite control, inflammation, coordination, memory and other cognitive functions. The effect of THC is to affect these circuitries in the brain. CBD does not appear to have direct psychoactive effects. It doesn’t cause changes in cognitive function or emotions. Or neurologic coordination issues.”
Typical marijuana flower has a THC to CBD ratio of 20 or 40:1. Hmbldt’s Calm inverts that, with a CBD to THC ratio of 10:1; Relief is 2:1. With Calm I don’t experience paranoia—that running commentary of how high I think people think I am. I can even write on it, which makes it unlike any marijuana I’ve ever sampled.
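[ed. A ratio maps directly onto per-pull milligrams. A sketch using the roughly 2 mg cannabinoid dose cited above; treating that 2 mg as entirely CBD plus THC is a simplifying assumption, since the pens contain other cannabinoids too:

def split(total_mg, cbd_parts, thc_parts):
    # Divide a cannabinoid dose according to a CBD:THC ratio.
    parts = cbd_parts + thc_parts
    return (total_mg * cbd_parts / parts, total_mg * thc_parts / parts)

print(split(2.0, 10, 1))  # Calm, 10:1 -> (~1.82 mg CBD, ~0.18 mg THC)
print(split(2.0, 2, 1))   # Relief, 2:1 -> (~1.33 mg CBD, ~0.67 mg THC)]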
There’s a new formulation that hasn’t hit the market called Focus with a CBD to THC ratio of 4:1. It will be blended with cannabinoids that narrow your attention span to the task in front of you without compromising your creative process.
I often get overwhelmed purchasing marijuana. Like when you go to Ikea without a game plan. I waffle endlessly. There’s just too much to look at. I understand that top-shelf stuff commands flaunting. (How else to show off the bushiness of the cured flower and clusters of trichomes—those hairy crystalline sprinkles of cannabinoid?) But it’s like explaining music by smell or flavor by dance. I want to know how I’ll feel.
The vapes I bought are made by a company called hmbldt. There are six hmbldt formulations on the market and they’re labeled according to what they do. I got Sleep, the one for sleep, and Calm, for in case my rush-hour Lyft driver was chatty (in L.A. they’re always chatty!). They’re disposable, which might be appalling given their staggeringly, demoralizingly expensive price tag of $100 a pop. It means that you’ll need a separate pen for each ailment, but it also means you don’t have to fiddle with cartridges or even flower. I don’t consume cannabis fast enough for any denomination of actual buds not to become petrified and uninviting, and hmbldts hold 200 doses, so you can hang on to them for a while.
White and slender with a rounded tip—they’re the vape version of smoking Capri cigarettes, and about as long as one but wider. They look, to be honest, as if Muji made a tampon. They take their name (in a very web-2.0-y way) from Humboldt County in Northern California, which evokes marine layer, redwoods and (for those in the know) very good weed from 1996 onward, when Proposition 215 made growing medical marijuana legal in the Golden State. And probably illegally since before.
Part of my decision was the brevity of the buying experience. No faffing with specials or personal suggestions (which I sometimes love but not always) but mostly it was that these days I’m scared of weed.
The thing is, at my age (mid-30s) a joint is produced with reliable frequency—barbecues, outdoor shows, birthday parties, and even a few picnic-situations where babies are present (provided they’re upwind). Basically any occasion that calls for rosé.
And I like weed. A lot. Enough that I wish I could smoke every vehicle for marijuana that crosses my path. But the last time I took a wee toke of a smoldering cone passed to me by a trusted friend in the spirit of conviviality it took me out of commission for the rest of the day. I couldn’t even speak. I watched my hand lift the joint towards my face and then it was tomorrow.
It’s not news that we’re living in a golden age of legalized marijuana. If golden is to be defined by weed so mighty it renders you catatonic. Two years ago a 19-year-old in Colorado leapt to his death upon eating a pot cookie. Louis C.K. has a bit about how he “didn’t know they’d been working on this shit like it’s the cure for cancer.”
It’s true. Weed is virtually unrecognizable. It’s incredible to think pot’s changed this much. It used to feel low-rent, like Boone’s Farm or Whip-Its. But now it’s the recreational drug version of the kid who was a nothing in middle school who becomes God-hot over summer break. To a genetically—celestially—engineered degree that could irradiate you. Weed, frankly, has evolved past my enjoyment of it. Especially if I have a job where one of the requirements is that I show up.
It’s for these reasons that I understand when people aren’t into it. It seems somehow both sleazy and intimidating. On one hand it’s a drug that’s illegal in most parts of the country and on the other, you’ve got luxury brands that are touted as the “Hermès of Marijuana,” and the Beverly Hills Cannabis Club that sells buds that cost as much as their weight in white truffles.
Plus, people who know too much about weed are annoying. Most invitations to smoke are accompanied by a story that serves as a kind (ha) of tax about Sativas or Indicas and how hybrids are the sweet spot and OG Kush or Girl Scout Cookies or else how Alaskan Thunderfuck is a magical journey. It’s like how Pappy Van Winkle bourbon doesn’t become interesting until someone threatens to pour you some. The really inviting thing about hmbldts (and perhaps this is true of most vapes), is that there’s less pressure to share.
The pens are aesthetically pleasing—certainly more so than a handblown glass bong resembling a dragon or those cumbersome oblongs known as box vapes. Each three-second pull doles out exactly a 2.25-milligram dose, with just under 2 milligrams of cannabinoids. The vape vibrates to let you know when you’re done. Comparatively, a puff of a joint deploys around 3 milligrams of cannabinoids. (...)
I can report that Sleep is good at sleep. Inducing it and then keeping you under. I did have a wicked weed hangover the next morning (that grogginess of not being quite finished sleeping but running out of time) but eight consecutive hours was a profound relief.
The Bliss pen was pleasant. An all-purpose high, familiar as a Sativa-dominant strain or a “morning weed,” the way Indicas are soporific and considered better at night.
Hmbldt also sells Relief for pain management, Arouse to promote intimacy and Passion for seismic culminations of aforementioned intimacy. If it seems as though it’s overkill or gimmicky that we’d need both Arouse and Passion, I’d say I agreed with you. That is, until I tried them.
The medicinal properties of marijuana are well known—that it’s effective for alleviating physical discomfort and insomnia, or how CBD (cannabidiol), the lesser-known, non-intoxicating cannabinoid (cannabinoids being the active agents in marijuana) behind the psychoactive THC (tetrahydrocannabinol), is an effective treatment for seizures—but I’m a recreational user. We’re so used to seeing drugs in binary terms—sober or altered—and while intensities differ (nursing a beer vs. any time you think shots are a good idea), we rarely administer a white wine spritzer for headaches or a Long Island Iced Tea for anxiety. Usually it’s blunt-force drinking. A holistic approach to anesthetizing.
But there are benefits to customized formulations that I hadn’t before considered. Calm skews heavily CBD: you’ve got a body high without any of the mind-altering effects of THC.
“THC activates a system in our own bodies called the endocannabinoid system,” says Igor Grant, the director of the University of California Center for Medicinal Cannabis Research (CMCR) and the chair of the department of psychiatry at the University of California San Diego. The CMCR studies the effects of cannabis on HIV neuropathic pain and on driving impairment. “[They’re] signaling molecules that have to do with functions as basic as appetite control, inflammation, coordination, memory and other cognitive functions. The effect of THC is to affect these circuitries in the brain. CBD does not appear to have direct psychoactive effects. It doesn’t cause changes in cognitive function or emotions. Or neurologic coordination issues.”
Typical marijuana flower has a THC-to-CBD ratio of 20:1 or 40:1. Hmbldt’s Calm has a CBD-to-THC ratio of 10:1; Relief is 2:1. With Calm I don’t experience paranoia—that running commentary of how high I think people think I am. I can even write on it, which sets it apart from any marijuana I’ve ever sampled.
There’s a new formulation, Focus, that hasn’t hit the market; it has a CBD-to-THC ratio of 4:1 and will be blended with cannabinoids that narrow your attention to the task in front of you without compromising your creative process.
by Mary H.K. Choi, The Atlantic | Read more:
Image: IM_photo / chromatos / Joshua Rainey Photography / Luis Carlos Jimenez del rio / Shutterstock / hmbldt / Zak Bickel
Good Journalism Requires Clarity, Accuracy
Before discussing the events of today in the Senate, I want to note a subsidiary issue, a matter of press coverage. But this is not a secondary issue in terms of importance. Let me also preface this by saying I’m going to focus on another journalist: CNN’s Dana Bash. I don’t know Dana. But I’ve relied on her reporting on CNN for years. So this isn’t meant as an attack on her. To me it is simply an illustration of a broader failure of coverage.
With that, here goes.
The following is the transcript of a brief exchange between Wolf Blitzer and Bash just after Mitch McConnell delivered some brief remarks outside the White House after the Senate GOP conference met with the President. We come in immediately after McConnell finishes speaking.
BLITZER: So there is the Senate Majority Leader, Mitch McConnell. There is a headline there. He really doesn’t want to work with the Democrats if the Republican legislation were to fail. He specifically said that none of the reforms the Republicans want as far as market reform, Medicaid reform would be acceptable to the Democrats. Dana Bash, significant statements from the Majority Leader if it were to fail, basically saying it’s the Republican version, whatever tinkering they do now, has to pass.
BASH: Absolutely. Look, he is speaking the truth in a lot of ways. Philosophically, the two parties are and have been for some time, very, very different on their sort of global approach to health care. I think that people, though, out there looking at this [are] saying, why can’t these parties work together on something that is such a huge part of the economy, that is something that is so vital to everybody’s lives, all of their constituents’ lives. [It’s] mind boggling. But you know what, it happened when the Democrats passed Obamacare. They will tell you from the Obama team that they tried very hard to get Republicans and they weren’t playing ball. But it’s happening now that Republicans are in charge, too.
There’s a lot here.
I should begin by saying that I think Bash is right about how this looks to many voters. But the reality is that it looks this way because the coverage of national health care policy is fundamentally distorted by the imperatives of false balance or forced balance coverage. The idea here is that the two parties are so set in their ideological corners that they can’t constructively come together and find points of compromise to address issues of great public concern. But this sentiment only makes sense if you think both parties are trying to accomplish something approaching the same thing, albeit perhaps with very different strategies. That is simply not true.
This is all of a piece with the drama surrounding the successive CBO scores, each of which has been remarkably similar. The three have shown 24 million, 23 million and most recently 22 million losing their health insurance coverage by 2026. To have the numbers so close, you’ve got to be following a pretty consistent strategy. The Democrats’ goal with the ACA was to increase the number of Americans who had health insurance coverage. They did it with a mix of operating through private insurance companies (the exchanges/marketplaces) and dramatically increasing the number of Americans eligible for Medicaid, which is essentially a national single-payer plan for the poor.
The results have been far from perfect. But the number of people with insurance has risen dramatically since passage of the ACA in 2010. This is an undeniable statistical reality. The Republican plan has been to repeal the bill and take coverage away from the people who received it. That may sound like a partisan way of understanding the situation. But that’s only because we’ve absorbed the skewed coverage.
We talk a lot about how the Republicans’ real focus is getting the ACA money for a big tax cut, which is unquestionably true. You can only get the tax cut if you get back the money that went toward getting people covered. But at a deeper level this is a philosophical dispute, a basic difference in goals. It’s a difference in desired outcomes, not an ideological dispute over the best way to achieve them.
Current Republican ideology, if not all Republicans, posits that it is simply not the responsibility or place of government, certainly not the federal government, to make sure everyone has health care coverage. You can agree or disagree with that premise. But it’s not hard to understand and it is not indefensible. Very few of us think the government should step in if someone doesn’t have enough money to buy a car. We don’t think there’s a right to a home or apartment where every child has their own bedroom. On most things we accept that things are not equal, even if we believe that extremes of inequality are bad for society and even immoral.
But many of us think that healthcare is fundamentally different. It’s not just another market product that we accept people can or can’t get or can or can’t get at certain levels of quality because of wealth, chance, exertion and all the other factors that go into wealth and income. This is both a moral and ideological premise.
One might more sympathetically say that Republicans believe that the market can more reliably and cheaply provide coverage in comparison with the government. But there’s little evidence this is the case with health care coverage – certainly not when it comes to the big picture issues of constructing insurance markets in which some people have dramatically less money and dramatically higher risk. In any case, Republican health care policies since the beginning of this century have shown very little interest in using market mechanisms to expand care. After all, Obamacare is a more progressive and redistributionist implementation of an idea that emerged from Republican think-tanks looking for policy alternatives to a national health care social insurance plan like what we now call “single payer.”
When you try three times to ‘repeal and replace’ and each time you come up with something that takes away coverage from almost everyone who got it under Obamacare, that’s not an accident or a goof. That is what you’re trying to do. ‘Repeal and replace’ was a slogan that made up for simple ‘repeal’ not being acceptable to a lot of people. But in reality, it’s still repeal. Claw back the taxes, claw back the coverage.
Pretending that both parties just have very different approaches to solving a commonly agreed upon problem is really just a lie. It’s not true. One side is looking for ways to increase the number of people who have real health insurance and thus reasonable access to health care and the other is trying to get the government out of the health care provision business with the inevitable result that the opposite will be the case.
If you’re not clear on this fundamental point, the whole thing does get really confusing. How can it be that both sides flatly refuse to work together at all? As Bash puts it, “Why can’t these parties work together on something that is such a huge part of the economy, that is something that is so vital to everybody’s lives, all of their constituents’ lives, [it’s] mind boggling.”
If you had an old building and one group wanted to refurbish and preserve it and the other wanted to tear it down, it wouldn’t surprise you that the two groups couldn’t work together on a solution. It’s an either/or. You’re trying to do two fundamentally opposite things, diametrically opposed. There’s no basis for cooperation or compromise because the fundamental goal is different. This entire health care debate has essentially been the same. Only the coverage has rarely captured that. That’s a big failure. It also explains why people get confused and even fed up.
by Josh Marshall, TPM | Read more:
Tuesday, June 27, 2017
Reading Thoreau at 200
One of the smaller ironies in my life has been teaching Henry David Thoreau at an Ivy League school for half a century. Asking young people to read Thoreau can make me feel like Victor Frankenstein, waiting for a bolt of lightning: look, it’s moving, it’s alive, it’s alive! Most students are indifferent—they memorize, regurgitate, and move serenely on, untouched. Those bound for Wall Street often yawn or snicker at his call to simplify, to refuse, to resist. Perhaps a third of them react with irritation, shading into hatred. How dare he question the point of property, the meaning of wealth? The smallest contingent, and the most gratifying, are those who wake to his message.
Late adolescence is a fine time to meet a work that jolts. These days, Ayn Rand’s stock is stratospheric, J. D. Salinger’s, once untouchable, in decline. WASPs of any gender continue to weep at A River Runs Through It, and first-generation collegians still thrill to Gatsby, even when I remind them that Jay is shot dead in his gaudy swimming pool. In truth, films move them far more; they talk about The Matrix the way my friends once discussed Hemingway or Kerouac. But Walden can still start a fight. The only other book that possesses this galvanizing quality is Moby-Dick.
Down the decades, more than a few students have told me that in bad times they return to Thoreau, hoping for comfort, or at least advice. After the electoral map bled red last fall, I went to him for counsel too, but found mostly controversy. In this bicentennial year of Thoreau’s birth, Walden, or Life in the Woods (1854) is still our most famous antebellum book, and in American history he is the figure who most speaks for nature. The cultural meme of the lone seeker in the woods has become Thoreau’s chief public legacy: regrettable for him, dangerous for us. (...)
Our times have never needed the shock of Thoreau more. We face a government eager to kill all measures of natural protection in the name of corporate profit. Elected officials openly bray that environmentalism “is the greatest threat to freedom.” On federal, state, and local levels, civil liberties and free speech are under severe attack. Thoreau is too; the barriers to reading him as a voice of resistance—or reading him at all—are multiplying swiftly.
First, he is becoming an unperson. From the 1920s to the early 2000s, Walden was required reading in hundreds of thousands of U.S. high school and college survey courses. Today, Thoreau is taught far less widely. The intricate prose of Walden is a tough read in the age of tweets, so much so that several “plain English” translations are now marketed. “Civil Disobedience” was a major target of McCarthyite suppression in the 1950s, and may be again.
Second, as F. Scott Fitzgerald said, in the end authors write for professors, and the scholarly fate of Thoreau is clouded. Until the postwar era, Thoreau studies were largely left to enthusiasts. Academic criticism now argues for many versions of Thoreau (manic-depressive, gay, straight, misogynist, Marxist, Catholic, Buddhist, faerie-fixated). But other aspects still await full study: the family man, the man of spirituality, the man of science—and the man who wrote the Journal.
Those who study his peers, such as Emerson, Melville, or Dickinson, routinely examine each author’s entire output. Thoreau scholars have yet to deal fully or consistently with the Journal, which runs longer than two million words (many still unpublished), and fills 47 manuscript volumes, or 7,000 pages. It is the great untold secret of American letters, and also the distorting lens of Thoreau studies.
I spent years reading manuscript pages of the Journal, watching Thoreau’s insights take form, day upon day, as unmediated prose experiments. Unlike Emerson’s volumes, arrayed in topical order, Thoreau’s Journal follows time. Some notations arise from his surveying jobs, hiking through fields and pausing to note discoveries: a blooming plant, a foraging bird, the look of tree-shadows on water. His eye and mind are relentless. Although the entries are in present tense and seem written currente calamo, offhandedly, with the pen running on, in fact he worked from field notes, usually the next day, turning ground-truth into literature. He finds a riverbank hollow of frost crystals, and replicates exactly how they look, at a distance and then closer, imagining how they formed. His interest is in the objects, but also in how a subject perceives them—the phenomenology of observation and learning. He finds a mushroom, Phallus impudicus, in the form of a penis: “Pray, what was Nature thinking of when she made this? She almost puts herself on a level of those who draw in privies.” His father’s pig escapes and leads its pursuers all over town, helpless before the animal’s cunning. He watches snowflakes land on his coat sleeve: “And they all sing, melting as they sing, of the mysteries of the number six; six, six, six.” None of these entries reached print; they celebrate instead the gift of writing.
Third, Thoreau’s literary genes have split and recombined in our culture, with disturbing results. Organic hipster? Off-the-grid prepper? His popular image has become both blurred and politicized. If Thoreau as American eco-hero peaked around the first Earth Day (1970), today he is derided by conservatives who detest his anti-business sentiments and by postmodern thinkers for whom nature is a suspect green blur. (I still recall one faculty meeting at which a tenured English professor dismissed DNA as all right, “if you believe in that sort of thing.”)
Thoreau has always had detractors, even among his friends. Emerson’s delicate, vicious smear job at his funeral, a masterly takedown in eulogy form that enraged family and friends, set the pattern for enemies like James Russell Lowell (though happily not Lowell’s goddaughter, Virginia Woolf). Our own period sensibilities can flinch when confronted with Thoreaus we did not expect—the efficient capitalist, improving graphite mixes for the family pencil works; the schoolmaster who caned nine pupils at random, then quit in a fury; the early Victorian who may have chosen chastity because his brother John never lived a full life. (Henry’s most explicit statement on the subject of sex, even in the Journal: “I fell in love with a shrub oak.”)
Yet lately I have noted a new wave of loathing. When witnesses to his life still abounded, the prime criticism of Thoreau was Not Genteel. Now, the tag is Massive Hypocrite. Reader comments on Goodreads and Amazon alone are a deluge of angry, misspelled assertions that Thoreau was a rich-boy slacker, a humorless, arrogant, lying elitist. In the trolling of Thoreau by the digital hive mind, the most durable myth is Cookies-and-Laundry: that Thoreau, claiming independence at Walden, brought his washing home to his mother, and enjoyed her cooking besides. Claims by Concord neighbors that he was a pie-stealing layabout appear as early as the 1880s; Emerson’s youngest son felt compelled to rebut them, calling his childhood friend wise, gentle, and lovable.
The most recent eruption is “Pond Scum,” a 2015 New Yorker piece of fractal wrongness by Kathryn Schulz, who paints Thoreau as cold, parochial, egotistical, incurious, misanthropic, illogical, naïve, and cruel—and misses the real story of Walden, his journey from alienation to insight. I have spent a lifetime with Thoreau. I neither love nor hate him, but I know him well. I tracked down his papers, lived in Concord, walked his trails, repeated his journeys, and read, twice, the full Journal. I knew we were in the realm of alternative facts when Schulz dismissed Thoreau as “a well-off Harvard-educated man without dependents.” For that misreading alone, Schulz stands as the Kellyanne Conway of Thoreau commentary. He was the first in his family to attend college, a minority admit (owing to regional bias against French names), working-class to the bone, and after John’s death, the one son, obliged to support his family’s two businesses, boarding house and pencil factory—inhaling graphite dust from the latter fatally weakened his lungs. He was graduated from Harvard, yes, but into a wrenching depression, the Panic of 1837, and during Walden stays, he washed his dishes, floors, and laundry with cold pond water.
Did he go home often? Of course, because his father needed help at the shop. Did he do laundry in town? We do not know, but as the only surviving son of aging boardinghouse-keepers, Thoreau was no stranger to the backbreaking, soul-killing round of 19th-century commercial domestic labor. He knew no other life until he made another one, at Walden.
Pushback on “Pond Scum” was swift and gratifying, and gifted critics such as Donovan Hohn, Jedediah Purdy, and Rebecca Solnit, who have written so well on Thoreau, reassure me that as his third century opens, intelligent readers will continue to find him. But the path to Walden is, increasingly, neglected and overgrown. I constantly meet undergraduates who have never hiked alone, held an after-school job, or lived off schedule. They don’t know the source of milk or the direction of north. They really don’t like to unplug. In seminars, they look up from Walden in cautious wonder: “Can you even say this?” Thoreau worries them; he smells of resistance and of virtue. He is powerfully, compulsively original. He will not settle.
by William Howarth, American Scholar | Read more:
Image: Pablo Sanchez/ Flickr; Photo-illustration by David Herbick
Shaka
“Hang loose,” “Right on,” “Thank you,” “Things are great,” “Take it easy” – in Hawaii, the shaka sign expresses all those friendly messages and more. As kamaaina know, to make the shaka, you curl your three middle fingers while extending your thumb and baby finger. For emphasis, quickly turn your hand back and forth with your knuckles facing outward.
As the story goes, that ubiquitous gesture traces its origins back to the early 1900s when Hamana Kalili worked at Kahuku Sugar Mill. His job as a presser was to feed cane through the rollers to squeeze out its juice. One day, Kalili’s right hand got caught in the rollers, and his middle, index and ring fingers were crushed.
After the accident, the plantation owners gave Kalili a new job as the security officer for the train that used to run between Sunset Beach and Kaaawa. Part of his job was to prevent kids from jumping on the train and taking joyrides as it slowly approached and departed Kahuku Station.
If Kalili saw kolohe (mischievous) kids trying to get on the train, he would yell and wave his hands to stop them. Of course, that looked a bit strange since he had only two fingers on his right hand. The kids adopted that gesture; it became their signal to indicate Kalili was not around or not looking, and the coast was clear for them to jump on the train.
According to a March 31, 2002 Honolulu Star-Bulletin story, Kalili was the choir director at his ward (congregation) of the Church of Jesus Christ of Latter-day Saints (Mormon) in Laie. Even though his back was to the congregation, worshippers recognized him when he raised his hands to direct the choir because of his missing fingers.
Kalili also served as “king” of the church fundraiser – complete with a hukilau, luau and show – that was held annually for years until the 1970s. Photos show him greeting attendees with his distinctive wave.
The term “shaka” is not a Hawaiian word. It’s attributed to David “Lippy” Espinda, a used car pitchman who ended his TV commercials in the 1960s with the gesture and an enthusiastic “Shaka, brah!” In 1976, the shaka sign was a key element of Frank Fasi’s third campaign for mayor of Honolulu. He won that race and used the shaka icon for three more successful mayoral bids, serving six terms in all.
In Hawaii, everyone from keiki to kupuna uses the shaka to express friendship, gratitude, goodwill, encouragement and unity. A little wave of the hand spreads a lot of aloha.
by Cheryl Chee Tsutsumi, Hawaiian Airlines | Read more:
Image: uncredited
Against Murderism
I.
Alice is a white stay-at-home mother who is moving to a new neighborhood. One of the neighborhoods in her city is mostly Middle Eastern immigrants; Alice has trouble understanding their accents, and when they socialize they talk about things like which kinds of hijab are in fashion right now. The other neighborhood is mostly white, and a lot of them are New Reformed Eastern Evangelical Episcopalian like Alice, and everyone on the block is obsessed with putting up really twee overdone Christmas decorations just like she is. She decides to move to the white neighborhood, which she thinks is a better cultural fit. Is Alice racist?
Bob is the mayor of Exampleburg, whose bus system has been losing a lot of money lately and will have to scale back its routes. He decides that the bus system should cut its least-used route. This turns out to be a bus route in a mostly-black neighborhood, which has only one-tenth the ridership of the other routes but costs just as much. Other bus routes, most of which go through equally poor mostly-white neighborhoods, are not affected. Is Bob racist?
Carol is a gay libertarian who is a two-issue voter: free markets and gay rights. She notices that immigrants from certain countries seem to be more socialist and more anti-gay than the average American native. She worries that they will become citizens and vote for socialist anti-gay policies. In order to prevent this, she supports a ban on immigration from Africa, Latin America, and the Middle East. Is Carol racist?
Dan is a progressive member of the ACLU and NAACP who has voted straight Democrat the last five elections. He is studying psychology, and encounters The Bell Curve and its theory that some of the difference in cognitive skills between races is genetic. After looking up various arguments, counterarguments, and the position of experts in the field, he decides that this is probably true. He avoids talking about this because he expects other people would misinterpret it and use it as a justification for racism; he thinks this would be completely unjustified since a difference of a few IQ points has no effect on anyone’s basic humanity. He remains active in the ACLU, the NAACP, and various anti-racist efforts in his community. Is Dan racist?
Eric is a restaurateur who is motivated entirely by profit. He moves to a very racist majority-white area where white people refuse to dine with black people. Since he wants to attract as many customers as possible, he sets up a NO BLACKS ALLOWED sign in front of his restaurant. Is Eric racist?
Fiona is an honest-to-goodness white separatist. She believes that racial groups are the natural unit of community, and that they would all be happiest set apart from each other. She doesn’t believe that any race is better than any other, just that they would all be happier if they were separate and able to do their own thing. She supports a partition plan that gives whites the US Midwest, Latinos the Southwest, and blacks the Southeast, leaving the Northeast and Northwest as multiracial enclaves for people who like that kind of thing. She would not use genocide to eliminate other races in these areas, but hopes that once the partition is set up races would migrate of their own accord. She works together with black separatist groups, believing that they share a common vision, and she hopes their countries will remain allies once they are separate. Is Fiona racist?
II.
As usual, the answer is that “racism” is a confusing word that serves as a mishmash of unlike concepts. Here are some of the definitions people use for racism:
1. Definition By Motives: An irrational feeling of hatred toward some race that causes someone to want to hurt or discriminate against them.
2. Definition By Belief: A belief that some race has negative qualities or is inferior, especially if this is innate/genetic.
3. Definition By Consequences: Anything whose consequence is harm to minorities or promotion of white supremacy, regardless of whether or not this is intentional.
Some thoughts:
Definition By Consequences Doesn’t Match Real-World Usage
I know that Definition By Consequences is the really sophisticated one, the one that scholars in the area are most likely to unite around. But I also think it’s uniquely bad at capturing the way anyone uses the word “racism” in real life. Let me give four examples.
First, by this definition, racism can never cause anything. People like to ask questions like “Did racism contribute to electing Donald Trump?” Under this definition, the question makes no sense. It’s barely even grammatical. “Did things whose consequence is harm to minorities, whether or not such harm is intentional, contribute to the election of Donald Trump?” Huh? If racism is just a description of what consequences something has, then it can’t be used as a causal explanation.
Second, by this definition, many racist things would be good. Suppose some tyrant wants to kill the ten million richest white people, then redistribute their things to black people. This would certainly challenge white supremacy and help minorities. So by this definition, resisting this tyrant would be racist. But obviously this tyrant is evil and resisting him is the right thing to do. So under this definition, good policies which deserve our support can nevertheless be racist. “This policy is racist” can no longer be a strong argument against a policy, even when it’s true.
Third, by this definition, it doesn’t make a lot of sense to say a particular person is racist. Racism is a property of actions, not of humans. While there are no doubt some broad patterns in people, the question “Is Bob racist?” sounds very odd in this framework, sort of like “Does Bob cause poverty?” No doubt Bob has done a few things which either help or hurt economic equality in some small way. And it’s possible that Bob is one of the rare people who organizes his life around crusading against poverty, or around crusading against attempts to end poverty. But overall the question will get you looked at funny. Meanwhile, questions like “Is Barack Obama racist?” should lead to a discussion of Obama’s policies and which races were helped or hurt by them; issues like Obama’s own race and his personal feelings shouldn’t come up at all.
Fourth, by this definition, it becomes impossible to assess the racism of an action without knowing all its consequences. Suppose the KKK holds a march through some black neighborhood to terrorize the residents. But in fact the counterprotesters outnumber the marchers ten to one, and people are actually reassured that the community supports them. The march is well-covered on various news organizations, and outrages people around the nation, who donate a lot of money to anti-racist organizations and push for stronger laws against the KKK. Plausibly, the net consequences of the march were (unintentionally) very good for black people and damaging to white supremacy. Therefore, by the Sophisticated Definition, the KKK marching through the neighborhood to terrorize black residents was not racist. In fact, for the KKK not to march in this situation would be racist!
So Definition By Consequences implies that racism can never be pointed to as a cause of anything, that racist policies can often be good, that nobody “is a racist” or “isn’t a racist”, and that sometimes the KKK trying to terrorize black people is less racist than them not trying to do this. Not only have I never heard anyone try to grapple with these implications, I see no sign anyone has ever thought of them. And now that I’ve brought them up, I don’t think anyone will accept them as true, or even worry about the discrepancy.
I think this is probably because it’s a motte-and-bailey, more something that gets trotted out to win arguments than anything people actually use in real life.
by Scott Alexander, Slate Star Codex | Read more:
Is the Staggeringly Profitable Business of Scientific Publishing Bad For Science?
In 2011, Claudio Aspesi, a senior investment analyst at Bernstein Research in London, made a bet that the dominant firm in one of the most lucrative industries in the world was headed for a crash. Reed-Elsevier, a multinational publishing giant with annual revenues exceeding £6bn, was an investor’s darling. It was one of the few publishers that had successfully managed the transition to the internet, and a recent company report was predicting yet another year of growth. Aspesi, though, had reason to believe that that prediction – along with those of every other major financial analyst – was wrong.
The core of Elsevier’s operation is in scientific journals, the weekly or monthly publications in which scientists share their results. Despite the narrow audience, scientific publishing is a remarkably big business. With total global revenues of more than £19bn, it weighs in somewhere between the recording and the film industries in size, but it is far more profitable. In 2010, Elsevier’s scientific publishing arm reported profits of £724m on just over £2bn in revenue. It was a 36% margin – higher than Apple, Google, or Amazon posted that year.
But Elsevier’s business model seemed a truly puzzling thing. In order to make money, a traditional publisher – say, a magazine – first has to cover a multitude of costs: it pays writers for the articles; it employs editors to commission, shape and check the articles; and it pays to distribute the finished product to subscribers and retailers. All of this is expensive, and successful magazines typically make profits of around 12-15%.
The core of Elsevier’s operation is in scientific journals, the weekly or monthly publications in which scientists share their results. Despite the narrow audience, scientific publishing is a remarkably big business. With total global revenues of more than £19bn, it weighs in somewhere between the recording and the film industries in size, but it is far more profitable. In 2010, Elsevier’s scientific publishing arm reported profits of £724m on just over £2bn in revenue. It was a 36% margin – higher than Apple, Google, or Amazon posted that year.
But Elsevier’s business model seemed a truly puzzling thing. In order to make money, a traditional publisher – say, a magazine – first has to cover a multitude of costs: it pays writers for the articles; it employs editors to commission, shape and check the articles; and it pays to distribute the finished product to subscribers and retailers. All of this is expensive, and successful magazines typically make profits of around 12-15%.
The way to make money from a scientific article looks very similar, except that scientific publishers manage to duck most of the actual costs. Scientists create work under their own direction – funded largely by governments – and give it to publishers for free; the publisher pays scientific editors who judge whether the work is worth publishing and check its grammar, but the bulk of the editorial burden – checking the scientific validity and evaluating the experiments, a process known as peer review – is done by working scientists on a volunteer basis. The publishers then sell the product back to government-funded institutional and university libraries, to be read by scientists – who, in a collective sense, created the product in the first place.
It is as if the New Yorker or the Economist demanded that journalists write and edit each other’s work for free, and asked the government to foot the bill. Outside observers tend to fall into a sort of stunned disbelief when describing this setup. A 2004 parliamentary science and technology committee report on the industry drily observed that “in a traditional market suppliers are paid for the goods they provide”. A 2005 Deutsche Bank report referred to it as a “bizarre” “triple-pay” system, in which “the state funds most research, pays the salaries of most of those checking the quality of research, and then buys most of the published product”.
Scientists are well aware that they seem to be getting a bad deal. The publishing business is “perverse and needless”, the Berkeley biologist Michael Eisen wrote in a 2003 article for the Guardian, declaring that it “should be a public scandal”. Adrian Sutton, a physicist at Imperial College, told me that scientists “are all slaves to publishers. What other industry receives its raw materials from its customers, gets those same customers to carry out the quality control of those materials, and then sells the same materials back to the customers at a vastly inflated price?” (A representative of RELX Group, the official name of Elsevier since 2015, told me that it and other publishers “serve the research community by doing things that they need that they either cannot, or do not do on their own, and charge a fair price for that service”.)
Many scientists also believe that the publishing industry exerts too much influence over what scientists choose to study, which is ultimately bad for science itself. Journals prize new and spectacular results – after all, they are in the business of selling subscriptions – and scientists, knowing exactly what kind of work gets published, align their submissions accordingly. This produces a steady stream of papers, the importance of which is immediately apparent. But it also means that scientists do not have an accurate map of their field of inquiry. Researchers may end up inadvertently exploring dead ends that their fellow scientists have already run up against, solely because the information about previous failures has never been given space in the pages of the relevant scientific publications. A 2013 study, for example, reported that half of all clinical trials in the US are never published in a journal.
According to critics, the journal system actually holds back scientific progress. In a 2008 essay, Dr Neal Young of the National Institutes of Health (NIH), which funds and conducts medical research for the US government, argued that, given the importance of scientific innovation to society, “there is a moral imperative to reconsider how scientific data are judged and disseminated”.
Aspesi, after talking to a network of more than 25 prominent scientists and activists, had come to believe the tide was about to turn against the industry that Elsevier led. More and more research libraries, which purchase journals for universities, were claiming that their budgets were exhausted by decades of price increases, and were threatening to cancel their multi-million-pound subscription packages unless Elsevier dropped its prices. State organisations such as the American NIH and the German Research Foundation (DFG) had recently committed to making their research available through free online journals, and Aspesi believed that governments might step in and ensure that all publicly funded research would be available for free, to anyone. Elsevier and its competitors would be caught in a perfect storm, with their customers revolting from below, and government regulation looming above.
In March 2011, Aspesi published a report recommending that his clients sell Elsevier stock. A few months later, in a conference call between Elsevier management and investment firms, he pressed the CEO of Elsevier, Erik Engstrom, about the deteriorating relationship with the libraries. He asked what was wrong with the business if “your customers are so desperate”. Engstrom dodged the question. Over the next two weeks, Elsevier stock tumbled by more than 20%, losing £1bn in value. The problems Aspesi saw were deep and structural, and he believed they would play out over the next half-decade – but things already seemed to be moving in the direction he had predicted.
Over the next year, however, most libraries backed down and committed to Elsevier’s contracts, and governments largely failed to push an alternative model for disseminating research. In 2012 and 2013, Elsevier posted profit margins of more than 40%. The following year, Aspesi reversed his recommendation to sell. “He listened to us too closely, and he got a bit burned,” David Prosser, the head of Research Libraries UK, and a prominent voice for reforming the publishing industry, told me recently. Elsevier was here to stay.
Aspesi was not the first person to incorrectly predict the end of the scientific publishing boom, and he is unlikely to be the last. It is hard to believe that what is essentially a for-profit oligopoly functioning within an otherwise heavily regulated, government-funded enterprise can avoid extinction in the long run. But publishing has been deeply enmeshed in the science profession for decades. Today, every scientist knows that their career depends on being published, and professional success is especially determined by getting work into the most prestigious journals. The long, slow, nearly directionless work pursued by some of the most influential scientists of the 20th century is no longer a viable career option. Under today’s system, the father of genetic sequencing, Fred Sanger, who published very little in the two decades between his 1958 and 1980 Nobel prizes, may well have found himself out of a job.
Even scientists who are fighting for reform are often not aware of the roots of the system: how, in the boom years after the second world war, entrepreneurs built fortunes by taking publishing out of the hands of scientists and expanding the business on a previously unimaginable scale. And no one was more transformative and ingenious than Robert Maxwell, who turned scientific journals into a spectacular money-making machine that bankrolled his rise in British society. Maxwell would go on to become an MP, a press baron who challenged Rupert Murdoch, and one of the most notorious figures in British life. But his true importance was far larger than most of us realise. Improbable as it might sound, few people in the last century have done more to shape the way science is conducted today than Maxwell.
by Stephen Buranyi, The Guardian | Read more:
Image: Dom McKenzie
Monday, June 26, 2017
A Utopia for a Dystopian Age
The term “utopia” was coined 500 years ago. By conjoining the Greek adverb “ou” (“not”) and the noun “topos” (“place”), the English humanist and politician Thomas More conceived of a place that is not — literally a “nowhere” or “noplace.” More’s learned readers would also have recognized another pun. The pronunciation of “utopia” can just as well be associated with “eu-topia,” which in Greek means “happy place.” Happiness, More might have suggested, is something we can only imagine. And yet imagining it, as philosophers, artists and politicians have done ever since, is far from pointless.
More was no doubt a joker. “Utopia,” his fictional travelogue about an island of plenty and equality, is told by a character whose name, Hythloday, yet another playful conjoining of Greek words, signifies something like “nonsense peddler.” Although More comes across as being quite fond of his noplace, he occasionally interrupts the narrative by warning against the islanders’ rejection of private property. Living under the reign of the autocratic Henry VIII, and being a prominent social figure, More might not have wanted to rock the boat too much.
Precisely that — rocking the boat — has, however, been the underlying aim of the great utopias that have shaped Western culture. It has animated and informed progressive thinking, providing direction and a sense of purpose to struggles for social change and emancipation. From the vantage point of the utopian imagination, history — that gushing river of seemingly contingent micro-events — has taken on meaning, becoming a steadfast movement toward the sought-for condition supposedly able to justify all previous striving and suffering.
Utopianism can be dreamy in a John Lennon “Imagine”-esque way. Yet it has also been ready to intervene and bring about concrete transformation.
Utopias come in different forms. Utopias of desire, as in Hieronymus Bosch’s painting “The Garden of Earthly Delights,” focus on happiness, tying it to the satisfaction of needs. Such utopias, demanding the complete alleviation of pain and sometimes glorious spaces of enjoyment and pleasure, tend, at least in modern times, to rely on technology. The utopias of technology see social, bodily and environmental ills as requiring technological solutions. We know such solutions all too well: ambitious city-planning schemes and robotics as well as dreams of cosmic expansion and immortality. (...)
Today, the utopian impulse seems almost extinguished. The utopias of desire make little sense in a world overrun by cheap entertainment, unbridled consumerism and narcissistic behavior. The utopias of technology are less impressive than ever now that — after Hiroshima and Chernobyl — we are fully aware of the destructive potential of technology. Even the internet, perhaps the most recent candidate for technological optimism, turns out to have a number of potentially disastrous consequences, among them a widespread disregard for truth and objectivity, as well as an immense increase in the capacity for surveillance. The utopias of justice seem largely to have been eviscerated by 20th-century totalitarianism. After the Gulag Archipelago, the Khmer Rouge’s killing fields and the Cultural Revolution, these utopias seem both philosophically and politically dead.
The great irony of all forms of utopianism can hardly escape us. They say one thing, but when we attempt to realize them they seem to imply something entirely different. Their demand for perfection in all things human is often pitched at such a high level that they come across as aggressive and ultimately destructive. Their rejection of the past, and of established practice, is subject to its own logic of brutality.
And not only has the utopian imagination been stung by its own failures, it has also had to face up to the two fundamental dystopias of our time: those of ecological collapse and thermonuclear warfare. The utopian imagination thrives on challenges. Yet these are not challenges but chillingly realistic scenarios of utter destruction and the eventual elimination of the human species. Add to that the profoundly anti-utopian nature of the right-wing movements that have sprung up in the United States and Europe and the prospects for any kind of meaningful utopianism may seem bleak indeed. In matters social and political, we seem doomed if not to cynicism, then at least to a certain coolheadedness.
Anti-utopianism may, as in much recent liberalism, call for controlled, incremental change. The main task of government, Barack Obama ended up saying, is to avoid doing stupid stuff. However, anti-utopianism may also become atavistic and beckon us to return, regardless of any cost, to an idealized past. In such cases, the utopian narrative gets replaced by myth. And while the utopian narrative is universalistic and future-oriented, myth is particularistic and backward-looking. Myths purport to tell the story of us, our origin and of what it is that truly matters for us. Exclusion is part of their nature.
Can utopianism be rescued? Should it be? To many people the answer to both questions is a resounding no.
There are reasons, however, to think that a fully modern society cannot do without a utopian consciousness. To be modern is to be oriented toward the future. It is to be open to change, even radical change, when called for. With its willingness to ride roughshod over all established certainties and ways of life, classical utopianism was too grandiose, too rationalist and ultimately too cold. We need the ability to look beyond the present. But we also need More’s insistence on playfulness. Once utopias are embodied in ideologies, they become dangerous and even deadly.
by Espen Hammer, NY Times | Read more:
Image: Hieronymus Bosch
Labels:
Critical Thought,
Environment,
Psychology,
Technology
A Cyberattack ‘the World Isn’t Ready For’
There have been times over the last two months when Golan Ben-Oni has felt like a voice in the wilderness.
On April 29, someone hit his employer, IDT Corporation, with two cyberweapons that had been stolen from the National Security Agency. Mr. Ben-Oni, the global chief information officer at IDT, was able to fend them off, but the attack left him distraught.
In 22 years of dealing with hackers of every sort, he had never seen anything like it. Who was behind it? How did they evade all of his defenses? How many others had been attacked but did not know it?
Since then, Mr. Ben-Oni has been sounding alarm bells, calling anyone who will listen at the White House, the Federal Bureau of Investigation, the New Jersey attorney general’s office and the top cybersecurity companies in the country to warn them about an attack that may still be invisibly striking victims undetected around the world. (...)
Two weeks after IDT was hit, the cyberattack known as WannaCry ravaged computers at hospitals in England, universities in China, rail systems in Germany, even auto plants in Japan. No doubt it was destructive. But what Mr. Ben-Oni had witnessed was much worse, and with all eyes on the WannaCry destruction, few seemed to be paying attention to the attack on IDT’s systems — and most likely others around the world.
The strike on IDT, a conglomerate with headquarters in a nondescript gray building here with views of the Manhattan skyline 15 miles away, was similar to WannaCry in one way: Hackers locked up IDT data and demanded a ransom to unlock it.
But the ransom demand was just a smoke screen for a far more invasive attack that stole employee credentials. With those credentials in hand, hackers could have run free through the company’s computer network, taking confidential information or destroying machines.
Worse, the assault, which has never been reported before, was not spotted by some of the nation’s leading cybersecurity products, the top security engineers at its biggest tech companies, government intelligence analysts or the F.B.I., which remains consumed with the WannaCry attack.
Were it not for a digital black box that recorded everything on IDT’s network, along with Mr. Ben-Oni’s tenacity, the attack might have gone unnoticed.
Scans for the two hacking tools used against IDT indicate that the company is not alone. In fact, tens of thousands of computer systems all over the world have been “backdoored” by the same N.S.A. weapons. Mr. Ben-Oni and other security researchers worry that many of those other infected computers are connected to transportation networks, hospitals, water treatment plants and other utilities. (...)
The WannaCry attack — which the N.S.A. and security researchers have tied to North Korea — employed one N.S.A. cyberweapon; the IDT assault used two.
Both WannaCry and the IDT attack used a hacking tool the agency had code-named EternalBlue. The tool took advantage of unpatched Microsoft servers to automatically spread malware from one server to another, so that within 24 hours North Korea’s hackers had spread their ransomware to more than 200,000 servers around the globe.
The attack on IDT went a step further with another stolen N.S.A. cyberweapon, called DoublePulsar. The N.S.A. used DoublePulsar to penetrate computer systems without tripping security alarms. It allowed N.S.A. spies to inject their tools into the nerve center of a target’s computer system, called the kernel, which manages communications between a computer’s hardware and its software.
In the pecking order of a computer system, the kernel is at the very top, allowing anyone with secret access to it to take full control of a machine. It is also a dangerous blind spot for most security software, allowing attackers to do what they want and go unnoticed. In IDT’s case, attackers used DoublePulsar to steal an IDT contractor’s credentials. Then they deployed ransomware in what appears to be a cover for their real motive: broader access to IDT’s businesses.
Sunday, June 25, 2017
Did the Fun Work?
If anything can make enchantment terse, it is the German compound noun. Through the bluntest lexical conglomeration, these words capture concepts so ineffable that they would otherwise float away. Take the Austrian art historian Alois Riegl’s term, Kunstwollen—Kunst (art) + wollen (will), or “will to art”—later defined by Erwin Panofsky as “the sum or unity of creative powers manifested in any given artistic phenomenon.” (Panofsky then appended to this mouthful a footnote parsing precisely what he meant by “artistic phenomenon.”) A particular favorite compound of mine is Kurort, literally “cure-place,” but better translated as “spa town” or “health resort.” There’s an elegiac romance to Kurort that brings to mind images of parasols and gouty gentlemen taking the waters, the world of Thomas Mann’s Magic Mountain. Nevertheless, Kurort’s cocktail of connotations—mixing leisure, self-improvement, health, physical pleasure, relaxation, gentility, and moral rectitude—remains as fresh as ever. Yoga retreats and team-building ropes courses may have all but replaced mineral baths, but wellness vacations and medical tourism are still big business.
What continues to fuel this industry (by now a heritage one) is the durable belief that leisure ought to achieve something—a firmer bottom, new kitchen abilities, triumph over depression. In fact, why not go for the sublime leisure-success trifecta: physical, practical, and spiritual? One vacation currently offered in Sri Lanka features cycling, a tea tutorial, and a visit to a Buddhist temple, a package that promises to be active (but not draining), educational (but not tedious), and fun (but not dissolute). The “Experiences” section of Airbnb advertises all kinds of self- and life-improving activities, including a Korean food course, elementary corsetry, and even a microfinance workshop. (...)
Leisure, it turns out, requires measurement and evaluation. First of all, our irksome question remains: When partaking of leisure, how can you know that you aren’t slipping into idleness? Second, because leisure is a deserved reward, it should be fun, amusing, diverting, or otherwise pleasurable. This requirement begets another set of questions, perhaps even more existential in scope: How do leisure seekers even know whether they’re enjoying themselves, and if they are, whether the enjoyment . . . worked? Was the restoration sufficient? The self improved? The fun had?
These questions are most easily, if superficially, answered via the medley of social media platforms and portable devices bestowed on us by the wonders of consumer-product-driven innovation. Fitbit points, likes, and heart-eyed emoji faces have become the units of measurement by which we evaluate our own experiences. These tokens offer reassurance that our time is being optimally spent; they represent our leisure accomplishments. Social media and camera-equipped portable devices have given us the opportunity to solicit positive feedback from our friends, and indeed from the world at large, nonstop. Even when we are otherwise occupied or asleep, our photos and posts beam out, ever ready to scoop up likes and faves. Yet under the guise of fun and “connection,” we are simply extending the Taylorist drive to document, measure, and analyze into the realm of leisure. Thinkers from Frank Lloyd Wright to John Maynard Keynes once predicted that technology would free us from toil, but as we all know, the devices it has yielded have only ended up increasing workloads. They have also taken command of leisure, yoking it to the constant labor of self-branding required under neoliberal capitalism, and making us complicit in our own surveillance to boot.
Not that there’s anything inherently wrong or self-exploitative about showing off your newly acquired basket-weaving skills on Instagram—and anyway, the line between leisure and labor is not always clearly drawn. From gardening to tweeting, labor often overlaps with pleasure and entertainment under certain conditions. But the fact that the platforms on which we document, communicate, and measure our leisure are owned by massive for-profit corporations that trade upon our freely given content ought to make us wonder not only what, exactly, they might be getting out of all this activity, but also how it frames our own ideas of what leisure is. If the satisfaction of posting on social media derives from garnering likes in the so-called attention economy, then posters will, according to a crude market logic, select what they believe to be the most “likeable” content for posting, and furthermore, will often alter their behavior to generate precisely that content. The mirror of social media metrics offers to show us whether we enjoyed ourselves, but just as with mirrors, we have to work to get back the reflection we want to see.
So Many Feels
The cult of productivity is a greedy thing; it sucks up both the time we spend in leisure and the very feelings it stirs in us. Happiness and other pleasant sensations must themselves become productive, which is why we talk of leisure being “restorative” or “rejuvenating.” Through coffee breaks and shorter workweeks, employers from municipal governments to investment banks are encouraging their workers to take time off, all under the guise of benevolent care. But these schemes are ultimately aimed at maximizing productivity and quelling discontent (and besides, employers maintain the power to retract these privileges at their own whims). Work depletes us emotionally, physically, and intellectually, and that is why we are entitled to periods of leisure—not because leisure is a human right or good in and of itself, but because it enables us to climb back onto the hamster wheel of marketplace activity in good cheer.
As neoliberalism reduces happiness to its uses, it steers our interests toward confirming our own feelings via external assessment. This assessment just so happens to require apparatuses (smartphones, laptops, Apple watches) and measurement units (faves, shares, star ratings) that turn us into eager buyers of consumer products and require our willing submission to corporate surveillance. None of which means that your Airbnb truffle-hunting experience—as well as subsequently posting about it and basking in the likes—didn’t make you happy. It simply means that the events and behavior that brought about this happiness coincide with the profit motives of a vast network of institutions that extends far beyond any one individual.
So they want us to buy their stuff and hand over our data. Fine. But why do they demand that we be so insistently, outwardly happy?
by Miya Tokumitsu, The Baffler | Read more:
Image: via