Friday, November 24, 2017
Why Did We Start Farming?
When our ancestors began to control fire, most likely somewhere in Africa around 400,000 years ago, the planet was set on a new course. We have little idea and even less evidence of how early humans made fire; perhaps they carried around smouldering bundles of leaves from forest fires, or captured the sparks thrown off when chipping stone or rubbing sticks together. However it happened, the human control of fire made an indelible mark on the earth’s ecosystems, and marked the beginning of the Anthropocene – the epoch in which humans have had a significant impact on the planet.
In Against the Grain James Scott describes these early stages as a ‘“thin” Anthropocene’, but ever since, the Anthropocene has been getting thicker. New layers of human impact were added by the adoption of farming about ten thousand years ago, the invention of the steam engine around 1780, and the dropping of the atomic bomb in 1945. Today the Anthropocene is so dense that we have virtually lost sight of anything that could be called ‘the natural world’.
Fire changed humans as well as the world. Eating cooked food transformed our bodies; we developed a much shorter digestive tract, meaning that more metabolic energy was available to grow our brains. At the same time, Homo sapiens became domesticated by its dependence on fire for warmth, protection and fuel. If this was the start of human progress towards ‘civilisation’, then – according to the conventional narrative – the next step was the invention of agriculture around ten thousand years ago. Farming, it is said, saved us from a dreary nomadic Stone Age hunter-gatherer existence by allowing us to settle down, build towns and develop the city-states that were the centres of early civilisations. People flocked to them for the security, leisure and economic opportunities gained from living within thick city walls. The story continues with the collapse of the city-states and barbarian insurgency, plunging civilised worlds – ancient Mesopotamia, China, Mesoamerica – into their dark ages. Thus civilisations rise and fall. Or so we are told.
The perfectly formed city-state is the ideal, deeply ingrained in the Western psyche, on which our notion of the nation-state is founded, ultimately inspiring Donald Trump’s notion of a ‘city’ wall to keep out the barbarian Mexican horde, and Brexiters’ desire to ‘take back control’ from insurgent European bureaucrats. But what if the conventional narrative is entirely wrong? What if ancient ruins testify to an aberration in the normal state of human affairs rather than a glorious and ancient past to whose achievements we should once again aspire? What if the origin of farming wasn’t a moment of liberation but of entrapment? Scott offers an alternative to the conventional narrative that is altogether more fascinating, not least in the way it omits any self-congratulation about human achievement. His account of the deep past doesn’t purport to be definitive, but it is surely more accurate than the one we’re used to, and it implicitly exposes the flaws in contemporary political ideas that ultimately rest on a narrative of human progress and on the ideal of the city/nation-state.
Why did people start farming? At the ‘Man the Hunter’ symposium in Chicago in 1966, Marshall Sahlins drew on research from the likes of Richard B. Lee among the !Kung of the Kalahari to argue that hunter-gatherers enjoyed the ‘original affluent society’. Even in the most marginal environments, he said, hunter-gatherers weren’t engaged in a constant struggle for survival, but had a leisurely lifestyle. Sahlins and his sources may have pushed the argument a little too far, neglecting to consider, for instance, the time spent preparing food (lots of mongongo nuts to crack). But their case was strong enough to deal a severe blow to the idea that farming was salvation for hunter-gatherers: however you cut it, farming involves much higher workloads and incurs more physical ailments than relying on the wild. And the more we discover, as Scott points out, the better a hunter-gatherer diet, health and work-life balance look.
This is especially true of the hunter-gatherers who dwelled in the wetlands where the first farming communities developed, in the Fertile Crescent, the arc of South-West Asia now covered by Jordan, the Occupied Palestinian Territories, Israel, Lebanon, Syria, southern Turkey, Iran and Iraq. Scott’s book focuses on Mesopotamia – the land between the Tigris and the Euphrates where the first city-states also appeared – though it takes many diversions into ancient China, Mesoamerica, and the Roman and Greek ancient worlds. Until about ten thousand years ago, Mesopotamia had been a world of hunter-gatherers with access to a huge range of resources: reeds and sedges for building and food, a great variety of edible plants (clubrush, cat’s-tails, water lily, bulrush), tortoises, fish, molluscs, crustaceans, birds, waterfowl, small mammals and migrating gazelles, which were the chief source of protein. The wild larder was routinely replenished by the annual cycle of the ripening of fruits and wild vegetables, and the seasonal changes that brought the arrival of migratory species.
Wetland environments were available to hunter-gatherers elsewhere in the world too. In China’s Hangzhou Bay, phenomenally well-preserved waterlogged sites show that hunter-gatherers became sedentary amid a bounteous range of wild resources. I do my own fieldwork in Wadi Faynan, in southern Jordan, which is now an arid, largely treeless landscape, but 12,000 years ago a perennial river flowed there. Where it joined with the river of Wadi Dana, an oasis-like niche was created. That is where the early Neolithic site of WF16 is now located (we call it Neolithic even though there is no trace of domesticated crops and animals). A dense cluster of about thirty semi-subterranean dwellings was constructed there between 12,500 and 10,500 years ago by hunter-gatherers who were clearly enjoying a diverse and resilient set of local resources: hunting wild goats, trapping birds, collecting figs, wild grass, nuts and so forth. I suspect they were also practising some form of environmental management, setting fires to promote young shoots, building small dams to retain and divert water, and undertaking selective culls among wild herds to sustain animal populations.
The key to food security was diversity: if, by chance, a particular foodstuff gave out, there were always more to choose from. And so hunter-gatherers could become sedentary if they wished, without having to grow crops or rear livestock. The first cultivation of barley and wheat came from the slight modification of wild stands – weeding, removing pests, transplanting, sowing seeds into alluvial soils. This would have provided hunter-gatherers with a new source of food at the cost of little additional effort. The mystery is why cereal-farming came to be so dominant. Why hunter-gatherers passed up their affluent lifestyle in favour of far more onerous and risky existences growing a narrow range of crops and managing livestock is a fundamental question to which we have no good answer. Was it by choice, or was that first sowing of seed a trap, locking people into a seasonal cycle of planting and harvesting from which we have been unable to escape?
by Steve Mithen, LRB | Read more:
Image: uncredited
Ushering My Father to a (Mostly) Good Death
“How about Tuesday?”
My father is propped up on three pillows in bed, talking logistics with my sister and me. We’ve just brought him his Ovaltine and insulin.
“Or would Thursday be better? That’s a couple days after the kids are done with camp.”
“Ok, let’s plan on Thursday.”
My father is scheduling his death. Sort of. He’s deciding when to stop going to dialysis. That starts the bodily clock that will lead to his falling into sleep more and more often, and then into a coma, and eventually nothingness.
He is remarkably sanguine about the prospect, which we’ve all had a long time to consider. A master of understatement, he promises it’s not a terribly hard decision, to stop treatment and let nature take its course, “but it is a bit irreversible.”
If I’m honest, he’s ready now to stop dialysis. It’s a brutal routine for someone in his condition, incredibly weak and fragile from living with end-stage pancreatic cancer, kidney disease, and diabetes. It’s painful for him to hold his head and neck up, which he has to do to get to the dialysis center. During the procedure, he must be closely watched so his blood pressure doesn’t plummet.
But he’s always been a generous man. He’s willing to sacrifice his own comfort in his dying days for the convenience of his family, since we all want to be present at the end. If he pushes his last day of dialysis to Tuesday, then my sister can still go on the California vacation she’d been planning with her family. If he pushes it to Thursday, I can still take the journalism fellowship I’d accepted. It will also give his grandchildren time to finish up their summer jobs and fly down.
Are we selfish for allowing him to make these choices? Possibly. But he insists, as he always has, that living for his children’s and grandchildren’s happiness is what gives his existence meaning. We hope that’s true. This is a man who spent his career as a professional decision analyst but always picked the worst-colored ties.
As it happens, though, when Thursday comes, he just can’t get out of the house. He is practically crying from discomfort as the caretaker lifts him off the bed onto his rollator, to start the journey up the stair lift and into the car. I tell him it’s okay. He can get back in bed. He looks so relieved when we rest his head back on the pillows.
I cancel my Amtrak ticket home to western Massachusetts and tell my husband not to expect me for the rest of the month. (...)
I’m at the kitchen table trying to figure out which insulin pen hasn’t yet reached its expiration date. I’m also making my second Nespresso of the morning. And I’m eavesdropping on my parents through the baby monitor.
We tried different methods of communication, and nothing worked very well. My father’s room is on the ground floor, and most of the house’s activity is a floor above. He would try clanking the metal bar above his hospital bed with a spoon, but it wasn’t loud enough and he’d be exhausted by the time someone noticed. He used to be able to use his cell phone to call the house landline, but his fine motor skills got too shaky to dial the right number.
We finally realized the best method was the same one we use for infants. That way, when he talks or moans or coughs, we hear it on the next floor — as long as we have the volume up and the remote monitor nearby. My mom once heard his ghostly voice calling out in pain from the upstairs bathroom, where I’d left the monitor by accident. I almost knocked over the dog running downstairs to respond. I’ve taken to keeping it tied to my belt with a string.
Of course, he loses something with this method: privacy. He forgets that any conversation he has — on the phone, or with a visitor — is also heard by whoever has the other device. We probably should turn it off, but then we might forget to turn it back on. Plus, it’s awfully tempting to listen in on deathbed conversations.
Which is how I find myself listening to my parents talk, for the first time in a long time, about life, death, and marriage. She doesn’t like going down to the bottom floor (she says it’s hard on her legs, plus it’s too musty, and a little sad), but now that he can’t come upstairs, she has no choice.
“How will you fare after I’m gone?” he asks my mother.
They are not a terribly affectionate couple, not in the last few decades. She tends to be irritable, he can get defensive. She likes cruises and entertainment news on TV, he likes to read and write and think deeply about his profession. They have separate bank accounts. But they are still quite attached to each other.
“Well, I’ve gotten used to you being gone, in a way,” she says. “For the last 20 years, you’ve been working on your book. I’ve had to find other things to do.”
“That must have been frustrating.”
“Yes, it was.”
Or:
“I feel sort of guilty, but I’ve booked a cruise,” my mom says. “For September.”
“Why would you feel guilty?”
“Because I’m assuming I won’t need to be at home anymore. It just feels like I’m counting on you being gone.”
“Well, that’s a pretty safe bet. You shouldn’t feel guilty. I’m glad you’re going.”
Then quiet. I finally turn off the monitor. (...)
For the past year, my teenage son has taken one or two items of his grandfather’s clothing home every time he visits. It’s weird to see my boy wearing a track suit or Hawaiian shirt that Dad spent so many years shuffling around in. My mother gets frustrated — “I bought those for Rex, and he hardly has any clothes left” — but Dad doesn’t mind; he loves Sam wearing his clothes.
Dad’s tchotchkes are a bigger challenge to give away. He has awful taste in souvenirs. There’s an oversized green wine glass that says “Sexy Bitch.” I once asked why he had it in his room. “Because I couldn’t think of anyone to give it to.”
Then there’s his “treasure drawer.” In it, a quick-acting corkscrew, never opened. A prickly rubber ball that lights up when it bounces. An oak toilet paper holder. A shell necklace he bought in a cruise ship gift shop. A beeswax candle. He wants to make sure no one fights over his stuff. I assure him that will not be a problem. (But I want the corkscrew.)
He wants me to find something that my daughter might like. “We had some lovely conversations on her last visit,” he says. “I feel like I really got to know the young woman she’s going to become.” I pick up a couple of hand-sized metallic exercise balls. I’m not sure she’ll know what to do with them.
He also warns me, somewhat sheepishly, that there’s a box in the closet of, let’s say, “erotic” literature.
“What do you think Goodwill does with that sort of thing?” he says.
We will not be donating that box to Goodwill.
by Karen Brown, Longreads | Read more:
Image: Karen Brown
New Zealand’s War on Rats Could Change the World
Until the 13th century, the only land mammals in New Zealand were bats. In this furless world, local birds evolved a docile temperament. Many of them, like the iconic kiwi and the giant kakapo parrot, lost their powers of flight. Gentle and grounded, they were easy prey for the rats, dogs, cats, stoats, weasels, and possums that were later introduced by humans. Between them, these predators devour more than 26 million chicks and eggs every year. They have already driven a quarter of the nation’s unique birds to extinction.
Many species now persist only in offshore islands where rats and their ilk have been successfully eradicated, or in small mainland sites like Zealandia where they are encircled by predator-proof fences. The songs in those sanctuaries are echoes of the New Zealand that was.
But perhaps, they also represent the New Zealand that could be.
In recent years, many of the country’s conservationists and residents have rallied behind Predator-Free 2050, an extraordinarily ambitious plan to save the country’s birds by eradicating its invasive predators. Native birds of prey will be unharmed, but Predator-Free 2050’s research strategy, which is released today, spells doom for rats, possums, and stoats (a large weasel). They are to die, every last one of them. No country, anywhere in the world, has managed such a task in an area that big. The largest island ever cleared of rats, Australia’s Macquarie Island, is just 50 square miles in size. New Zealand is 2,000 times bigger. But, the country has committed to fulfilling its ecological moonshot within three decades.
Beginning as a grassroots movement, Predator-Free 2050 has picked up huge public support and official government backing. Former Minister for Conservation Maggie Barry once described the initiative as “the most important conservation project in the history of our country.” If it works, Zealandia’s fence would be irrelevant; the entire nation would be a song-filled sanctuary where kiwis trundle unthreatened and kakapos once again boom through the night.
By coincidence, the rise of the Predator-Free 2050 conceit took place alongside the birth of a tool that could help make it a reality—CRISPR, the revolutionary technique that allows scientists to edit genes with precision and ease. In its raw power, some conservationists see a way of achieving impossible-sounding feats like exterminating an island’s rats by spreading genes through the wild population that make it difficult for the animals to reproduce. Think Children of Men, but for rats. Other scientists, including at least one gene-editing pioneer, see the potential for ecological catastrophe, beginning in an island nation with good intentions but eventually enveloping the globe. (...)
In 2014, Kevin Esvelt, a biologist at MIT, drew a Venn diagram that troubles him to this day. In it, he and his colleagues laid out several possible uses for gene drives—a nascent technology for spreading designer genes through groups of wild animals. Typically, a given gene has a 50-50 chance of being passed to the next generation. But gene drives turn that coin toss into a guarantee, allowing traits to zoom through populations in just a few generations. There are a few natural examples, but with CRISPR, scientists can deliberately engineer such drives.
Suppose you have a population of rats, roughly half of which are brown, and the other half white. Now, imagine there is a gene that affects each rat's color. It comes in two forms, one leading to brown fur, and the other leading to white fur. A male with two brown copies mates with a female with two white copies, and all their offspring inherit one of each. Those offspring breed themselves, and the brown and white genes continue cascading through the generations in a 50-50 split. This is the usual story of inheritance. But you can subvert it with CRISPR, by programming the brown gene to cut its counterpart and replace it with another copy of itself. Now, the rats’ children are all brown-furred, as are their grandchildren, and soon the whole population is brown.
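[ed. To make the inheritance arithmetic above concrete, here is a minimal toy simulation (my own sketch, not Esvelt’s model or anything from the article) contrasting ordinary Mendelian inheritance with a drive that copies itself over the normal allele in heterozygotes. The population size, starting frequency, and assumption of perfect conversion are all invented for illustration.]

import random

# Toy model (illustrative assumptions only): each offspring draws two alleles
# at random from the current population allele frequency (random mating).
# With a gene drive, any heterozygote is converted to carry two drive copies,
# so the drive allele spreads toward fixation instead of hovering near its
# starting frequency.

def allele_frequency_over_time(generations=10, pop_size=1000,
                               start_freq=0.05, drive=False):
    freq = start_freq
    history = [round(freq, 3)]
    for _ in range(generations):
        drive_alleles = 0
        for _ in range(pop_size):
            a = random.random() < freq  # allele inherited from parent 1
            b = random.random() < freq  # allele inherited from parent 2
            if drive and a != b:
                a = b = True            # drive copies itself over the other allele
            drive_alleles += a + b
        freq = drive_alleles / (2 * pop_size)
        history.append(round(freq, 3))
    return history

print("Mendelian :", allele_frequency_over_time(drive=False))
print("Gene drive:", allele_frequency_over_time(drive=True))

Run as written, the ordinary allele just drifts around its starting frequency, while the drive allele climbs toward fixation within about ten generations, which is both the promise and the danger described in the paragraphs that follow.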
Forget fur. The same technique could spread an antimalarial gene through a mosquito population, or drought-resistance through crop plants. The applications are vast, but so are the risks. In theory, gene drives spread so quickly and relentlessly that they could rewrite an entire wild population, and once released, they would be hard to contain. If the concept of modifying the genes of organisms is already distasteful to some, gene drives magnify that distaste across national, continental, and perhaps even global scales.
Esvelt understood that from the beginning. In an early paper discussing gene drives, he and his colleagues discussed the risks, and suggested several safeguards. But they also included a pretty Venn diagram that outlined several possible applications, including using gene drives to control invasive species—like rats. That was exactly the kind of innovation that New Zealand was after. You could spread a gene that messes with the rodent’s fertility, or that biases them toward one sex or the other. Without need for poisons or traps, their population would eventually crash.
Please don’t do it, says Esvelt. “It was profoundly wrong of me to even suggest it, because I badly misled many conservationists who are desperately in need of hope. It was an embarrassing mistake.”
Through mathematical simulations conducted with colleagues at Harvard, he has now shown that gene drives are even more invasive than he expected. Even the weakest CRISPR-based gene drives would thoroughly invade wild populations, if just a few carriers were released. They’re so powerful that Esvelt says they shouldn’t be tested on a small scale. If conservationists tried to eliminate rats on a remote island using gene drives, it would only take a few strongly swimming rodents to spread the drive to the mainland—and beyond. “You cannot simply sequester them and wall them off from the wider world,” Esvelt says. They’ll eventually spread throughout the full range of the species they target. And if that species is the brown rat, you’re talking about the entire planet.
Together with Neil Gemmell from the University of Otago, who is advising Predator-Free 2050, Esvelt has written an opinion piece explicitly asking conservationists to steer clear of standard gene drives. “We want to really drive home—ha ha—that this is a technology that isn’t suitable for the vast majority of potential applications that people imagine for it,” he says. (The only possible exceptions, he says, are eliminating certain diseases like malaria and schistosomiasis, which affect hundreds of millions of lives and have proven hard to control.)
It’s not ready yet, either. Even if gene drives were given a green light today, Gemmell says it would take at least 2 to 3 years to develop carrier animals, another 2 years to test those individuals in a lab, and several years more to set up a small field trial. And these technical hurdles pale in comparison to the political ones. Rats are vermin to many cultures, but they’re also holy to some, and they’re likely to be crucial parts of many ecosystems around the world. Eradicating them is not something that any single nation could do unilaterally. It would have to be a global decision—and that’s unlikely. Consider how much effort it has taken to reach international agreements about climate change—another crisis in which the actions of certain nations have disproportionately reshaped the ecosystems of the entire world. Genetic tools have now become so powerful that they could trigger similar changes, but faster and perhaps more irreversibly.
“In a global society, we can’t act in isolation,” says Gemmell. “Some of these tools we’re thinking about developing will cross international borders. New Zealand is an island nation relatively isolated from everyone else, but what if this was a conversation happening in the United States about eradicating rodents? What if Canadians and Mexicans had a different view? This is something that should be addressed.”
by Ed Yong, The Atlantic | Read more:
Image: Stas Kulesh/Getty Images
[ed. See also: Mail Order CRISPR Kits Allow Absolutely Anyone to Hack DNA.]
Labels:
Animals,
Biology,
Environment,
Science,
Technology
Thursday, November 23, 2017
The Great American Sex Panic of 2017
I confess to being troubled rather than elated by the daily rumble of idols falling to accusations of “sexual misconduct,” the morbid masscult fixation that conceals private titillation, knowing smirks, and sadistic lip-smacking behind a public mask of solemn reproof.
Weinstein and Trump and Roy Moore and Bill Clinton are vile pigs and creeps, no doubt; I have always detested the smug neoliberal performance-art strut of Al Franken and the careerist-toady journalism of Glenn Thrush and Charlie Rose, the latest dominoes to tumble amid the barrage of public accusations of “inappropriate” advances or touching.
But the boundary between cultural tolerance/intolerance blurs and shifts with each passing revelation, as the litany of sins, ancient or recent, cardinal or venial, snowballs into an avalanche of aggrieved, undifferentiated accusation—a stampeding herd of “Me-Tooists.” Successive waves of long-forgotten gropes and slurps now overwhelm the news channel chyrons, leaving us with the sense that no greater crime against humanity is possible than an unsolicited horndog lunge of the hand or tongue, some of them from twenty or thirty years past but divulged only in the past few weeks.
Let’s be honest—these “shocking” revelations about Franken—that he tried to tongue-kiss a woman one time in a rehearsal and mock-grabbed her somnolent breasts in a silly frat-house pose or that maybe his hand strayed too far toward a woman’s derriere as he obliged her with a photo at a state fair five years ago—would have elicited nothing more than a public yawn just a few weeks or months ago in the BW (Before Weinstein) era; in fact, these two women, seemingly unperturbed enough to leave these incidents unreported for five or six years, would likely not have thought to join the solemn procession of the violated on national TV if not for the stampede effect of each successive cri de coeur.
But is it an advance in collective ethical consciousness when the public reservoir of shock and indignation is so easily churned up and tapped out over erotic peccadillos? And here I must of course distinguish between outright rape—always a viscerally sickening crime against human dignity— or implied or explicit threats to a woman worker’s livelihood over sexual “favors” on the one hand, and on the other the impetuous volcanic eruptions of erotic passion that inevitably leave one or both partners discomfited or embarrassed or forlorn by unexpected or unwelcome overtures, tactile or verbal. As the left blogger Michael J. Smith points out, “Not all acts are equally grave—an off-color joke is not as bad as a grope, and a grope is not as bad as a rape.” Then what interest of sanity or reason is served by this reckless lumping together of flicks of the tongue and forcible rapes into the single broad-brush term “sexual misconduct,” as though there is no important difference between an oafish pat or crude remark at an office party and a gang rape? This would be like applying the term “communist” alike to advocates of single payer healthcare and campaigners for one-party centralized control of the entire economy—oh wait, we have seen precisely that: during the McCarthy era. Now then . . . is all this beginning to have a familiar ring to it?
And not merely deeds but words have fallen under scrutiny: on Sunday Jeffrey Tambor joined the ranks of the accused, walking the plank by quitting his acclaimed Amazon series Transparent in the wake of two allegations of the use of “lewd” language in front of his assistant and a fellow actor. So the stain of ostracism has now spread from conduct to mere speech.
Alarmingly, the Pecksniffian word lewd has enjoyed a recent rehabilitation among the corporate-media “news” networks, cogs in giant infotainment conglomerates whose cash flow depends precisely on mass dissemination of HD depictions of explicit sexual “lewdness” and violence that their news departments then deplore when evidenced in real life. “Lewd” enjoyed a boomlet during the presidential campaign when the pro-Clinton newsies and talking-head strategists were professing daily bouts of horror at the revelations of the Donald’s coarse frat-boy talk on Access Hollywood. This seems to have been the first time this word had gained any traction since seventeenth-century Salem and Victorian England. This battalion of elite lewdness police are the same Ivy League graduates who in college probably considered Henry Miller a genius, not in spite of, but because of, his portrayal of raw lust in language that makes Trump’s private palaver or Tambor’s japes seem tepid and repressed by comparison. (It’s not impossible that some of these same people consider Quentin Tarantino, cinematic maestro of the vile obscenities of language and violence, a great auteur as well.) The whole spectacle is at once comical and nauseating. (...)
Something surpassingly strange is at work here—a wrong-headed authoritarian ire over the spasmodic misfires of the human comedy combined with some primal meltdown of a besieged and increasingly desperate ruling class and its longstanding winking sexual hypocrisies. It is a moral panic that is, ironically, immoral at its core: repressive and diversionary, an identity-politics orgy of misdirected moral energies that breeds a chilling conformity of word and deed and, in so doing, cripples the critical faculties and independence of spirit needed to challenge the status quo the PC monitors profess to abhor. In reality, their speech and conduct codes foster a spirit of regimentation rather than rebellion, thereby shoring up the power of the repressive elites that are leading the human race to social, economic, and ecological disaster.
[ed. Happy Thanksgiving everybody... knock yourselves out. See also: The Ancestral Burden of Being a Detroit Lions Fan.]
Robert Sapolsky On Depression
[ed. Excellent. I've watched this a couple times now and especially intrigued by the process whereby recurrent shocks to a person's system can apparently set up a cyclic and increasingly diminished ability to overcome this affliction (45:55 - 50:20).]
Is the Stupidity of Our Age Unique?
Broadly speaking, there are two popular views of human history. One view is that our ancestors were ignorant, fearful, and credulous. Then we discovered science, and medicine, and birth control, and since then, our society has gradually become more humane. The other view is that the human race used to be dignified, spiritually enlightened, and fully-integrated within our communities and our natural environment. Increasingly, however, in the unnatural pressure-cooker of modernization, we are all becoming more and more depressed and selfish. Among academic historians, these two views are known as the “Everything Is Fantastic Nowadays!” and the “Everything is Garbage Nowadays!” schools of thought, respectively.
This worldview split does not divvy up along clear political lines. In a gathering of miscellaneous lefties, if you were to expound on the virtues of a kinder, simpler, pre-industrial past, it’s a toss-up whether you’d be casually ID’d as a cooperative agrarian socialist or denounced as a crypto-fascist. Likewise, if you were to make an impassioned plea for the importance of scientific knowledge and its ability to solve certain kinds of human problems, you might be hailed as a free-thinker, or you might be written off as a liberal technocrat. It’s all very confusing.
Current Affairs is not here to adjudicate whether the past was good or bad. In our view, the past had many things going for it. There were more trees and animals, the buildings were more attractive, old people weren’t put into containment silos, and everybody got more exercise (albeit often via war). At the same time, of course, the past was a fucking nightmare. Lots of people died from infection, childbirth, or literally pooping themselves to death. Murder was much more readily accepted as a reasonable form of dispute resolution. The weak were trampled upon in proportionally greater numbers. People had to farm all the damn time, regardless of whether they enjoyed manual labor, and even if they were scared of earthworms.
One thing we can say with reasonable confidence, however, is that while the ways human beings have shaped their environments have changed over time, human beings themselves, with minor variations, have always been just the same. We are the blundering, dyspeptic, misbegotten wretches that our forefathers were. The décor changes, but the humans remain the humans.
This realization should frighten us, of course, because the history of civilization is largely the history of organized atrocities. But in another sense, it should also encourage us. There is a prevailing, pessimistic view—held by impatient, forward-looking futurists and nostalgic traditionalists alike—that the people of the 21st century are uncommonly stupid. If we don’t keep the wheels of scientific and educational progress rolling—or, alternatively, if we don’t return to the golden age when People Knew How To Think—the human race is doomed, they say. Here comes the “idiocracy”: we are drifting toward an eternity of pudgy torpor, distracted by useless plastic whirligigs and reality television. It is the Age of the Fidget Spinner. Our brains have turned to soft cheese, our culture is decadent and superficial.
This, as it turns out, is all nonsense. There seems to be a vague notion that people used to sit around making star charts and reading edifying books until somebody invented video games and reality television. But the truth is that people of all classes and educational levels have always been highly susceptible to bullshit. They have always enjoyed stupid pastimes and spent money on useless items. They have always talked trash about each other, and taken delight in one another’s misfortunes. They have always been celebrity-obsessed. They have always sought unsavory outlets for their sexual and violent fantasies. Is any of this laudable? Not especially. But is any of it new? Not at all. Nor does our history suggest that these facts of human nature are ever likely to change. The most we can do is just continue to muddle through, and try, day to day, to be the least ghastly versions of ourselves we can.
Ah, but you don’t believe us! Then let us take a peek into the historical offal bucket and see what our predecessors were up to.
by Brianna Rennix and Nathan J. Robinson, Current Affairs | Read more:
Image: uncredited
Gobo
Before the internet, we relied on newspapers and broadcasters to filter much of our information, choosing curators based on their styles, reputations and biases – did you want a Wall Street Journal or New York Times view of the world? Fox News or NPR? The rise of powerful search engines made it possible to filter information based on our own interests – if you’re interested in sumo wrestling, you can learn whatever Google will show you, even if professional curators don’t see the sport as a priority.
Social media has presented a new problem for filters. The theory behind social media is that we want to pay attention to what our friends and family think is important. In practice, paying attention to everything 500 or 1500 friends are interested in is overwhelming – Robin Dunbar theorizes that we have a hard limit on the number of relationships we can cognitively maintain. Twitter solves this problem with a social hack: it’s okay to miss posts on your feed because so many are flowing by… though Twitter now tries to catch you up on important posts if you had the temerity to step away from the service for a few hours.
Facebook and other social media platforms solve the problem a different way: the algorithm. Facebook’s news feed usually differs sharply from a list of the most recent items posted by your friends and pages you follow – instead, it’s been personalized using thousands of factors, meaning you’ll see posts Facebook thinks you’ll want to see from hours or days ago, while you’ll miss some recent posts the algorithm thinks won’t interest you. Research from the labs of Christian Sandvig and Karrie Karahalios suggests that even heavy Facebook users aren’t aware that algorithms shape their use of the service, and that many have experienced anxiety about not receiving responses to posts the algorithm suppressed.
Many of the anxieties about Facebook and other social platforms are really anxieties about filtering. The filter bubble, posited by Eli Pariser, is the idea that our natural tendencies towards homophily get amplified by filters designed to give us what we want, not ideas that challenge us, leading to ideological isolation and polarization. Fake news designed to mislead audiences and garner ad views relies on the fact that Facebook’s algorithms have a difficult time determining whether information is true or not, but can easily see whether information is new and popular, sharing information that’s received strong reactions from previous audiences. When Congress demands action on fake news and Kremlin propaganda, they’re requesting another form of filtering, based on who’s creating content and on whether it’s factually accurate. (...)
Algorithmic filters optimize platforms for user retention and engagement, keeping our eyes firmly on the site so that our attention can be sold to advertisers. We thought it was time that we all had a tool that let us filter social media the ways we choose. What if we could choose to challenge ourselves one day, encountering perspectives from outside our normal orbits, and relax another day, filtering for what’s funniest and most viral? So we built Gobo.
What’s Gobo?
Gobo is a social media aggregator with filters you control. You can use Gobo to control what’s edited out of your feed, or configure it to include news and points of view from outside your usual orbit. Gobo aims to be completely transparent, showing you why each post was included in your feed and inviting you to explore what was filtered out by your current filter settings. (...)
How does it work?
Gobo retrieves posts from people you follow on Twitter and Facebook and analyzes them using simple machine learning-based filters. You can set those filters – seriousness, rudeness, virality, gender and brands – to eliminate some posts from your feed. The “politics” slider works differently, “filtering in”, instead of “filtering out” – if you set the slider towards “lots of perspectives”, our “news echo” algorithm will start adding in posts from media outlets that you likely don’t read every day.
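[ed. To make the slider mechanics concrete, here is a minimal sketch of how this kind of filtering could work, assuming each post already carries per-category classifier scores between 0 and 1. The post structure, scores, thresholds and function names below are hypothetical illustrations, not Gobo's actual code.]

```python
# Hypothetical sketch of slider-based feed filtering; not Gobo's actual implementation.
# Assumes each post already carries classifier scores in [0, 1] for categories
# such as rudeness or virality.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Post:
    text: str
    source: str
    scores: Dict[str, float] = field(default_factory=dict)  # e.g. {"rudeness": 0.8}

def filter_out(posts: List[Post], sliders: Dict[str, float]) -> List[Post]:
    """Drop any post whose score exceeds the user's threshold in any category."""
    return [
        p for p in posts
        if all(p.scores.get(cat, 0.0) <= limit for cat, limit in sliders.items())
    ]

def filter_in(posts: List[Post], outside_posts: List[Post], perspectives: float) -> List[Post]:
    """The 'politics' slider works the other way: mix in outside-perspective posts
    in proportion to how far the slider is pushed toward 'lots of perspectives'."""
    n_extra = round(perspectives * len(outside_posts))
    return posts + outside_posts[:n_extra]

# Example with made-up posts and slider settings.
feed = [
    Post("Great game last night!", "friend", {"rudeness": 0.1, "virality": 0.7}),
    Post("You are all idiots.", "stranger", {"rudeness": 0.9, "virality": 0.2}),
]
outside = [Post("Op-ed from an outlet you don't follow", "news_echo", {"rudeness": 0.2})]

visible = filter_out(feed, sliders={"rudeness": 0.5, "virality": 1.0})
visible = filter_in(visible, outside, perspectives=0.8)
for p in visible:
    print(p.source, "-", p.text)
```

The point of the sketch is the transparency Gobo advertises: every post either clears every slider or is dropped for a reason you can inspect.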
by Ethan Zuckerman, My Heart's in Accra | Read more:
Image: Gobo
Wednesday, November 22, 2017
Tuesday, November 21, 2017
What Do We Do with the Art of Monstrous Men?
Roman Polanski, Woody Allen, Bill Cosby, William Burroughs, Richard Wagner, Sid Vicious, V. S. Naipaul, John Galliano, Norman Mailer, Ezra Pound, Caravaggio, Floyd Mayweather, though if we start listing athletes we’ll never stop. And what about the women? The list immediately becomes much more difficult and tentative: Anne Sexton? Joan Crawford? Sylvia Plath? Does self-harm count? Okay, well, it’s back to the men I guess: Pablo Picasso, Max Ernst, Lead Belly, Miles Davis, Phil Spector.
They did or said something awful, and made something great. The awful thing disrupts the great work; we can’t watch or listen to or read the great work without remembering the awful thing. Flooded with knowledge of the maker’s monstrousness, we turn away, overcome by disgust. Or … we don’t. We continue watching, separating or trying to separate the artist from the art. Either way: disruption. They are monster geniuses, and I don’t know what to do about them.
We’ve all been thinking about monsters in the Trump era. For me, it began a few years ago. I was researching Roman Polanski for a book I was writing and found myself awed by his monstrousness. It was monumental, like the Grand Canyon. And yet. When I watched his movies, their beauty was another kind of monument, impervious to my knowledge of his iniquities. I had exhaustively read about his rape of thirteen-year-old Samantha Gailey; I feel sure no detail on record remained unfamiliar to me. Despite this knowledge, I was still able to consume his work. Eager to. The more I researched Polanski, the more I became drawn to his films, and I watched them again and again—especially the major ones: Repulsion, Rosemary’s Baby, Chinatown. Like all works of genius, they invited repetition. I ate them. They became part of me, the way something loved does.
I wasn’t supposed to love this work, or this man. He’s the object of boycotts and lawsuits and outrage. In the public’s mind, man and work seem to be the same thing. But are they? Ought we try to separate the art from the artist, the maker from the made? Do we undergo a willful forgetting when we want to listen to, say, Wagner’s Ring cycle? (Forgetting is easier for some than others; Wagner’s work has rarely been performed in Israel.) Or do we believe genius gets special dispensation, a behavioral hall pass?
And how does our answer change from situation to situation? Certain pieces of art seem to have been rendered unconsumable by their maker’s transgressions—how can one watch The Cosby Show after the rape allegations against Bill Cosby? I mean, obviously it’s technically doable, but are we even watching the show? Or are we taking in the spectacle of our own lost innocence?
And is it simply a matter of pragmatics? Do we withhold our support if the person is alive and therefore might benefit financially from our consumption of their work? Do we vote with our wallets? If so, is it okay to stream, say, a Roman Polanski movie for free? Can we, um, watch it at a friend’s house?
*
But hold up for a minute: Who is this “we” that’s always turning up in critical writing anyway? We is an escape hatch. We is cheap. We is a way of simultaneously sloughing off personal responsibility and taking on the mantle of easy authority. It’s the voice of the middle-brow male critic, the one who truly believes he knows how everyone else should think. We is corrupt. We is make-believe. The real question is this: can I love the art but hate the artist? Can you? When I say we, I mean I. I mean you.
*
I know Polanski is worse, whatever that means, and Cosby is more current. But for me the ur-monster is Woody Allen.
The men want to know why Woody Allen makes us so mad. Woody Allen slept with Soon-Yi Previn, the child of his life partner Mia Farrow. Soon-Yi was a teenager in his care the first time they slept together, and he the most famous film director in the world.
I took the fucking of Soon-Yi as a terrible betrayal of me personally. When I was young, I felt like Woody Allen. I intuited or believed he represented me on-screen. He was me. This is one of the peculiar aspects of his genius—this ability to stand in for the audience. The identification was exacerbated by the seeming powerlessness of his usual on-screen persona: skinny as a kid, short as a kid, confused by an uncaring, incomprehensible world. (Like Chaplin before him.) I felt closer to him than seems reasonable for a little girl to feel about a grown-up male filmmaker. In some mad way, I felt he belonged to me. I had always seen him as one of us, the powerless. Post-Soon-Yi, I saw him as a predator.
My response wasn’t logical; it was emotional.
*
One rainy afternoon, in the spring of 2017, I flopped down on the living-room couch and committed an act of transgression. No, not that one. What I did was, I on-demanded Annie Hall. It was easy. I just clicked the OK button on my massive universal remote and then rummaged around in a bag of cookies while the opening credits rolled. As acts of transgression go, it was pretty undramatic.
I had watched the movie at least a dozen times before, but even so, it charmed me all over again. Annie Hall is a jeu d’esprit, an Astaire soft shoe, a helium balloon straining at its ribbon. It’s a love story for people who don’t believe in love: Annie and Alvy come together, pull apart, come together, and then break up for good. Their relationship was pointless all along, and entirely worthwhile. Annie’s refrain of “la di da” is the governing spirit of the enterprise, the collection of nonsense syllables that give joyous expression to Allen’s dime-store existentialism. “La di da” means, Nothing matters. It means, Let’s have fun while we crash and burn. It means, Our hearts are going to break, isn’t it a lark?
Annie Hall is the greatest comic film of the twentieth century—better than Bringing Up Baby, better even than Caddyshack—because it acknowledges the irrepressible nihilism that lurks at the center of all comedy. Also, it’s really funny. To watch Annie Hall is to feel, for just a moment, that one belongs to humanity. Watching, you feel almost mugged by that sense of belonging. That fabricated connection can be more beautiful than love itself. And that’s what we call great art. In case you were wondering.
Look, I don’t get to go around feeling connected to humanity all the time. It’s a rare pleasure. And I’m supposed to give it up just because Woody Allen misbehaved? It hardly seems fair.
*
When I mentioned in passing I was writing about Allen, my friend Sara reported that she’d seen a Little Free Library in her neighborhood absolutely crammed to its tiny rafters with books by and about Allen. It made us both laugh—the mental image of some furious, probably female, fan who just couldn’t bear the sight of those books any longer and stuffed them all in the cute little house.
Then Sara grew wistful: “I don’t know where to put all my feelings about Woody Allen,” she said. Well, exactly.
*
I told another smart friend that I was writing about Woody Allen. “I have very many thoughts about Woody Allen!” she said, all excited to share. We were drinking wine on her porch and she settled in, the late afternoon light illuminating her face. “I’m so mad at him! I was already pissed at him over the Soon-Yi thing, and then came the—what’s the kid’s name—Dylan? Then came the Dylan allegations, and the horrible dismissive statements he made about that. And I hate the way he talks about Soon-Yi, always going on about how he’s enriched her life.”
This, I think, is what happens to so many of us when we consider the work of the monster geniuses—we tell ourselves we’re having ethical thoughts when really what we’re having is moral feelings. We put words around these feelings and call them opinions: “What Woody Allen did was very wrong.” And feelings come from someplace more elemental than thought. The fact was this: I felt upset by the story of Woody and Soon-Yi. I wasn’t thinking; I was feeling. I was affronted, personally somehow.
*
Here’s how to have some complicated emotions: watch Manhattan.
by Claire Dederer, Paris Review | Read more:
Image: Manhattan
Labels:
Celebrities,
Culture,
Literature,
Movies,
Psychology,
Relationships
Mark Lanegan
[ed. Read the comments. At least the show highlighted former Screaming Trees frontman Mark Lanegan (even if Lanegan left Seattle quite a while ago). See also: Bourdain's field notes: Seattle.]
Erased
From 1951 to 1953, Robert Rauschenberg made a number of artworks that explore the limits and very definition of art. These works recall and effectively extend the notion of the artist as creator of ideas, a concept first broached by Marcel Duchamp (1887–1968) with his iconic readymades of the early twentieth century. With Erased de Kooning Drawing (1953), Rauschenberg set out to discover whether an artwork could be produced entirely through erasure—an act focused on the removal of marks rather than their accumulation.
Rauschenberg first tried erasing his own drawings but ultimately decided that in order for the experiment to succeed he had to begin with an artwork that was undeniably significant in its own right. He approached Willem de Kooning (1904–1997), an artist for whom he had tremendous respect, and asked him for a drawing to erase. Somewhat reluctantly, de Kooning agreed. After Rauschenberg completed the laborious erasure, he and fellow artist Jasper Johns (b. 1930) devised a scheme for labeling, matting, and framing the work, with Johns inscribing the following words below the now-obliterated de Kooning drawing:
ERASED de KOONING DRAWING
ROBERT RAUSCHENBERG
1953
The simple, gilded frame and understated inscription are integral parts of the finished artwork, offering the sole indication of the psychologically loaded act central to its creation. Without the inscription, we would have no idea what is in the frame; the piece would be indecipherable.
via: SFMOMA | Read more:
[ed. This kind of 'art' drives me nuts (which, probably, is partially the intent). Here's another example, with an ironic twist.]
h/t YMFY
The FCC Will Move to Kill Net Neutrality Over Thanksgiving
FCC Chairman Ajit Pai is planning to make good on his promise to kill net neutrality this weekend, under cover of the holidays, ushering in an era in which the largest telecom corporations can extract bribes from the largest internet corporations to block small, innovative and competitive internet services from connecting to you.
Under this regime, a company like Fox News or Google could pay a bribe to a company like Comcast, and in exchange, Comcast would make sure that its subscribers would get slower connections to their rivals. This is billed as "free enterprise."
Net Neutrality was won in America thanks to an improbable uprising of everyday people who finally figured out that this boring, wonky policy issue would directly affect their futures, and the way they work, learn, raise their children, stay in touch with their families, start businesses, participate in the public sphere, stay healthy and elect their leaders. Millions of Americans called, wrote, marched and donated and won over the largest, best-funded corporations in America, beating them and forcing the Obama-era FCC to protect the free, fair, open internet.
Ironically, Trump owes his victory to the neutral internet, where insurgent racists and conspiracy theorists were able to gather and network in support of Trumpism without having to outbid mainstream political rivals. Across Trumpist forums, the brighter among his supporters are aghast that a Trump appointee is about to destroy the factors that made their communities possible.
Ajit Pai is planning to introduce his anti-neutrality fast-track vote over the holiday weekend in the hopes that we'll be too busy eating or shopping to notice.
He's wrong.
Thanksgiving is when students go home for the holidays to fix their internet connections and clean the malware out of their computers. Those students -- awake to the Trumpist threat to their futures -- will spend this weekend explaining to their parents why they need to participate in the fight for a neutral net.
Thanksgiving is when workers stay home from the office, participating in online forums and social media, where they will have raucous conversations about this existential threat -- because a free, fair and open internet isn't more important than climate change or gender discrimination or racism or inequality, but a free, fair and open internet is how we're going to win all those fights.
Sneaking in major policy changes over the holiday weekend is a bad look. People are better at understanding procedural irregularities than they are at understanding substance. It's hard to understand what "net neutrality" means -- but it's easy to understand that Ajit Pai isn't killing it in secret because he wants to make sure you're pleasantly surprised on Monday. (...)
Except this obfuscation plan isn't "devilishly brilliant," it's a massive underestimation of the brutal backlash awaiting the broadband industry and its myopic water carriers. Survey after survey (including those conducted by the cable industry itself) has found net neutrality has broad, bipartisan support. The plan is even unpopular among the traditional Trump trolls over at 4chan /pol/ that spent the last week drinking onion juice. It's a mammoth turd of a proposal, and outside of the color guard at the lead of the telecom industry's sockpuppet parade -- the majority of informed Americans know it.
Net neutrality has been a fifteen year fight to protect the very health of the internet itself from predatory duopolists like Comcast. Killing it isn't something you can hide behind the green bean amandine, and it's not a small scandal you can bury via the late Friday news dump. This effort is, by absolutely any measure, little more than a grotesque hand out to one of the least competitive -- and most disliked -- industries in America. Trying to obfuscate this reality via the holidays doesn't change that. Neither does giving the plan an Orwellian name like "Restoring Internet Freedom."
It's abundantly clear that if the FCC and supporters were truly proud of what they were doing, they wouldn't feel the need to try and hide it. If this was an FCC that actually wanted to have a candid, useful public conversation about rolling back net neutrality, it wouldn't be actively encouraging fraud and abuse of the agency's comment system. To date, the entire proceeding has been little more than a glorified, giant middle finger to the public at large, filled with scandal and misinformation. And the public at large -- across partisan aisles -- is very much aware of that fact.
FCC Plan To Use Thanksgiving To 'Hide' Its Attack On Net Neutrality Vastly Underestimates The Looming Backlash [Karl Bode/Techdirt]
by Cory Doctorow, Boing Boing | Read more:
Image: Bryce Durbin/Techcrunch
[ed. See also: An Open Letter to the FCC (by NY Atty. Gen. Eric Schneiderman)]
Labels:
Business,
Economics,
Government,
Law,
Politics,
Technology
The Republican War on College
There are two big problems with the GOP’s claim that its tax-reform proposals help the middle class.
The first, and most obvious, is that both the House bill, which passed last week, and the Senate bill would raise taxes on tens of millions of middle-class and low-income households by the end of the decade, according to several analyses of the bills.
The second reason is subtler, but perhaps equally significant. To pay for a permanent tax cut on corporations, the plan raises taxes on colleges and college students, which is part of a broader Republican war on higher education in the U.S. This is a big deal, because in the last half-century, the most important long-term driver of wage growth has arguably been college.
The House bill would reduce benefits for higher education by more than $60 billion in the coming decade. It would shock graduate students with sudden tax increases, punish student debtors, and force schools to raise tuition at a time when higher education already feels unaffordable for many students. On balance, the GOP plan would encourage large corporations to invest in new machines in the workplace, while discouraging American workers from investing in themselves.
* * *
The most dramatic attack on higher education in the GOP House bill is its tax on tuition waivers. This may sound arcane to non–graduate students, but it’s a huge deal. Nearly 150,000 graduate students, many of whom represent the future of scientific and academic research, don’t pay tuition in exchange for teaching classes or doing other work at university. The GOP tax plan would treat their unpaid tuition as income. So imagine you are a graduate student earning $30,000 from your school for teaching, while attending a $50,000 program. Under current law, your taxable income is $30,000. Under the GOP tax plan, your taxable income is $80,000. Your tax bill could quadruple. (The provision is not in the latest Senate version of the bill.)
“It’s a huge tax increase on income-constrained students, and the worst part of it is that it would come as a shock to thousands of grad students who aren’t expecting it,” says Kim Rueben, a senior fellow at the Urban Institute. In an essay for The New York Times, Erin Rousseau, a graduate student at MIT who studies mental-health disorders, said she is paid $33,000 for up to 80 hours of weekly work as a teacher and researcher, but she pays nothing for her tuition, which is priced at $51,000. By counting that tuition as income, the GOP plan would raise her tax bill by about $9,000. Graduate school would suddenly become unaffordable for thousands of students who aren’t independently wealthy or being subsidized by rich parents, and many who are currently enrolled would be forced to drop out.
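[ed. The arithmetic behind that claim is easy to sketch. Here is a rough back-of-the-envelope calculation using approximate 2017 single-filer brackets and deductions, purely for illustration; the bills themselves also change rates and deductions, so the real multiple for any given student would differ.]

```python
# Back-of-the-envelope estimate of the tuition-waiver effect (illustrative only).
# Uses approximate 2017 single-filer brackets plus the standard deduction and one
# personal exemption; the House bill itself changes rates and deductions, which
# would moderate the multiple somewhat.

BRACKETS_2017 = [          # (upper bound of bracket, marginal rate)
    (9_325, 0.10),
    (37_950, 0.15),
    (91_900, 0.25),
    (191_650, 0.28),
    (float("inf"), 0.33),  # higher brackets omitted for brevity
]
STANDARD_DEDUCTION = 6_350
PERSONAL_EXEMPTION = 4_050

def income_tax(gross_income: float) -> float:
    """Progressive tax on income after the standard deduction and one exemption."""
    taxable = max(0.0, gross_income - STANDARD_DEDUCTION - PERSONAL_EXEMPTION)
    tax, lower = 0.0, 0.0
    for upper, rate in BRACKETS_2017:
        if taxable > lower:
            tax += (min(taxable, upper) - lower) * rate
            lower = upper
        else:
            break
    return tax

stipend, waived_tuition = 30_000, 50_000
before = income_tax(stipend)                  # waiver untaxed (current law)
after = income_tax(stipend + waived_tuition)  # waiver counted as income (House bill)
print(f"Tax on stipend only:     ${before:,.0f}")
print(f"Tax if waiver is income: ${after:,.0f}  ({after / before:.1f}x)")
```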
But that’s just the start. In fact, modern GOP economic policy amounts to a massive, coordinated, multilevel attack on higher education in the U.S. It starts with the elite colleges. The House bill would apply an excise tax to private universities with endowments worth more than $250,000 per full-time student. The measure is expected to raise about $2.5 billion from approximately 70 topflight universities.
Elite colleges are just the most prestigious sliver of the U.S.’s higher-education system. Despite their outsize role in national research, Harvard and Stanford, with their multibillion-dollar endowments, might not inspire much sympathy. There is a graver, yet subtler, threat to American higher education, according to Rueben. It is a domino set of public policies and their consequences, starting with Republican health-care and tax policy, continuing on to state budgets, and ending with higher tuition for students.
The first link of this causal chain is health-care spending. Republicans are obsessed with cutting federal spending on Medicaid. Doing so, Rueben points out, would result in many states spending much more on health and public welfare to make up the difference. All things equal, that means states have to raise taxes. But the House bill reduces, and the Senate bill eliminates, the deduction for state and local taxes. This would raise effective tax rates on rich residents in high-income states, making it harder for state legislators to raise taxes again, which would in turn create a dilemma for states needing more health-care dollars without higher taxes. One solution: Spend less on higher education, and force colleges to raise tuition. This wouldn’t be a new initiative so much as the continuation of an old one. Over the past 20 years, the most important contributor to higher tuition costs has been declining public spending on colleges, as cash-strapped states shift money to health care. When states pay less, students pay more.
If that chain of causation was too confusing, here’s an executive summary: By pressuring states to spend more on health care while hampering their ability to raise taxes (never an easy thing to begin with), GOP tax and budget policies could deprive public colleges of state funding, which would force American students to pay more.
This would almost certainly lead to a rise in student debt. So it would make sense to make that debt easier to pay off. The House bill does the opposite. It would eliminate a provision that allows low- and middle-income student debtors to deduct up to $2,500 in student-loan interest each year. (...)
For the white middle class, a turn against college is a profound historical irony. The GI Bill was more responsible than almost any other law in fashioning the 20th century’s middle class. Many Trump voters feel left behind, or worry that their children will grow up poorer. It’s extremely unlikely that these families will personally benefit from a large tax cut for General Electric and Apple. What they could use, instead, is some extra money today, plus an education system that prepares their kids for a new career, in a field that isn’t in structural decline.
Designing that sort of policy is totally possible, at least mathematically. For the multitrillion-dollar cost of reducing the corporate income tax from 35 percent to 20 percent, the U.S. could provide universal pre-K education and free tuition at public colleges for nonaffluent students. For far less, it could keep the tax code’s current higher-education benefits, which help millions to get a college degree and find a higher-paying job.
by Derek Thompson, The Atlantic | Read more:
Image: Brian Snyder/Reuters
Can A.I. Be Taught to Explain Itself?
In September, Michal Kosinski published a study that he feared might end his career. The Economist broke the news first, giving it a self-consciously anodyne title: “Advances in A.I. Are Used to Spot Signs of Sexuality.” But the headlines quickly grew more alarmed. By the next day, the Human Rights Campaign and Glaad, formerly known as the Gay and Lesbian Alliance Against Defamation, had labeled Kosinski’s work “dangerous” and “junk science.” (They claimed it had not been peer reviewed, though it had.) In the next week, the tech-news site The Verge had run an article that, while carefully reported, was nonetheless topped with a scorching headline: “The Invention of A.I. ‘Gaydar’ Could Be the Start of Something Much Worse.”
Kosinski has made a career of warning others about the uses and potential abuses of data. Four years ago, he was pursuing a Ph.D. in psychology, hoping to create better tests for signature personality traits like introversion or openness to change. But he and a collaborator soon realized that Facebook might render personality tests superfluous: Instead of asking if someone liked poetry, you could just see if they “liked” Poetry Magazine. In 2014, they published a study showing that if given 200 of a user’s likes, they could predict that person’s personality-test answers better than their own romantic partner could.
After getting his Ph.D., Kosinski landed a teaching position at the Stanford Graduate School of Business and soon started looking for new data sets to investigate. One in particular stood out: faces. For decades, psychologists have been leery about associating personality traits with physical characteristics, because of the lasting taint of phrenology and eugenics; studying faces this way was, in essence, a taboo. But to understand what that taboo might reveal when questioned, Kosinski knew he couldn’t rely on a human judgment.
Kosinski first mined 200,000 publicly posted dating profiles, complete with pictures and information ranging from personality to political views. Then he poured that data into an open-source facial-recognition algorithm — a so-called deep neural network, built by researchers at Oxford University — and asked it to find correlations between people’s faces and the information in their profiles. The algorithm failed to turn up much, until, on a lark, Kosinski turned its attention to sexual orientation. The results almost defied belief. In previous research, the best any human had done at guessing sexual orientation from a profile picture was about 60 percent — slightly better than a coin flip. Given five pictures of a man, the deep neural net could predict his sexuality with as much as 91 percent accuracy. For women, that figure was lower but still remarkable: 83 percent.
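[ed. The general recipe described here (a pretrained face network used as a feature extractor, with a simple classifier trained on top) is easy to sketch. The embedding function and data below are placeholders invented for illustration, not the study's actual pipeline.]

```python
# Minimal sketch of the pipeline described above: a pretrained face network turns each
# photo into a feature vector, and a simple classifier is trained on those features.
# `embed_face` is a stand-in for whatever pretrained network is used; the images and
# labels are random, so the resulting AUC will hover around chance.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

def embed_face(image: np.ndarray) -> np.ndarray:
    """Placeholder: in practice, run the image through a pretrained face network
    and return its penultimate-layer activations."""
    rng = np.random.default_rng(abs(hash(image.tobytes())) % (2**32))
    return rng.normal(size=128)  # fake 128-dim embedding for illustration

# Fake dataset: 200 random "photos" with binary labels, purely to make the sketch run.
images = [np.random.rand(64, 64, 3) for _ in range(200)]
labels = np.random.randint(0, 2, size=200)

X = np.stack([embed_face(img) for img in images])
X_train, X_test, y_train, y_test = train_test_split(X, labels, test_size=0.25, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, clf.predict_proba(X_test)[:, 1]))
```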
Much like his earlier work, Kosinski’s findings raised questions about privacy and the potential for discrimination in the digital age, suggesting scenarios in which better programs and data sets might be able to deduce anything from political leanings to criminality. But there was another question at the heart of Kosinski’s paper, a genuine mystery that went almost ignored amid all the media response: How was the computer doing what it did? What was it seeing that humans could not?
It was Kosinski’s own research, but when he tried to answer that question, he was reduced to a painstaking hunt for clues. At first, he tried covering up or exaggerating parts of faces, trying to see how those changes would affect the machine’s predictions. Results were inconclusive. But Kosinski knew that women, in general, have bigger foreheads, thinner jaws and longer noses than men. So he had the computer spit out the 100 faces it deemed most likely to be gay or straight and averaged the proportions of each. It turned out that the faces of gay men exhibited slightly more “feminine” proportions, on average, and that the converse was true for women. If this was accurate, it could support the idea that testosterone levels — already known to mold facial features — help mold sexuality as well.
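[ed. The "covering up parts of faces" probe is essentially occlusion analysis, a standard way of peeking inside an image classifier. A minimal sketch, assuming you have some scoring function that maps an image to a probability; the dummy classifier below exists only to make it runnable.]

```python
# Minimal occlusion-sensitivity sketch: slide a blank patch over the image and record
# how much the classifier's predicted probability drops. Large drops mark regions the
# model relies on. `predict_fn` is whatever scoring function you have.

import numpy as np

def occlusion_map(image: np.ndarray, predict_fn, patch: int = 16, stride: int = 8) -> np.ndarray:
    """Return a heatmap of probability drops when each region is blanked out."""
    h, w = image.shape[:2]
    baseline = predict_fn(image)
    heat = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            occluded = image.copy()
            occluded[y:y + patch, x:x + patch] = 0.0  # black out this patch
            heat[i, j] = baseline - predict_fn(occluded)
    return heat

# Dummy example: a "classifier" that only looks at the mean brightness of the top half.
def dummy_predict(img: np.ndarray) -> float:
    return float(img[: img.shape[0] // 2].mean())

img = np.random.rand(64, 64, 3)
heat = occlusion_map(img, dummy_predict)
print("Most influential patch (row, col):", np.unravel_index(np.argmax(heat), heat.shape))
```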
But it was impossible to say for sure. Other evidence seemed to suggest that the algorithms might also be picking up on culturally driven traits, like straight men wearing baseball hats more often. Or — crucially — they could have been picking up on elements of the photos that humans don’t even recognize. “Humans might have trouble detecting these tiny footprints that border on the infinitesimal,” Kosinski says. “Computers can do that very easily.”
It has become commonplace to hear that machines, armed with machine learning, can outperform humans at decidedly human tasks, from playing Go to playing “Jeopardy!” We assume that is because computers simply have more data-crunching power than our soggy three-pound brains. Kosinski’s results suggested something stranger: that artificial intelligences often excel by developing whole new ways of seeing, or even thinking, that are inscrutable to us. It’s a more profound version of what’s often called the “black box” problem — the inability to discern exactly what machines are doing when they’re teaching themselves novel skills — and it has become a central concern in artificial-intelligence research. In many arenas, A.I. methods have advanced with startling speed; deep neural networks can now detect certain kinds of cancer as accurately as a human. But human doctors still have to make the decisions — and they won’t trust an A.I. unless it can explain itself.
This isn’t merely a theoretical concern. In 2018, the European Union will begin enforcing a law requiring that any decision made by a machine be readily explainable, on penalty of fines that could cost companies like Google and Facebook billions of dollars. The law was written to be powerful and broad and fails to define what constitutes a satisfying explanation or how exactly those explanations are to be reached. It represents a rare case in which a law has managed to leap into a future that academics and tech companies are just beginning to devote concentrated effort to understanding. As researchers at Oxford dryly noted, the law “could require a complete overhaul of standard and widely used algorithmic techniques” — techniques already permeating our everyday lives.
Those techniques can seem inescapably alien to our own ways of thinking. Instead of certainty and cause, A.I. works off probability and correlation. And yet A.I. must nonetheless conform to the society we’ve built — one in which decisions require explanations, whether in a court of law, in the way a business is run or in the advice our doctors give us. The disconnect between how we make decisions and how machines make them, and the fact that machines are making more and more decisions for us, has birthed a new push for transparency and a field of research called explainable A.I., or X.A.I. Its goal is to make machines able to account for the things they learn, in ways that we can understand. But that goal, of course, raises the fundamental question of whether the world a machine sees can be made to match our own.
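[ed. One common tactic in explainable A.I. is to fit a small, human-readable surrogate model that mimics a black box's predictions and then read the surrogate's rules as an approximate explanation. A minimal sketch, with a stand-in "black box" and invented data:]

```python
# Minimal global-surrogate sketch: train a shallow decision tree to imitate a
# black-box model's predictions, then print the tree's rules as an approximate,
# human-readable explanation. The "black box" below is just a stand-in.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 0.5 * X[:, 2] > 0).astype(int)  # hidden rule the black box will learn

black_box = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Train the surrogate on the black box's *predictions*, not on the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

fidelity = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"Surrogate agrees with black box on {fidelity:.0%} of inputs")
print(export_text(surrogate, feature_names=["f0", "f1", "f2", "f3"]))
```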
[ed. See also: Caught in the Web]
Kosinski has made a career of warning others about the uses and potential abuses of data. Four years ago, he was pursuing a Ph.D. in psychology, hoping to create better tests for signature personality traits like introversion or openness to change. But he and a collaborator soon realized that Facebook might render personality tests superfluous: Instead of asking if someone liked poetry, you could just see if they “liked” Poetry Magazine. In 2014, they published a study showing that if given 200 of a user’s likes, they could predict that person’s personality-test answers better than their own romantic partner could.
After getting his Ph.D., Kosinski landed a teaching position at the Stanford Graduate School of Business and soon started looking for new data sets to investigate. One in particular stood out: faces. For decades, psychologists have been leery about associating personality traits with physical characteristics, because of the lasting taint of phrenology and eugenics; studying faces this way was, in essence, a taboo. But to understand what that taboo might reveal when questioned, Kosinski knew he couldn’t rely on a human judgment.Kosinski first mined 200,000 publicly posted dating profiles, complete with pictures and information ranging from personality to political views. Then he poured that data into an open-source facial-recognition algorithm — a so-called deep neural network, built by researchers at Oxford University — and asked it to find correlations between people’s faces and the information in their profiles. The algorithm failed to turn up much, until, on a lark, Kosinski turned its attention to sexual orientation. The results almost defied belief. In previous research, the best any human had done at guessing sexual orientation from a profile picture was about 60 percent — slightly better than a coin flip. Given five pictures of a man, the deep neural net could predict his sexuality with as much as 91 percent accuracy. For women, that figure was lower but still remarkable: 83 percent.
Much like his earlier work, Kosinski’s findings raised questions about privacy and the potential for discrimination in the digital age, suggesting scenarios in which better programs and data sets might be able to deduce anything from political leanings to criminality. But there was another question at the heart of Kosinski’s paper, a genuine mystery that went almost ignored amid all the media response: How was the computer doing what it did? What was it seeing that humans could not?
It was Kosinski’s own research, but when he tried to answer that question, he was reduced to a painstaking hunt for clues. At first, he tried covering up or exaggerating parts of faces, trying to see how those changes would affect the machine’s predictions. Results were inconclusive. But Kosinski knew that women, in general, have bigger foreheads, thinner jaws and longer noses than men. So he had the computer spit out the 100 faces it deemed most likely to be gay or straight and averaged the proportions of each. It turned out that the faces of gay men exhibited slightly more “feminine” proportions, on average, and that the converse was true for women. If this was accurate, it could support the idea that testosterone levels — already known to mold facial features — help mold sexuality as well.
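[ed. The cover-up-parts-of-the-face probing Kosinski did by hand has an automated cousin, often called occlusion sensitivity: slide a blank patch across the image and record how much the model’s output changes at each position. A hypothetical, self-contained sketch, with a dummy scoring function standing in for the trained network:]

```python
import numpy as np

def model(img):
    # Stand-in scorer so the sketch runs; a real probe would call the
    # trained network's predicted probability here.
    return float(img[40:60, 40:60].mean())

def occlusion_map(img, patch=16, fill=0.0):
    """Hide one square region at a time and record how much the model's
    score drops; big drops mark regions the prediction depends on."""
    h, w = img.shape
    base = model(img)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            occluded = img.copy()
            occluded[i:i + patch, j:j + patch] = fill
            heat[i // patch, j // patch] = base - model(occluded)
    return heat

img = np.random.default_rng(0).random((128, 128))
print(np.round(occlusion_map(img), 3))
```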
But it was impossible to say for sure. Other evidence seemed to suggest that the algorithms might also be picking up on culturally driven traits, like straight men wearing baseball hats more often. Or — crucially — they could have been picking up on elements of the photos that humans don’t even recognize. “Humans might have trouble detecting these tiny footprints that border on the infinitesimal,” Kosinski says. “Computers can do that very easily.”
It has become commonplace to hear that machines, armed with machine learning, can outperform humans at decidedly human tasks, from playing Go to playing “Jeopardy!” We assume that is because computers simply have more data-crunching power than our soggy three-pound brains. Kosinski’s results suggested something stranger: that artificial intelligences often excel by developing whole new ways of seeing, or even thinking, that are inscrutable to us. It’s a more profound version of what’s often called the “black box” problem — the inability to discern exactly what machines are doing when they’re teaching themselves novel skills — and it has become a central concern in artificial-intelligence research. In many arenas, A.I. methods have advanced with startling speed; deep neural networks can now detect certain kinds of cancer as accurately as a human. But human doctors still have to make the decisions — and they won’t trust an A.I. unless it can explain itself.
This isn’t merely a theoretical concern. In 2018, the European Union will begin enforcing a law requiring that any decision made by a machine be readily explainable, on penalty of fines that could cost companies like Google and Facebook billions of dollars. The law was written to be powerful and broad, yet it fails to define what constitutes a satisfying explanation or how exactly those explanations are to be reached. It represents a rare case in which a law has managed to leap into a future that academics and tech companies are just beginning to devote concentrated effort to understanding. As researchers at Oxford dryly noted, the law “could require a complete overhaul of standard and widely used algorithmic techniques” — techniques already permeating our everyday lives.
Those techniques can seem inescapably alien to our own ways of thinking. Instead of certainty and cause, A.I. works off probability and correlation. And yet A.I. must nonetheless conform to the society we’ve built — one in which decisions require explanations, whether in a court of law, in the way a business is run or in the advice our doctors give us. The disconnect between how we make decisions and how machines make them, and the fact that machines are making more and more decisions for us, has birthed a new push for transparency and a field of research called explainable A.I., or X.A.I. Its goal is to make machines able to account for the things they learn, in ways that we can understand. But that goal, of course, raises the fundamental question of whether the world a machine sees can be made to match our own.
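[ed. One of the simpler tools in the X.A.I. toolbox is a global surrogate: fit an opaque model, then train a small, human-readable model to mimic the opaque model’s own predictions and inspect the rules it learns. A rough sketch on a stock dataset, with all model choices as illustrative assumptions:]

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

# The "black box" we want explained.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)

# A shallow tree trained to imitate the black box's predictions (not the
# true labels) -- its splits are a readable approximation of the model.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

agreement = (surrogate.predict(X) == black_box.predict(X)).mean()
print(f"surrogate matches the black box on {agreement:.0%} of cases")
print(export_text(surrogate, feature_names=[str(f) for f in data.feature_names]))
```

[ed. The agreement score matters: a surrogate only explains the black box to the extent that it actually reproduces its decisions.]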
by Cliff Kuang, NY Times | Read more:
Image: Derek Brahney. Source photo: J.R. Eyerman/The Life Picture Collection/Getty
[ed. See also: Caught in the Web]