Monday, January 22, 2018

Meat and the H-Word

We all know, or at least we can all figure out with a moment’s honest reflection, that our dominant attitudes on animals are inconsistent. Someone can be incredibly disturbed by the notion of eating their puppy, but happily consume bacon every other morning, and the cognitive dissonance between the two positions never seems to cause any bother. If we’re being serious, though, we know that many sows are smarter than chihuahuas, and that all of the traits that cause us to love our pets are just as present in the animals we regularly devour the murdered corpses of. (I am sorry, that was a somewhat extreme way of putting it.) This is a commonplace observation, but in a way that’s what makes it so strange: it’s obvious that we have no rational reason to think some animals are friends and others are food. The only differences are tradition and the strength of the relationships we happen to have developed with the friend-animals, but that’s no more a justification of the distinction than it would be to say “I only eat people who aren’t my friends.” Even though nobody can justify it, though, it continues. People solve the question “Why do you treat some animals as if they have personalities but other equally sophisticated animals as if they are inanimate lumps of flavor and calories?” by simply pretending the question hasn’t been asked, or by making some remark like “Well, if pigs would quit making themselves taste so good, I could quit eating them.”

The truth is disturbing, which is why it’s so easily ignored. I’m sure I don’t have to remind you of all the remarkable facts about pigs. First, the stereotypes are false: they are clean animals and don’t sweat, and they don’t “pig out” but prefer to eat slowly and methodically. They are, as Glenn Greenwald puts it, “among the planet’s most intelligent, social, and emotionally complicated species, capable of great joy, play, love, connection, suffering and pain.” They can be housebroken, and can be trained to walk on a leash and do tricks. They dream, they play, they snuggle. They can roll out rugs, play videogames, and herd sheep. They love sunbathing and belly rubs. But don’t take my word for it—listen to the testimony of this man who accidentally adopted a 500-pound pig:

She’s unlike any animal I’ve met. Her intelligence is unbelievable. She’s house trained and even opens the back door with her snout to let herself out to pee. Her food is mainly kibble, plus fruit and vegetables. Her favourite treat is a cupcake. She’s bathed regularly and pigs don’t sweat, so she doesn’t smell. If you look a pig closely in the eyes, it’s startling; there’s something so inexplicably human. When you’re lying next to her and talking, you know she understands. It was emotional realising she was a commercial pig. The more we discovered about what her life could have been, it seemed crazy to us that we ate animals, so we stopped.

I want to note something that often passes by too quickly, which is that the sentience of animals like pigs and cows is almost impossible to deny. Animals can clearly feel “distress” and “pleasure,” and since they have nervous systems just like we do, these feelings are being felt by a “consciousness.” If a human eyeball captures light and creates images that are seen from within, so does a pig’s eyeball, because eyes are eyes. In other words, pigs have an internal life: there is something it is like to be a pig. We’ll almost certainly never know what that’s like, and it’s impossible to even speculate on, but if we believe that other humans are conscious, it is unclear why other animals wouldn’t be, albeit in a more rudimentary way. No, they don’t understand differential calculus or Althusser’s theory of interpellation. (Neither do I.) But they share with us the more morally crucial quality of being able to feel things. They can be happy and they can suffer.

Of course, critics suggest that this is just irrational anthropomorphism: the idea of animal emotions is false, because emotions are concepts we have developed to understand our own experiences as humans, and we have no idea what the parallel experiences in animals are like and whether they are properly comparable. The temptation to attribute human traits to animals is certainly difficult to resist; I can’t help but see sloths that look like they’re smiling as actually smiling, but these sloths almost certainly have no idea that they are smiling. Likewise, whenever I see a basset hound I feel compelled to try to cheer it up, even though I know that sad-eyed dogs aren’t really sad. Even if we do posit that animals feel emotions, nobody can know just how distant their consciousnesses are from our own. We have an intuitive sense that “being a bug” doesn’t feel like much, but how similar is being a water vole to being an antelope versus being a dragonfly? All of it is speculation. David Foster Wallace, in considering the Lobster Question (“Is it all right to boil a sentient creature alive just for our gustatory pleasure?”), noted that the issues of “whether and how different kinds of animals feel pain, and of whether and why it might be justifiable to inflict pain on them in order to eat them, turn out to be extremely complex and difficult,” and many can’t actually be resolved satisfactorily. How do you know what agony means to a lobster? Still, he said, “standing at the stove, it is hard to deny in any meaningful way that this is a living creature experiencing pain and wishing to avoid/escape the painful experience… To my lay mind, the lobster’s behavior in the kettle appears to be the expression of a preference; and it may well be that an ability to form preferences is the decisive criterion for real suffering.”

And lobsters are a trickier case than other more complex creatures, since they’re freaky and difficult to empathize with. As we speak of higher-order creatures who have anatomy and behavioral traits more closely paralleling our own, there is at least good evidence to suggest that various nonhuman animals can experience terrible pain. (Again, hardly anyone would deny this with dogs, and once we accept that we just need to be willing to carry our reasoning through.) Once we accept that these beings experience pain, it next becomes necessary to admit that humans inflict a lot of it on them. We massacre tens of billions of animals a year, and their brief lives are often filled with nothing but pain and fear. The “lucky” ones are those like the male chicks who are deemed “useless” and are “suffocated, gassed or minced alive at a day old.” At least they will be spared the life of torture that awaits most of the creatures raised in factory farms. I don’t know how many atrocity tales to tell here, because again, this is not something unknown, but something “known yet ignored.” I can tell you about animals living next to the rotting corpses of their offspring, animals beaten, shocked, sliced, living in their own blood and feces. I could show you horrible pictures, but I won’t. Here’s Greenwald describing a practice used in pig farms:

Pigs are placed in a crate made of iron bars that is the exact length and width of their bodies, so they can do nothing for their entire lives but stand on a concrete floor, never turn around, never see any outdoors, never even see their tails, never move more than an inch. They are put in so-called farrowing crates when they give birth, and their piglets run underneath them to suckle and are often trampled to death. The sows are bred repeatedly this way until their fertility declines, at which point they are slaughtered and turned into meat. The pigs are so desperate to get out of their crates that they often spend weeks trying to bite through the iron bars until their gums gush blood, bash their heads against the walls, and suffer a disease in which their organs end up mangled in the wrong places, from the sheer physical trauma of trying to escape from a tiny space or from acute anxiety.

Separate from the issue of “conditions” is the issue of killing itself. Obviously, it is better if an animal lives in relative comfort before it is slaughtered, and better if their deaths are imposed “humanely.” But personally, I find the idea of “humane slaughter” oxymoronic, because I’m disturbed by the taking of life as well as by suffering. This part is difficult to persuade people of, since it depends largely on a moral instinct about whether an animal’s life is “inherently” valuable, and whether they should have some kind of autonomy or dignity. Plenty of people who could agree that animal torture is wrong can still believe that eating animals is unobjectionable in and of itself. My disagreement with this comes from my deep gut feeling that opposing torture but endorsing killing is like saying “Of course, the people we eat shouldn’t be kept in tiny cages before we kill them, that’s abominable.” Once you grant that animals are conscious, and have “feelings” of one kind or another, and “wills” (i.e. that there are things they want and things they don’t want, and they don’t want to die), the whole process of mass killing seems irredeemably horrifying. (...)

Because people slip so naturally into oblivious complicity, it’s crucial to actively examine the world around you for evidence of things hidden. What am I missing? What have I accepted as ordinary that might in fact be atrocious? Am I in denial about something that will be clear in retrospect? Every time I apply this kind of thinking to meat-eating, I get chills. Here we have set up mass industrial slaughter, a world built on the suffering and death of billions of creatures. The scale of the carnage is unfathomable. (I know sharks aren’t particularly sympathetic, but I’m still shocked by the statistic that while sharks kill 8 people per year, humans kill 11,000 sharks per hour.) Yet we hide all of it away, we don’t talk about it. Laws are passed to prevent people from even taking photographs of it. That makes me feel the same way I do about the death penalty: if this weren’t atrocious, it wouldn’t need to be kept out of view. “Mass industrial slaughter.” There’s no denying that’s what it is. Yet that sounds like something a decent society shouldn’t have in it.

by Nathan J. Robinson, Current Affairs | Read more:
Image: Katherine Lam

photo: markk

Saturday, January 20, 2018

It's the (Democracy-Poisoning) Golden Age of Free Speech

In today’s networked environment, when anyone can broadcast live or post their thoughts to a social network, it would seem that censorship ought to be impossible. This should be the golden age of free speech.

And sure, it is a golden age of free speech—if you can believe your lying eyes. Is that footage you’re watching real? Was it really filmed where and when it says it was? Is it being shared by alt-right trolls or a swarm of Russian bots? Was it maybe even generated with the help of artificial intelligence? (Yes, there are systems that can create increasingly convincing fake videos.)

Or let’s say you were the one who posted that video. If so, is anyone even watching it? Or has it been lost in a sea of posts from hundreds of millions of content producers? Does it play well with Facebook’s algorithm? Is YouTube recommending it?

Maybe you’re lucky and you’ve hit a jackpot in today’s algorithmic public sphere: an audience that either loves you or hates you. Is your post racking up the likes and shares? Or is it raking in a different kind of “engagement”: Have you received thousands of messages, mentions, notifications, and emails threatening and mocking you? Have you been doxed for your trouble? Have invisible, angry hordes ordered 100 pizzas to your house? Did they call in a SWAT team—men in black arriving, guns drawn, in the middle of dinner?

Standing there, your hands over your head, you may feel like you’ve run afoul of the awesome power of the state for speaking your mind. But really you just pissed off 4chan. Or entertained them. Either way, congratulations: You’ve found an audience.
***
Here's how this golden age of speech actually works: In the 21st century, the capacity to spread ideas and reach an audience is no longer limited by access to expensive, centralized broadcasting infrastructure. It’s limited instead by one’s ability to garner and distribute attention. And right now, the flow of the world’s attention is structured, to a vast and overwhelming degree, by just a few digital platforms: Facebook, Google (which owns YouTube), and, to a lesser extent, Twitter.

These companies—which love to hold themselves up as monuments of free expression—have attained a scale unlike anything the world has ever seen; they’ve come to dominate media distribution, and they increasingly stand in for the public sphere itself. But at their core, their business is mundane: They’re ad brokers. To virtually anyone who wants to pay them, they sell the capacity to precisely target our eyeballs. They use massive surveillance of our behavior, online and off, to generate increasingly accurate, automated predictions of what advertisements we are most susceptible to and what content will keep us clicking, tapping, and scrolling down a bottomless feed.

So what does this algorithmic public sphere tend to feed us? In tech parlance, Facebook and YouTube are “optimized for engagement,” which their defenders will tell you means that they’re just giving us what we want. But there’s nothing natural or inevitable about the specific ways that Facebook and YouTube corral our attention. The patterns, by now, are well known. As Buzzfeed famously reported in November 2016, “top fake election news stories generated more total engagement on Facebook than top election stories from 19 major news outlets combined.”

Humans are a social species, equipped with few defenses against the natural world beyond our ability to acquire knowledge and stay in groups that work together. We are particularly susceptible to glimmers of novelty, messages of affirmation and belonging, and messages of outrage toward perceived enemies. These kinds of messages are to human community what salt, sugar, and fat are to the human appetite. And Facebook gorges us on them—in what the company’s first president, Sean Parker, recently called “a social-validation feedback loop.”

There are, moreover, no nutritional labels in this cafeteria. For Facebook, YouTube, and Twitter, all speech—whether it’s a breaking news story, a saccharine animal video, an anti-Semitic meme, or a clever advertisement for razors—is but “content,” each post just another slice of pie on the carousel. A personal post looks almost the same as an ad, which looks very similar to a New York Times article, which has much the same visual feel as a fake newspaper created in an afternoon.

What’s more, all this online speech is no longer public in any traditional sense. Sure, Facebook and Twitter sometimes feel like places where masses of people experience things together simultaneously. But in reality, posts are targeted and delivered privately, screen by screen by screen. Today’s phantom public sphere has been fragmented and submerged into billions of individual capillaries. Yes, mass discourse has become far easier for everyone to participate in—but it has simultaneously become a set of private conversations happening behind your back. Behind everyone’s backs.

Not to put too fine a point on it, but all of this invalidates much of what we think about free speech—conceptually, legally, and ethically.

The most effective forms of censorship today involve meddling with trust and attention, not muzzling speech itself. As a result, they don’t look much like the old forms of censorship at all. They look like viral or coordinated harassment campaigns, which harness the dynamics of viral outrage to impose an unbearable and disproportionate cost on the act of speaking out. They look like epidemics of disinformation, meant to undercut the credibility of valid information sources. They look like bot-fueled campaigns of trolling and distraction, or piecemeal leaks of hacked materials, meant to swamp the attention of traditional media.

These tactics usually don’t break any laws or set off any First Amendment alarm bells. But they all serve the same purpose that the old forms of censorship did: They are the best available tools to stop ideas from spreading and gaining purchase. They can also make the big platforms a terrible place to interact with other people.

Even when the big platforms themselves suspend or boot someone off their networks for violating “community standards”—an act that does look to many people like old-fashioned censorship—it’s not technically an infringement on free speech, even if it is a display of immense platform power. Anyone in the world can still read what the far-right troll Tim “Baked Alaska” Gionet has to say on the internet. What Twitter has denied him, by kicking him off, is attention.

Many more of the most noble old ideas about free speech simply don’t compute in the age of social media. John Stuart Mill’s notion that a “marketplace of ideas” will elevate the truth is flatly belied by the virality of fake news. And the famous American saying that “the best cure for bad speech is more speech”—a paraphrase of Supreme Court justice Louis Brandeis—loses all its meaning when speech is at once mass but also nonpublic. How do you respond to what you cannot see? How can you cure the effects of “bad” speech with more speech when you have no means to target the same audience that received the original message?

Mark Zuckerberg holds up Facebook’s mission to “connect the world” and “bring the world closer together” as proof of his company’s civic virtue. “In 2016, people had billions of interactions and open discussions on Facebook,” he said proudly in an online video, looking back at the US election. “Candidates had direct channels to communicate with tens of millions of citizens.”

This idea that more speech—more participation, more connection—constitutes the highest, most unalloyed good is a common refrain in the tech industry. But a historian would recognize this belief as a fallacy on its face. Connectivity is not a pony. Facebook doesn’t just connect democracy-loving Egyptian dissidents and fans of the videogame Civilization; it brings together white supremacists, who can now assemble far more effectively. It helps connect the efforts of radical Buddhist monks in Myanmar, who now have much more potent tools for spreading incitement to ethnic cleansing—fueling the fastest-growing refugee crisis in the world.

The freedom of speech is an important democratic value, but it’s not the only one. In the liberal tradition, free speech is usually understood as a vehicle—a necessary condition for achieving certain other societal ideals: for creating a knowledgeable public; for engendering healthy, rational, and informed debate; for holding powerful people and institutions accountable; for keeping communities lively and vibrant. What we are seeing now is that when free speech is treated as an end and not a means, it is all too possible to thwart and distort everything it is supposed to deliver.

by Zeynep Tufekci, Wired | Read more:
Image: Adam Maida

Charles S. Raleigh, Law of the Wild, 1881
via:

The Instagrammable Charm of the Bourgeoisie

It is tempting to believe that we live in a time uniquely saturated with images. And indeed, the numbers are staggering: Instagrammers upload about 95 million photos and videos every day. A quarter of Americans use the app, and the vast majority of them are under 40. Because Instagram skews so much younger than Facebook or Twitter, it is where “tastemakers” and “influencers” now live online, and where their audiences spend hours each day making and absorbing visual content. But so much of what seems bleeding edge may well be old hat; the trends, behaviors, and modes of perception and living that so many op-ed columnists and TED-talk gurus attribute to smartphones and other technological advances are rooted in the much older aesthetic of the picturesque.

Wealthy eighteenth-century English travelers such as Gray used technology to mediate and pictorialize their experiences of nature just as Instagrammers today hold up their phones and deliberate over filters. To better appreciate the picturesque, travelers in the late 1700s were urged to use what was known as a gray mirror or “Claude glass,” which would simplify the visual field and help separate the subject matter from the background, much like an Instagram filter. Artists and aesthetes would carry these tablet-sized convex mirrors with them, and position themselves with their backs to whatever they wished to behold—the exact move that Gray was attempting when he tumbled into a ditch. The artist and Anglican priest William Gilpin, who is often credited with coining the term “picturesque,” even went so far as to mount a Claude mirror in his carriage so that, rather than looking at the actual scenery passing outside his window, he could instead experience the landscape as a mediated, aestheticized “succession of high-coloured pictures.”

Connections between the Instagrammable and the picturesque go deeper than framing methods, however. The aesthetics are also linked by shared bourgeois preoccupations with commodification and class identity. By understanding how Instagram was prefigured by a previous aesthetic movement—one which arose while the middle class was first emerging—we can come closer to understanding our current moment’s tensions between beauty, capitalism, and the pursuit of an authentic life. (...)

While the word “picturesque” came into circulation in the early 1700s to describe anything that looked “like a picture,” it solidified into a stable aesthetic by the late 1700s, when travelers began recording their trips through Europe and England with sketches, etchings, and the occasional painting. The method for circulating their images was more cumbersome than ours, but largely followed the same formula as today. A wealthy traveler trained in draftsmanship (whom we would now call an influencer) would take a months-long journey, carrying art supplies to record picturesque scenes. When he returned home, these images were turned into etchings, which could then be mass-produced, sold individually or bound together to create a record of his travels for his friends and family to peruse.

This practice had its roots in the Grand Tour, a rite of passage for young male aristocrats entering government and diplomacy, in which they roamed the continent for a few years with the aim of accruing gentlemanly knowledge of the world. But the picturesque travelers of the late eighteenth century were a new type of tourist, men and women born during a period of rapid economic and social change. This was the world of Jane Austen, in which a burgeoning middle class sought to solidify and improve its position in English society by adopting practices that signaled prosperity and refinement. (...)

For Gilpin, the picturesque was not just an aesthetic, but a mindset that projected compositional principles onto a landscape while constantly comparing that landscape against previous trips and pictures, a kind of window-shopping of the soul. But the direct experience of picturesque nature is really secondary to having recorded it, either on paper or in memory. “There may be more pleasure in recollecting, and recording,” he writes, “from a few transient lines, the scenes we have admired, than in the present enjoyment of them.” Only recently catching up with the insights of our forebears, the pleasures of recording and archiving have been rediscovered by digital media theorists, such as Nathan Jurgenson, who calls this preoccupation “nostalgia for the present.” Typically, this condition is associated with photographic image-making, and especially with digital technology, but these preoccupations obviously preceded the advent of the camera. (...)

Today you can still find echoes of the picturesque in travel photos on Instagram. A friend’s recent trip to Cuba, for example, will feature leathery old men smoking cigars among palm trees and pastel junkers. Or simply search #VanLife to see an endless stream of vintage Volkswagens chugging through the red desert landscape of the American Southwest. But rather than concentrate on generic similarities between the picturesque and images one finds on Instagram, it is more illuminating to think of how both aesthetics arose from similar socioeconomic and class circumstances—manifesting, according to Price, as images filled with “interesting and entertaining particulars.”

Price’s use of the word “interesting” is significant in understanding the relationship between the picturesque and the Instagrammable. In Our Aesthetic Categories: Zany, Cute, Interesting (2012), philosopher Sianne Ngai positions the picturesque as a function of visual interest—of variation and compositional unpredictability—which she connects to the enticements of capitalism. For a scene or a picture to be interesting, she argues, it must be judged in relation to others, one of many. According to Ngai, this picturesque habit began “emerging in tandem with the development of markets.” Unlike beauty, which exalts, or the sublime, which terrifies, Ngai suggests that the picturesque produces an affect somewhere between excitement and boredom. It is a feeling tied to amusement and connoisseurship, like letting one’s eyes wander over a series of window displays. (...)

The picturesque was ultimately about situating oneself within the class structure by demonstrating a heightened aesthetic appreciation of the natural world, during a period when land was becoming increasingly commodified. By contrast, the Instagrammable is a product of the neoliberal turn toward the individual. It is therefore chiefly concerned with bringing previously non-commodifiable aspects of the self into the marketplace by turning leisure and lifestyle into labor and goods. Though the two aesthetics share a similar image-making methodology and prize notions of authenticity, the Instagrammable is perhaps even more capacious than its predecessor. Through the alchemy of social media, everything you post, whether it is a self-portrait or not, is transformed into a monetized datapoint and becomes an exercise in personal branding.

It almost goes without saying that the selfie is by far the most popular kind of image on Instagram. Photos of faces receive 38 percent more engagement than other kinds of content. Indeed, one could argue that all images on the platform are imbued with the selfie’s metaphysical logic: I was here, this is me. Following this structure, mirrors and shiny surfaces on Instagram abound, with the photographer reflected in still ponds, shop windows, and Anish Kapoor sculptures. Sometimes a body part or an inanimate object will stand in for the self: fingers cradling a puppy, hot-dog legs by the beach, a doll in the shadow of the Eiffel Tower. Other times, the presence of the Instagrammer is suggested through a shadow cast against a scenic backdrop, or merely implied by the very existence of the photograph itself, which says, This was an Instagrammable moment I recorded. Although rarely figural, picturesque images could also be said to have possessed the qualities of the selfie avant la lettre, given what they were often meant to signal: I went here, I am the kind of person who has traveled and decorates my home with this kind of art.

This all-encompassing logic of the selfie clarifies itself when you type “#Instagrammable” into the platform’s search bar. Foamy lattes, tourist selfies, old jeeps, women in teeny bikinis, and the phrase “namaste bitches” written in neon lights. On first glance, these photos seem to share nothing but a hashtag, yet when taken together, they represent an emergent worldview. Whereas British travelers of the picturesque era set their newly trained gazes upon rugged vistas and ruined abbeys and then recreated them on their own properties, Instagrammers are instead retooling their own lives—the most obvious medium of our neoliberal age. In short, the project of the Instagrammer is not to find interesting things to photograph, but to become the interesting thing.

At its core, Instagram is powered by a careful balance of desire: every commodity (including the Instagrammer) must be desirable to the consumer, but no consumer can seem unsettled by desire for the commodity. Like the measured interest at the core of the picturesque—a display of world-wise connoisseurship that signaled class belonging—“thirst,” and its careful suppression, is what drives Instagram. Thirst is an affect that combines envy, erotic desire, and visual attention. However, if you are obviously thirsty, it means that your persona as a sanguine consumer has slipped, which is considered bad or embarrassing. One has revealed too much about one’s real desires. In this way, Instagram influencers are like dandies, whose greatest accomplishment was the control of their emotions, and more importantly control over the ways their faces and bodies performed those emotions. “It is the joy of astonishing others,” writes Charles Baudelaire in The Painter of Modern Life (1863), but “never oneself being astonished.” (...)

It is this obsession with looking natural that appeals to advertisers, because unlike a magazine ad or television commercial, the line on Instagram between the real and the make-believe is much more porous. People scroll for hours on their phones because of the pictures’ ability to simultaneously conjure fantasy and ground that fantasy in the suggestion of documented experience. Contemporary audiences know that television ads are fake, but on an Instagram feed, mixed with family snapshots and close-ups of birthday parties, sponsored posts of cerulean waters on the shores of Greece look real enough—achievable, or at a minimum, something one should hope to achieve.

by Daniel Penny, Boston Review | Read more:
Image: Getty

The Enchanted Loom

The light of the sun and moon cannot be outdistanced, yet mind reaches beyond them. Galaxies are as infinite as grains of sand, yet mind spreads outside them.
—Eisai 
Biology gives you a brain, life turns it into a mind.
—Jeffrey Eugenides
All brains gather intelligence; to lesser or greater extents, some brains acquire a state of mind. How and where they find the means to do so is the question raised by poets and philosophers, doctors of divinity and medicine who have been fooling around with it for the past five thousand years and leave the mystery intact. It’s been a long time since Adam ate of the apple, but about the metaphysical composition of the human mind, all we can say for certain is that something unknown is doing we don’t know what.

Our gathering of intelligence about the physical attributes and behaviors of the brain has proved more fruitful. No small feat. The brain is the most complicated object in the known universe, housing 86 billion neurons, no two alike and each connected to thousands of other neurons, passing signals to one another across as many as 100 trillion synaptic checkpoints. Rational study of the organism (its chemistries, mechanics, and cellular structure) has led to the development of the Human Genome Project, yielded astonishing discoveries in medicine and biotechnology—the CT scan and the MRI, gene editing and therapy, advanced diagnostics, surgical and drug treatment of neurological disorder and disease. All triumphs of the intellect but none of them answering the question as to whether the human mind is flesh giving birth to spirit or spirit giving birth to flesh.

Mind is consciousness, and although a fundamental fact of human existence, consciousness is subjective experience as opposed to objective reality and therefore outdistances not only the light of the sun and the moon but also the reach of the scientific method. It doesn’t lend itself to trial by numbers. Nor does it attract the major funding (public and private, civilian and military) that in China, Europe, and the Americas expects the brain sciences to produce prompt and palpable reward and relief.

The scientific-industrial complex focuses its efforts on the creation of artificial intelligence—computer software equipped with functions of human cognition giving birth to machines capable of visual perception, speech and pattern recognition, decision making and data management. Global funding for AI amounted to roughly $30 billion in 2016, the fairest share of the money aimed at stepping up the commercial exploitations of the internet. America’s military commands test drones that decide for themselves which targets to destroy; Google assembles algorithms that monetize online embodiments of human credulity and desire, ignorance and fear.

We live in an age convinced that technology is the salvation of the human race, and over the past fifty years, we’ve learned to inhabit a world in which it is increasingly the thing that thinks and the man reduced to the state of a thing. We have machines to scan the flesh and track the blood, game the stock market, manufacture our news and social media, tell us where to go, what to do, how to point a cruise missile or a toe shoe. Machines neither know nor care to know what or where is the human race, why or if it is something to be deleted, sodomized, or saved. Watson and Alexa can access the libraries of Harvard, Yale, and Congress, but they can’t read the books. They process words as objects, not as subjects. Not knowing what the words mean, they don’t hack into the vast cloud of human consciousness (history, art, literature, religion, philosophy, poetry, and myth) that is the making of once and future human beings. (...)

History is not what happened two hundred or two thousand years ago. It is a story about what happened two hundred or two thousand years ago. The stories change, as do the sight lines available to the tellers of the tales. To read three histories of the British Empire, one of them published in 1800, the others in 1900 and 2000, is to discover three different British Empires on which the sun eventually sets. The must-see tourist attractions remain intact—Napoleon still on his horse at Waterloo, Queen Victoria enthroned in Buckingham Palace, the subcontinent fixed to its mooring in the Indian Ocean—but as to the light in which Napoleon, the queen, or India are to be seen, accounts differ.

It’s been said that over the span of nine months in the womb, the human embryo ascends through a sequence touching on over three billion years of evolution, that within the first six years of life, the human mind stores subjective experience gathered in what is now believed to be the nearly 200,000 years of its existence. How subjective gatherings of consciousness pass down from one generation to the next, collect in the pond of awareness that is every newly arriving human being, is another lobe of the mystery the contributors to this issue of the Quarterly leave intact. It doesn’t occur to Marilynne Robinson, twenty-first-century essayist and novelist, to look the gift horse in the mouth. “We all live in a great reef of collective experience, past and present, that we receive and preserve and modify. William James says data should be thought of not as givens but as gifts…History and civilization are an authoritative record the mind has left, is leaving, and will leave.”

by Lewis Lapham, Lapham's Quarterly |  Read more:
Image: The Weeders, by Jules Breton, 1868

Sean Costello



Repost

Friday, January 19, 2018

Thursday, January 18, 2018

The Long-Term Care Crisis: Premiums Exploding, Leaving Seniors With “An Awful Choice”

A Wall Street Journal article gives an overview of a topic we discussed briefly before: an escalating crisis in the long-term care business. As we explain in more detail below, the entire industry massively underpriced policies that cover nursing home and other types of long-term care for the elderly. They are now playing catch-up with hefty rate increases. This puts those who paid for coverage in a terrible fix: do they abandon the policies entirely, when they have paid in substantial amounts of money and the policies do have real economic value should they need them? Do they reduce the amount of coverage so they can keep the premiums affordable? Or do they take the hit, when it might mean serious cutbacks to spending or retirement savings?

Roughly 7.3 million individuals, nearly 20% of the people over 65, are grappling with the dilemma of what to do about long-term care sticker shock. Some examples from the Journal’s story:
In the past two years, CNA Financial Corp. has increased the annual long-term-care insurance bill for Ms. Wylie and her husband by more than 90% to $4,831. They bought the policies in 2008, which promise future benefits of as much as $268,275 per person. The Wylies are bracing for more increases. 
To make their budget work, she has taken on a part-time landscaping job. The couple has delayed home maintenance, travels less and sometimes rents out their house. “We feel like we are out on a limb here, and these policies are supposed to be our safety net,” she says…
And there is an additional set of issues even for those who can afford to pay for pricier coverage. As we described in our earlier post, even the long-term care insurer that is supposedly the best in terms of paying claims, Genworth, now has many complaints on consumer websites. My own experience, when my mother was injured long enough that she might be submitting a claim, was that I got a HAMP-level run-around: they would not deal with me over the Internet and I was repeatedly disconnected when put on hold. When I finally reached a rep, I was told that they would need to send a nurse to evaluate my mother…but the phone rep could not schedule that visit and claimed they could not reach the unit responsible, and would have someone call back instead. We never got that call.

In other words, the typical behavior of insurers under duress is to engage in insurance fraud, which is to refuse to pay for valid claims. One way to do that is to make customers go through so many hoops that some give up. One consumer complaint I read more than once regarding Genworth was that they would repeatedly claim that they had not received powers of attorney from children trying to get claims processed on behalf of an aged parent. In evaluating whether it was worth it to renew my mother’s Genworth policy, I have assumed she will have to spend $10,000 in nastygrams to get claims processed.

Many adults don’t realize until they approach retirement age that Medicare does not provide for coverage in what are called long-term care facilities, such as assisted living (where the residents aren’t hospitalized but need help with some activities, like bathing or getting dressed) and nursing homes. Medicaid does, but it has strict financial eligibility limits (among other things, you will effectively be required to exhaust your financial assets). And based on reports by readers, facilities that accept Medicaid patients often do not provide a high standard of care.

Insurers rushed to fill this gap in a serious way about 40 years ago. Long-term care policies will reimburse the cost of care in approved facilities, or in many cases with approved home health care services, up to a daily maximum amount. Policies typically also have a maximum total payout amount (which may be expressed in other terms but amounts to the same thing).

The problem was twofold. One was that the insurers had no experience in offering this sort of policy and made unduly optimistic assumptions, such as how many people would lapse (stop paying before they used the policy), how long they would live, and how many days of care they would consume if and when they needed care. Ironically, when my father was looking at whether to get a long-term care policy over 30 years ago, he was frustrated at his inability to get data to make any sort of an informed decision. It turns out the insurers themselves didn’t have enough of a track record to be doing any better than guessing, and they guessed wrong.

To give an idea of how seriously off some of their basic assumptions were, they assumed a lapse rate of 5% a year. Remember, when someone lapses, that’s a boon to the insurer and everyone else in the pool, because the funds they paid into the policy they abandoned go to everyone else. It turns out that everyone who purchased these policies understood how valuable they could be and paid like clockwork. The Wall Street Journal reports that the lapse rate was 1%.

The Journal described other assumptions that proved to be wildly off the mark:
It turned out that nearly everyone underestimated how long policyholders would live and claims would last. For example, actuaries, insurers and regulators didn’t anticipate a proliferation of assisted-living facilities. And they assumed families would do whatever they could to avoid moving loved ones into nursing homes, holding down policy claims. 
By the late 1990s, assisted-living facilities were widely popular. Especially at well-run ones, staff members looked after policyholders so well that they lived years longer than actuaries had projected. 
Residents “are taking their medications; they are not falling,” says Mr. Bodnar, now a senior executive at Genworth.
A possibly even more fundamental modeling issue is that insurers arguably have been, and continue to be, using the wrong type of model.

The second big problem is years of super low interest rates. This has meant that investment returns for years have fallen vastly short, on an inflation-adjusted basis, of what they had forecast. The results have been devastating for all long-term investors: life insurers, pension funds, and ordinary people trying to save for retirement.

On top of that, some insurers like Genworth offered policies that promised no rate increases over the life of the policy. Consumers who purchased them and made plans assuming the payments were fixed have been hit in the last few years with massive increases.

The rate of individuals buying new policies has plunged now that insurers are charging more for them. Again from the Journal:
Fewer than 100,000 long-term-care insurance policies were sold in the U.S. in 2016, and sales fell to about 34,000 in the first half of 2017, the industry-funded research firm Limra says. Both those totals are the lowest in more than 25 years. The business peaked in 2002 with about 750,000 sales. 
The latest policies typically cover less and cost more. According to the insurance agents’ trade group, a 60-year-old couple can expect to spend about $3,490 in combined annual premium for a typical policy that starts out with a maximum payout of $164,050 per person and then grows 3% a year to $333,000 when the couple is 85.
People approaching 60, which is usually the latest age at which these insurers will write policies, don’t have good options unless they have a net worth of over $2 million, in which case it makes more sense to roll the dice and self-insure. By contrast, some financial advisers argue that if you have less than $300,000 in assets, you are better off spending them down and going into a Medicaid home if need be. But there are plenty of people with more than $300,000 in assets (say some accumulated home equity plus some savings) who would find it hard to pay the premiums, even if they kept working past normal retirement age.
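The 3 percent inflation rider in the policy the Journal quotes is simple compounding. As a rough check (assuming the increase is applied on each policy anniversary, so about 24 increases separate the starting payout at 60 from the payout at 85), the quoted figures line up:

```python
initial = 164_050             # starting maximum payout per person, from the article
growth = 1.03                 # 3% annual increase in the maximum payout
payout_at_85 = initial * growth ** 24   # ~24 anniversaries between ages 60 and 85
print(round(payout_at_85))    # roughly 333,000, matching the quoted figure
```

The same compounding works against the buyer on the premium side: a premium that rises even a few percent a year roughly doubles over a 24-year horizon.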

Given the plunge in people buying these policies, the US is clearly moving towards the neoliberal answer of “Die faster!” Better approaches for providing in-home care could reduce the overall burden, but there seems to be no policy will to move in that direction.

by Yves Smith, Naked Capitalism |  Read more:

Noel McKenna, Dog Begging Breakfast, 2014
via:

Julien Ratouin-Lefevre, Jackson Pollock Special.
via:

The Literature of Bad Sex

[ed. 'Cat Person' excerpt can be found here (on Duck Soup), full story here (New Yorker).]

In a roiling climate of grievances and exhumed pain, at the end of a dull and lurid year, something astonishing happened. After the unmasking of Weinstein and amid the whole sorry cavalcade of them—powerful men who’d done unconscionable things to people, mostly women, mostly young women—a short story published in the New Yorker went viral. That confluence of words (“short story,” “New Yorker,” “viral”) constituted a sort of absurdity, although one that was salutary rather than disastrous, the latter being the type that had defined American political life since November 2016. Kristen Roupenian’s “Cat Person,” a stunningly calibrated and sophisticated story of a bad date, generated the sort of meme-proliferation more usually associated with a new Beyoncé album. (At the time of writing it is still at the top of the “most read” on the New Yorker’s site, despite being published over a month ago.) It is also, however, a work of fiction, which, like all good works of fiction, is travestied by being reduced to a unit of social media currency.

At its most cursory level, the story is an account of predation and dubious consent, played out largely over text. As such, it was hailed as some kind of dispatch from the zeitgeist. The underlying notion, that literature is in service to the zeitgeist, or even that a story’s value resides in how loudly and righteously it speaks to the prevailing political wind, is a troubling fallacy. The power of that story, as with its forebears—by which I mean precise depictions of imprecise heterosexual relations—was in its fine-tuned ambivalence. This sort of ambivalence is the opposite of a cop-out: it’s generative, rather than reductive, and it comes from time, on the writer’s part, spent dwelling in uncertainty. Though many apparently received the story as an extended version of a #MeToo social media post, unable to grasp it as fiction, it did what the blunt tool of a hashtag cannot: specifically, provoke empathy for both parties.

It’s a subtle oscillation of sympathies that Roupenian enacts: we feel for Margot, a 24-year-old student, who may or may not be attracted to Robert, her slightly overweight, 34-year-old date, and we also feel for him, at least until our sympathy is savagely revoked by the last word of the story. “Cat Person,” then, is not about how guys are pigs and younger women are victims, but rather about the curious mechanics and currencies of desire, its necessary deceptions, both mutual and individual.

Margot wills herself into arousal by imagining Robert’s own desire for her. When she mentally tests out the idea of having sex with him—imagination is a prerequisite of action—she thinks: “Probably it would be like that bad kiss, clumsy and excessive, but imagining how excited he would be, how hungry and eager to impress her, she felt a twinge of desire pluck at her belly, as distinct and painful as the snap of an elastic band against her skin.” Does this mean the desire is less hers? Is wanting to be wanted as valid as wanting for oneself? (...)

The state of not-knowing is intrinsic to love and sex and it’s also intrinsic in good fiction. Faking it, of course, can operate in the inverse too, not as an attempt to suppress real desire, as Frances does, but as an inability to prevent real desire arising out of something feigned. In Catherine Lacey’s terrifyingly incisive novel The Answers, Mary is a physically unwell young woman, desperate for money to afford a specialist treatment called PAKing. She answers a curious ad promising generous remuneration and finds herself employed by a famous movie actor and his team. Kurt has the sort of untrammeled wealth and solipsism that allows him to orchestrate something called The Girlfriend Experiment, in which all his needs will be met by an array of women paid to perform certain roles. Mary is assigned the role of Emotional Girlfriend. Obedient, if dispassionate, she meets her cues, recites her lines and follows all protocols. Soon, however, “it was unclear to her if she was just impersonating affection, or if this impersonation had changed her from within, synthesized a kind of love in her.” However clearly defined human schema are, their participants are subjective beings, subject, specifically, to uncertainty. Is a synthesized love any less valid, perturbing and painful than a “natural” love?

Synthesis takes other, less legible forms. When, in “Cat Person,” Margot kisses Robert, she finds herself “carried away by a fantasy of such pure ego that she could hardly admit even to herself that she was having it. Look at this beautiful girl, she imagined him thinking. She’s so perfect, her body is perfect, everything about her is perfect, she’s only twenty years old, her skin is flawless, I want her so badly, I want her more than I’ve ever wanted anyone else, I want her so bad I might die.” Ego, too, clouds the narrator of Emma Cline’s bewitching The Girls. Russell, the simultaneously charismatic and pathetic Charles Manson-like figure at the center of the novel, coerces teenage Evie into giving him a blowjob moments after she’s been introduced to him. Perhaps a more accurate term is “offered up to him.” Afterwards, she experiences the rest of the night “as fated, me as the center of a singular drama.” The tepid, flat coke he hands her after coming in her 14-year-old mouth is “as intoxicating as champagne.” A thing can be both disgusting and delicious, its value not intrinsic, but rather dependent on the person experiencing it. Because then there’s this, the sentence which ends the chapter and the preceding description of Russell’s calculated ways of “breaking down boundaries”: “But maybe the strangest part—I liked it, too.” As Margot notes in “Cat Person”: “humiliation […] was a kind of perverse cousin to arousal.”

by Hermione Hoby, LitHub | Read more:
Image: "Yawning cat", uncredited

Bitcoin Mania

The first time I bought virtual money, in October 2017, bitcoins, the cryptocurrency everyone by now has heard of, were trading at $5,919.20. A month later, as I started writing this, a single coin sold for $2,000 more. “Coin” is a metaphor. A cryptocurrency such as bitcoin is purely digital: it is a piece of code—a string of numbers and letters—that uses encryption techniques and a decentralized computer network to process transactions and generate new units. Its value derives entirely from people’s perception of what it is worth. The same might be said of paper money, now divorced from gold and silver, or of gold and silver for that matter. Money is a human invention. It has value because we say it does. (...)

The central obstacle to a fully automated monetary system run exclusively by computers is validation: how to ensure that the transactions on the network are legitimate. The bitcoin software devised by Nakamoto employs a number of features to deal with this. The first is basic encryption. A bitcoin is nothing more than a record of value—you have seven bitcoin, I have five bitcoin, and so on—encoded and stored on the bitcoin system as an address. To release that bitcoin to buy something or to cash it out, its owner must use a private encryption key, known only to him or her, which is associated with that account. Matching the private key with the address is done automatically by the decentralized network of computers. If they don’t match up, or if the owner of the private key is attempting to spend his or her bitcoin more than once, the computers reject the transaction.
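The matching and double-spend checks the passage describes can be sketched in a few lines. Everything below is illustrative: real bitcoin derives addresses from public keys and verifies ECDSA signatures, whereas this toy stand-in hashes a secret string and tracks spent outputs in a set.

```python
import hashlib

def address_for(secret: str) -> str:
    # Toy stand-in for deriving an address from a key (not real bitcoin crypto).
    return hashlib.sha256(secret.encode()).hexdigest()

# A tiny ledger: one unspent output owned by whoever holds "alice-secret".
utxos = {("tx0", 0): {"address": address_for("alice-secret"), "amount": 5}}
spent = set()

def validate(outpoint, secret):
    """Reject unknown outputs, mismatched keys, and attempted double spends."""
    if outpoint not in utxos or outpoint in spent:
        return False                       # nonexistent, or already spent
    if utxos[outpoint]["address"] != address_for(secret):
        return False                       # key doesn't match the address
    spent.add(outpoint)
    return True

print(validate(("tx0", 0), "alice-secret"))  # True: key matches, first spend
print(validate(("tx0", 0), "alice-secret"))  # False: double spend rejected
```

In the real network, this bookkeeping is what the decentralized set of computers performs collectively rather than a single trusted party.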

The “miners” who verify and collect these transactions into a block—“miners” being a term for those who run the computers on the network—are also required by the bitcoin software to perform an additional validating function before the block can be added to the bitcoin ledger. Called “proof of work,” it is essentially a computational lottery in which all the mining computers vie to guess an algorithmically generated number between zero and 4,294,967,296 with the correct number of zeros preceding it. Finding the target number takes trillions of guesses and a tremendous amount of computing power.
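The lottery can be sketched in a few lines of Python. This is a simplification (difficulty here is "leading zero hex digits" rather than bitcoin's full numeric target, and the block data is a placeholder string), but the brute-force guessing loop is the same idea:

```python
import hashlib

def mine(block_data: str, difficulty: int) -> int:
    """Brute-force a nonce whose SHA-256 digest starts with `difficulty` zeros."""
    target = "0" * difficulty
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# Each additional required zero multiplies the expected number of guesses by 16,
# which is why real mining consumes warehouse-scale computing power.
nonce = mine("block of pending transactions", 4)
print(nonce)
```

Verifying a winning nonce takes one hash; finding it takes, on average, tens of thousands of guesses even at this toy difficulty. That asymmetry is what makes the proof hard to produce but cheap to check.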

The idea behind “proof of work,” according to Daniel Krawisz, of the Satoshi Nakamoto Institute, is that it is “an added complication, like a ritual, so as to make blocks more difficult to generate…. [It] is…a means for a group of self-interested people, none of whom is subordinate to any other, to establish a consensus against a considerable incentive to resist it.” Because it takes so much computing power to find this number, miners are motivated to ensure that the transactions they are processing are valid and nonconflicting. But they are motivated to participate in the first place because the software generates a reward: the miner who finds the “proof of work” number first is paid in (an algorithmically determined number of) bitcoins. Though that is how new bitcoins are created, or “mined,” and added to the system, as the Tapscotts point out, mining is
an awkward analogy because it conjures images of experts whose talent might confer some competitive advantage…. It doesn’t. Each miner is running the software like a utility function in the background, and the software is doing all the computations…. There’s no skill involved.
When the bitcoin network began operating in 2009, people could run the validation program on their personal computers and earn bitcoins if their computer solved the puzzle first. As demand for bitcoin increased, and more people were vying to find the random, algorithmic proof of work validation number, speed became essential. Mining began to require sophisticated graphics cards and, when those proved too slow, special, superfast computers built specifically to validate transactions and mine bitcoins. Individual miners have dropped out for the most part, and industrial operators have moved in. These days, mining is so computer-intensive that it takes place in huge processing centers in countries with low energy costs, like China and Iceland. One of these, in the town of Ordos, in Inner Mongolia, has a staff of fifty who oversee 25,000 computers in eight buildings that run day and night. A company called BitFury, which operates mining facilities in Iceland and the Republic of Georgia and also manufactures and sells specialized, industrial processing rigs, is estimated to have mined at least half a million bitcoins so far. At today’s price, that’s worth around $7.5 billion.

Still, it’s not exactly free money. Marco Streng, the cofounder of Genesis Mining, estimates that it costs his company around $400 in electricity alone to mine each bitcoin. That’s because bitcoin mining is not only computationally intensive, it is energy-intensive. By one estimate, the power consumption of bitcoin mining now exceeds that of Ireland and is growing so exponentially that it will surpass that of the entire United States by July 2019. A year ago, the CEO of BitFury, Valery Vavilov, reckoned that energy accounted for between 90 and 95 percent of his company’s bitcoin-mining costs. According to David Gerard—whose new book, Attack of the Fifty Foot Blockchain, is a sober riposte to all the upbeat forecasts about cryptocurrency like the Tapscotts’—“By the end of 2016,” a single mining facility in China was using “over half the estimated power used by all of Google’s data centres worldwide at the time.”

One way bitcoin miners offset these costs is by collecting the very thing digital money, traded peer-to-peer, was supposed to make obsolete: transaction fees. By one estimate, these fees have risen 1,289 percent since March 2015. On any given day, the fees will be in the millions of dollars and now cost upward of twenty dollars per transaction. While transaction fees are not mandatory, they are a way for users to attempt to jump the queue in a system rife with bottlenecks, since those who offer miners a fee to have their transactions included in a block have a better chance of that happening. With so many transactions lined up, waiting to be processed, miners have discretion over which will make it to the head of the line; the higher the fee, the more likely it is to be chosen. As the explanatory website Unlock Blockchain puts it: “when miners mine a block, they become temporary dictators of that block. If you want your transactions to go through, you will have to pay a toll to the miner in charge…. The higher the transaction fees, the faster the miners will put [the transactions] up in their block.” As a consequence, transactions can be held up for hours or days or dropped altogether.
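The fee auction the passage describes amounts to a greedy sort over the mempool. A minimal sketch, with made-up transactions and a made-up block capacity:

```python
def build_block(mempool, capacity):
    """Miners fill scarce block space with the highest-fee transactions first."""
    return sorted(mempool, key=lambda tx: tx["fee"], reverse=True)[:capacity]

mempool = [
    {"id": "a", "fee": 2},    # fees in arbitrary units
    {"id": "b", "fee": 45},
    {"id": "c", "fee": 1},
    {"id": "d", "fee": 20},
]
block = build_block(mempool, 2)
print([tx["id"] for tx in block])  # ['b', 'd']: low bidders wait, or are dropped
```

With more pending transactions than block space, anything left outside the top of the sort simply waits for a later block, which is how hours-long delays arise during congestion.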

Bitcoin’s high transaction fees and slow transaction times were two of the reasons I chose to buy ether. But there was another reason as well: while bitcoin was invented to bypass traditional currency by tendering a new kind of money, ether, another cryptocurrency that can be bought, sold, and used to purchase goods and services, was created to raise capital to fund a project called the Ethereum network. The principals behind it are building out what is being trumpeted as the next iteration of the Internet, Web 3.0, also known as “the blockchain.”

A blockchain is, essentially, a way of moving information between parties over the Internet and storing that information and its transaction history on a disparate network of computers. Bitcoin, for example, operates on a blockchain: as transactions are aggregated into blocks, each block is assigned a unique cryptographic signature called a “hash.” Once the validating cryptographic puzzle for the latest block has been solved by a mining computer, three things happen: the result is timestamped, the new block is linked irrevocably to the blocks before and after it by its unique hash, and the block and its hash are posted to all the other computers that were attempting to solve the puzzle. This decentralized network of computers is the repository of the immutable ledger of bitcoin transactions. As the Tapscotts observe, “If you wanted to steal a bitcoin, you’d have to rewrite the coin’s entire history on the blockchain in broad daylight.”
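A toy version of that linking shows why rewriting history is detectable: each block's hash covers its contents plus the previous block's hash, so tampering anywhere breaks the chain from that point on. (The transaction strings and field names here are invented for illustration; real blocks carry far more structure.)

```python
import hashlib
import json

def block_hash(body: dict) -> str:
    # Hash a canonical serialization of the block's contents.
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

def make_block(txs, prev_hash):
    body = {"txs": txs, "prev": prev_hash}
    return {**body, "hash": block_hash(body)}

def valid_chain(chain):
    """Check every link: stored prev-hash and recomputed hash must both match."""
    for prev, cur in zip(chain, chain[1:]):
        body = {"txs": cur["txs"], "prev": cur["prev"]}
        if cur["prev"] != prev["hash"] or cur["hash"] != block_hash(body):
            return False
    return True

genesis = make_block(["coinbase"], "0" * 64)
chain = [genesis, make_block(["alice pays bob 1 BTC"], genesis["hash"])]
print(valid_chain(chain))                      # True
chain[1]["txs"] = ["alice pays bob 100 BTC"]   # rewrite history...
print(valid_chain(chain))                      # False: stored hash no longer matches
```

Because every block's hash is folded into its successor, altering an old block would force recomputing every later hash on every copy of the ledger at once, which is the Tapscotts' "rewrite the coin's entire history in broad daylight."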

While bitcoin operates on a blockchain, it is not the blockchain. The insight of Vitalik Buterin, the young polymath who created Ethereum, was that in addition to exchanging digital money, the blockchain could be used to facilitate transactions of other kinds of digitized data, such as property registrations, birth certificates, medical records, and bills of lading. Because the blockchain is decentralized and its ledger immutable, those transactions would be protected from hacking; and because the blockchain is a peer-to-peer system that lets people and businesses interact directly with each other, it is inherently more efficient and also cheaper than systems that are burdened with middlemen such as lawyers and regulators.

A company that aims to reduce drug counterfeiting is using the blockchain to follow pharmaceuticals from provenance to purchase. Another outfit is doing something similar with high-end sneakers. Yet another start-up, this one called Paragon, is currently raising money to create a blockchain that “registers everything that has happened to a cannabis product, from seed to sale, letting consumers, retailers and the government know where everything came from.” “We are treating cannabis as a normal crop,” Paragon’s founder and CEO Jessica VerSteeg, a former Miss Iowa, told a reporter for the website Benzinga. “So, the same way that you would want to know where the corn on your table came from, or the apple that you had at lunch came from, you want to know where the weed you’re consuming came from.”

While a blockchain is not a full-on solution to fraud or hacking, its decentralized infrastructure ensures that there are no “honeypots” of data available for criminals to exploit. Still, touting a bitcoin-derived technology as the answer to cybercrime may seem a stretch in light of the high-profile—and lucrative—thefts of cryptocurrency over the past few years. Gerard notes that “as of March 2015, a full third of all Bitcoin exchanges”—where people stored their bitcoin—“up to then had been hacked, and nearly half had closed.” There was, most famously, the 2014 pilferage of Mt. Gox, a Japanese-based digital coin exchange, in which 850,000 bitcoins worth $460,000,000 disappeared. Two years later another exchange, Bitfinex, was hacked and around $60 million in bitcoin was taken; the company’s solution was to spread the loss to all its customers, including those whose accounts had not been drained. Then there was the theft via malware of $40 million by a man in Pennsylvania earlier this year. He confessed, but the other thieves slipped away, leaving victims with no way to retrieve their funds.

Unlike money kept in a bank, cryptocurrencies are uninsured and unregulated. That is one of the consequences of a monetary system that exists—intentionally—beyond government control or oversight. It may be small consolation to those who were affected by these thefts that neither the bitcoin network nor the Ethereum network itself has been breached, which perhaps proves the immunity of the blockchain to hacking.

by Sue Halpern, NY Review of Books |  Read more:
Image: Yoshikazu Tsuno/AFP/Getty Images

Nick Cave


Repost

Like Ducks? Thank a Hunter.

If you don’t have a positive opinion of hunting, it’s because you don’t know enough about it. Nowhere does that ring more true than in the case of ducks. These animals thrive in North America today for one reason: Hunters.

Ducks Were Saved By a Cartoonist


In the early 1900s, Americans realized they had an environmental crisis on their hands. Wild animals had initially been viewed by European settlers as an unlimited resource, but America’s exploding population, its westward expansion, industrialized agriculture, and unregulated hunting had all combined to decimate the population of wildlife—from deer to ducks.

This gave rise to the modern conservation movement, which used sport hunting as a tool to successfully save our wildlife. (I detail how that worked for elk here.)

Ducks and other waterfowl found themselves in a particularly tight spot due to massive losses of suitable habitat. The wetlands they depend on were disappearing across the continent. And that wasn’t going to be an easy trend to reverse. Cities had sprung up in marshes that were once home to millions of birds. The wheat we now grew filled in wetlands throughout the Mississippi River basin. The grasslands had become other crops, and those were filled with chemical fertilizers and pesticides. If their homes weren’t destroyed outright, ducks found their food supply diminished and their habitat poisoned.

And unlike hoofed mammals, protecting small, specific areas would do ducks no good. Their migrations span the continent.

The result of all this degradation: the estimated population of waterfowl on this continent fell as low as 27 million by the early 1930s. Efforts to reverse this trend had begun two decades previously. In 1916, the U.K. signed a treaty on behalf of Canada banning market hunting of waterfowl and protecting their habitats. In 1918, Congress ratified that treaty with the Migratory Bird Treaty Act. Mexico signed on in 1936.

Those efforts were well meant, but the treaty didn’t provide significant funding for protecting, restoring, or reclaiming habitats. Without a source of funding, ducks were still screwed.

Enter Ding Darling, a Pulitzer Prize-winning editorial cartoonist and passionate duck hunter. In addition to ducks, Darling often drew President Franklin Roosevelt, whose policies and politics he ridiculed. By 1934, Roosevelt had had enough, and in an effort to eliminate two problems at once (Darling’s criticism and the waterfowl-population crisis), he appointed Darling as head of the U.S. Biological Survey, a predecessor to the Fish and Wildlife Service.

It wasn’t an easy job. Smack in the middle of both the Great Depression and the Dust Bowl, there wasn’t extra money to be spent on wildlife. If he was going to save the duck’s habitat, Darling needed to find a new source of funding from outside the government. He came up with a law called the Duck Stamp Act, which was passed almost immediately by Congress. It levied an additional $1 fee on waterfowl hunters, in addition to the licenses they already purchased from state and local governments. The proceeds would go toward reclaiming and protecting duck habitat. Darling designed the first stamp himself.

Today, the federal duck stamp costs $25, and its design is chosen from an annual art contest that receives hundreds of entries. Ninety-eight percent of duck-stamp revenue goes to protecting duck habitat, and those funds have purchased and protected 6.5 million acres to date. Duck stamps are largely responsible for financing our nation’s Wildlife Refuge system, and purchase of one grants you access to those protected lands for the year. Many people buy two stamps—one to sign up for hunting, and one to keep as an appreciating collectible.

If you care about waterfowl conservation, there is no better way to help than by purchasing a duck stamp. You don’t need to be a hunter to buy one, but hunters account for 1.1 million of the 1.6 million people who buy one each year.

Darling has been called “the best friend a duck ever had.” He would go on to found the National Wildlife Federation.

Skies Filled with Waterfowl

Darling may have been the pioneer, but he was soon joined by other hunters, who saw a need to go even further than the federal government could. Treaties could stretch across borders, but federal funding couldn't.

Enter Ducks Unlimited. With a mission to create “wetlands sufficient to fill the skies with waterfowl today, tomorrow and forever,” Ducks Unlimited is a non-profit dedicated to wetlands conservation. Founded in 1937, it now counts 600,000 members, 90 percent of whom are duck hunters. Last year, it raised $224 million—83 percent of which was spent directly to protect wetlands. In its 80-year history, Ducks Unlimited has protected 14 million acres of wetlands spanning North America.

Additionally, Ducks Unlimited has raised $1.82 billion toward the $2 billion goal of a campaign it calls Rescue Our Wetlands. Expected to reach its goal this year, the campaign will be the largest wetlands restoration program in world history.

Combined, the duck stamp and Ducks Unlimited have roughly doubled the population of waterfowl on this continent to 50 million. (Heck, it’s the duck stamp that pays for the birds to be counted.) It’s impossible to tabulate the benefit hunter-funded wetlands conservation has had on the over 900 non-game species that also rely on wetlands habitat—including 96 percent of all our birds.

by Wes Siler, Outside |  Read more:
Image: DOI
[ed. In my career as a habitat biologist in Alaska I frequently applied for grants from DU, Duck Stamp and FWS funds to purchase thousands of acres of wetlands for habitat protection and restoration. They're great programs.]

Wednesday, January 17, 2018

Bundles of Joy

On December’s survey, I asked readers who had children whether they were happy with that decision. Here are the results, from 1 (very unhappy) to 5 (very happy):


The mean was 4.43, and the median 5. People are really happy to have kids!

This was equally true regardless of gender. The male average (4.43, n = 1768) and female average (4.49, n = 177) were indistinguishable.

To double-check this, I compared the self-reported life satisfaction of people with and without kids. People with kids were much more satisfied – but also did much better on lots of other variables like financial situation, romantic satisfaction, etc. So probably at least some of the effect was because people with kids tend to be older people in stable relationships who have their life more figured out, and maybe also more religious.

In order to compare apples to apples, I limited the comparison to married atheist men 25 or older. There was no longer a consistent trend for people with at least one child to be more satisfied. But there was a trend for increasing satisfaction with increasing number of children:

NUMBER OF CHILDREN : AVERAGE LIFE SATISFACTION ON 1-10 SCALE (total n = 1491):
0: 7.06
1: 7.09
2: 7.24
3: 7.31
4+: 7.43

This doesn’t make a lot of sense to me, since I would expect the biggest life change to be going from zero children to one child. Probably some residual confounders remain in the analysis – and commenter “meh” points out that people who are happiest with their existing children will be most likely to have more. But at the very least, people with children don’t seem to be less happy.
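The apples-to-apples restriction above (married atheist men 25 or older, with a “4+” bucket for larger families) can be sketched in plain Python. All field names here are hypothetical stand-ins for the survey’s actual columns, and the data passed in would be the survey rows:

```python
from statistics import mean

def satisfaction_by_children(rows):
    """Mean life satisfaction by number of children, restricted to
    married atheist men aged 25+ to reduce confounding. Each row is a
    dict; key names are invented stand-ins for the survey's fields."""
    subset = [
        r for r in rows
        if r["married"] and r["religion"] == "atheist"
        and r["sex"] == "male" and r["age"] >= 25
    ]
    result = {}
    # Bucket 4 or more children together, as in the table above.
    for n in sorted({min(r["num_children"], 4) for r in subset}):
        group = [r["life_satisfaction"] for r in subset
                 if min(r["num_children"], 4) == n]
        result[n] = mean(group)
    return result
```

Restricting the sample this way trades statistical power for comparability: any remaining trend across the child-count buckets is less likely to be driven by age, marital status, or religiosity.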

These results broadly match existing research, which usually finds that parents report being very happy to have children, but that this is not reflected in life satisfaction numbers. The main difference is that existing research usually claims parents have lower life satisfaction than non-parents. But this varies from country to country, for cultural or policy reasons. The survey respondents form a culturally unusual group and are of a higher socioeconomic status; they may be more similar to countries like Norway (where parents are happier) than to countries like the United States (where they are less happy).

(also, we should at least consider the Caplanian perspective that people more informed about genetics will be happier parents, since they’ll be less neurotic about the effect of their parenting styles.)

The View From Hell blog argues that the discrepancy between the direct question (“Are you happy to have kids?”) and the indirect one (“How happy are you?”, compared across parents vs. childless people) is pure self-deception; children suck, but parents refuse to admit it. I haven’t looked in depth at the study they cite, which purports to show that the more you prime parents with descriptions of the burdens of parenthood, the more great they insist everything is. But I wonder about the philosophical foundations we should be using here. There’s happiness, and there’s happiness: I am happy to be giving money to charity and making the world a better place, but I don’t think my self-reported life satisfaction would be noticeably higher after a big donation. It might even be lower if it cut into my luxury consumption. The wanting/liking/approving trichotomy may also be relevant.

People were happier with their decision to have children if they were (all results are binomial correlations and highly significant even after correction): more gender-conforming (0.14), had fewer thoughts about maybe being transgender (0.20), were more right-wing (0.10), considered themselves more moral people (0.15), were less autistic (0.12), were less extraverted (0.10), were more emotionally stable (0.15), and were more agreeable (0.13). All of these effects were very small compared to the generally high level of happiness at having children, no matter who you were and what your personality was like.
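Each of the correlations above pairs a binary trait with the happiness score; the standard statistic for that pairing is the point-biserial correlation, a special case of Pearson’s r (I’m assuming that’s what the post computed). A minimal sketch, using invented toy data rather than the survey’s:

```python
from statistics import mean, pstdev

def point_biserial(binary, scores):
    """Point-biserial correlation between a 0/1 trait and a numeric
    score: (M1 - M0) / s * sqrt(p * q), with population std dev."""
    n = len(scores)
    group1 = [s for b, s in zip(binary, scores) if b == 1]
    group0 = [s for b, s in zip(binary, scores) if b == 0]
    p, q = len(group1) / n, len(group0) / n
    s = pstdev(scores)
    return (mean(group1) - mean(group0)) / s * (p * q) ** 0.5

trait = [1, 1, 0, 0, 1, 0, 1, 0]      # e.g. "more right-wing": yes/no
happiness = [5, 4, 4, 3, 5, 4, 5, 3]  # happiness-with-children score
print(round(point_biserial(trait, happiness), 2))  # → 0.8
```

With effect sizes of 0.10–0.20 on this scale, the traits listed shift the happiness score only slightly relative to its overall high mean, which is the point of the paragraph above.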

I included this survey question because I’m considering whether or not to have kids. Even though the survey only reinforced the (confusing) results of past research, I still find it helpful. After all, a lot of the survey-takers here are pretty skeptical of other aspects of traditional lifestyles: monogamy, gender norms, religion, etc. It’s impressive how strongly approval of parenting survives even in this weird a population; I consider this a new and exciting fact beyond the ones established by previous studies.

by Scott Alexander, Slate Star Codex |  Read more:
Image: SSC
[ed. I asked a friend recently: if you could do it over again, would you still marry the same person? The answer (ambiguously enough) was: yes, if it meant having the children they have now.]

How to Fix Facebook—Before It Fixes Us

Facebook and Google are the most powerful companies in the global economy. Part of their appeal to shareholders is that their gigantic advertising businesses operate with almost no human intervention. Algorithms can be beautiful in mathematical terms, but they are only as good as the people who create them. In the case of Facebook and Google, the algorithms have flaws that are increasingly obvious and dangerous.

Thanks to the U.S. government’s laissez-faire approach to regulation, the internet platforms were able to pursue business strategies that would not have been allowed in prior decades. No one stopped them from using free products to centralize the internet and then replace its core functions. No one stopped them from siphoning off the profits of content creators. No one stopped them from gathering data on every aspect of every user’s internet life. No one stopped them from amassing market share not seen since the days of Standard Oil. No one stopped them from running massive social and psychological experiments on their users. No one demanded that they police their platforms. It has been a sweet deal.

Facebook and Google are now so large that traditional tools of regulation may no longer be effective. The European Union challenged Google’s shopping price comparison engine on antitrust grounds, citing unfair use of Google’s search and AdWords data. The harm was clear: most of Google’s European competitors in the category suffered crippling losses. The most successful survivor lost 80 percent of its market share in one year. The EU won a record $2.7 billion judgment—which Google is appealing. Google investors shrugged at the judgment, and, as far as I can tell, the company has not altered its behavior. The largest antitrust fine in EU history bounced off Google like a spitball off a battleship.

It reads like the plot of a sci-fi novel: a technology celebrated for bringing people together is exploited by a hostile power to drive people apart, undermine democracy, and create misery. This is precisely what happened in the United States during the 2016 election. We had constructed a modern Maginot Line—half the world’s defense spending and cyber-hardened financial centers, all built to ward off attacks from abroad—never imagining that an enemy could infect the minds of our citizens through inventions of our own making, at minimal cost. Not only was the attack an overwhelming success, but it was also a persistent one, as the political party that benefited refuses to acknowledge reality. The attacks continue every day, posing an existential threat to our democratic processes and independence.

We still don’t know the exact degree of collusion between the Russians and the Trump campaign. But the debate over collusion, while important, risks missing what should be an obvious point: Facebook, Google, Twitter, and other platforms were manipulated by the Russians to shift outcomes in Brexit and the U.S. presidential election, and unless major changes are made, they will be manipulated again. Next time, there is no telling who the manipulators will be.

Awareness of the role of Facebook, Google, and others in Russia’s interference in the 2016 election has increased dramatically in recent months, thanks in large part to congressional hearings on October 31 and November 1. This has led to calls for regulation, starting with the introduction of the Honest Ads Act, sponsored by Senators Mark Warner, Amy Klobuchar, and John McCain, which attempts to extend current regulation of political ads on networks to online platforms. Facebook and Google responded by reiterating their opposition to government regulation, insisting that it would kill innovation and hurt the country’s global competitiveness, and that self-regulation would produce better results.

But we’ve seen where self-regulation leads, and it isn’t pretty. Unfortunately, there is no regulatory silver bullet. The scope of the problem requires a multi-pronged approach.

First, we must address the resistance to facts created by filter bubbles. Polls suggest that about a third of Americans believe that Russian interference is fake news, despite unanimous agreement to the contrary by the country’s intelligence agencies. Helping those people accept the truth is a priority. I recommend that Facebook, Google, Twitter, and others be required to contact each person touched by Russian content with a personal message that says, “You, and we, were manipulated by the Russians. This really happened, and here is the evidence.” The message would include every Russian message the user received.

This idea, which originated with my colleague Tristan Harris, is based on experience with cults. When you want to deprogram a cult member, it is really important that the call to action come from another member of the cult, ideally the leader. The platforms will claim this is too onerous. Facebook has indicated that up to 126 million Americans were touched by the Russian manipulation on its core platform and another 20 million on Instagram, which it owns. Together those numbers exceed the 137 million Americans who voted in 2016. What Facebook has offered is a portal buried within its Help Center where curious users will be able to find out if they were touched by Russian manipulation through a handful of Facebook groups created by a single troll farm. This falls far short of what is necessary to prevent manipulation in 2018 and beyond. There’s no doubt that the platforms have the technological capacity to reach out to every affected person. No matter the cost, platform companies must absorb it as the price for their carelessness in allowing the manipulation.

Second, the chief executive officers of Facebook, Google, Twitter, and others—not just their lawyers—must testify before congressional committees in open session. As Senator John Kennedy, a Louisiana Republican, demonstrated in the October 31 Senate Judiciary hearing, the general counsel of Facebook in particular did not provide satisfactory answers. This is important not just for the public, but also for another crucial constituency: the employees who keep the tech giants running. While many of the folks who run Silicon Valley are extreme libertarians, the people who work there tend to be idealists. They want to believe what they’re doing is good. Forcing tech CEOs like Mark Zuckerberg to justify the unjustifiable, in public—without the shield of spokespeople or PR spin—would go a long way to puncturing their carefully preserved cults of personality in the eyes of their employees.

These two remedies would only be a first step, of course. We also need regulatory fixes. Here are a few ideas.

First, it’s essential to ban digital bots that impersonate humans. They distort the “public square” in a way that was never possible in history, no matter how many anonymous leaflets you printed. At a minimum, the law could require explicit labeling of all bots, the ability for users to block them, and liability on the part of platform vendors for the harm bots cause.

Second, the platforms should not be allowed to make any acquisitions until they have addressed the damage caused to date, taken steps to prevent harm in the future, and demonstrated that such acquisitions will not result in diminished competition. An underappreciated aspect of the platforms’ growth is their pattern of gobbling up smaller firms—in Facebook’s case, that includes Instagram and WhatsApp; in Google’s, it includes YouTube, Google Maps, AdSense, and many others—and using them to extend their monopoly power.

This is important, because the internet has lost something very valuable. The early internet was designed to be decentralized. It treated all content and all content owners equally. That equality had value in society, as it kept the playing field level and encouraged new entrants. But decentralization had a cost: no one had an incentive to make internet tools easy to use. Frustrated by those tools, users embraced easy-to-use alternatives from Facebook and Google. This allowed the platforms to centralize the internet, inserting themselves between users and content, effectively imposing a tax on both sides. This is a great business model for Facebook and Google—and convenient in the short term for customers—but we are drowning in evidence that there are costs that society may not be able to afford.

by Roger McNamee, Washington Monthly | Read more:
Image: Chris Matthews 

[ed. If ever there were a flashing red signal... See also: When Speculation Has No Limits and Beware the $500 Billion Bond Exodus.]