Tuesday, January 17, 2017

How Designers Engineer Luck Into Video Games

The responsibilities and challenges of programmed luck.

On Sept. 16, 2007, a Japanese YouTuber who goes by the handle “Computing Aesthetic” uploaded a forty-eight-second-long video with the deafening title, “ULTRA MEGA SUPER LUCKY SHOT.” The video shows a high-scoring shot in Peggle, a vastly popular video game, loosely based on Japanese pachinko machines, in which a ball bearing clatters down the screen, accruing points as it bounces through a crowd of candy-colored pegs, which disappear shortly after being touched; more bounces, more points. Although Peggle involves some skill—before firing the ball, the player must carefully aim the launcher that dangles at the top of the screen—you are principally at the mercy of the luck of the bounce. In Computing Aesthetic’s footage, the points pile up as the ball bounces fortuitously between pegs. To underscore the seemingly miraculous shot, Beethoven’s “Ode to Joy” blares euphorically until, in the video’s final moments, the ball bearing sinks into the bucket at the base of the screen and the words “FEVER SCORE” flash onscreen. The description on the video, which has been watched nearly a quarter of a million times, reads, “I couldn’t balieve this when it happened!!!!!!!!!”

Computing Aesthetic’s video is just one of nearly 20,000 such YouTube clips labelled with the words “Peggle” and “Lucky,” uploaded by players so amazed at their good fortune in the game that they were moved to share the achievement with the world. But these players may not be as lucky as they’ve been led to believe. “In Peggle, the seemingly random bouncing of the balls off of pegs is sometimes manipulated to give the player better results,” Jason Kapalka, one of the game’s developers, admitted to me. “The Lucky Bounce that ensures that a ball hits a target peg instead of plunking into the dead ball zone is used sparingly. But we do apply a lot of extra ‘luck’ to players in their first half-dozen levels or so to keep them from getting frustrated while learning the ropes.” Tweaking the direction of any given bounce by just a few compass degrees—but not so much that the ball swerves unrealistically in mid-air—is enough to encourage beginners and not make the game too unbelievable, Kapalka said.
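[ed. PopCap has never published Peggle's physics code, so the "Lucky Bounce" Kapalka describes can only be guessed at. Below is a minimal Python sketch of the general idea: rotate the ball's velocity a few degrees toward a target peg on each bounce, capped so the swerve stays believable. The function name, the three-degree cap, and the way the target peg is chosen are all assumptions for illustration, not anything from the actual game.]

import math

def nudge_velocity(vx, vy, ball_pos, target_peg, max_degrees=3.0, luck_active=True):
    """Hypothetical 'lucky bounce' assist: steer the ball slightly toward a peg.

    The correction is capped at a few compass degrees so the ball never
    appears to swerve unrealistically in mid-air.
    """
    if not luck_active:
        return vx, vy

    current = math.atan2(vy, vx)                        # direction of travel
    desired = math.atan2(target_peg[1] - ball_pos[1],   # direction to the peg
                         target_peg[0] - ball_pos[0])

    # Smallest signed angle between the two headings, in radians.
    diff = (desired - current + math.pi) % (2 * math.pi) - math.pi

    # Clamp the correction so the nudge stays subtle.
    max_rad = math.radians(max_degrees)
    correction = max(-max_rad, min(max_rad, diff))

    speed = math.hypot(vx, vy)
    new_angle = current + correction
    return speed * math.cos(new_angle), speed * math.sin(new_angle)

# Demo: a ball travelling along +x is turned at most 3 degrees toward an off-axis peg.
print(nudge_velocity(1.0, 0.0, ball_pos=(0.0, 0.0), target_peg=(10.0, 1.0)))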

Fairness is the unspoken promise of most video games. Controlled by an omniscient and omnipotent designer, a video game has the capacity to be ultimately just, and players expect that it will be so. (Designers also have an incentive to be even-handed: A game that always beats you is a game you’ll soon stop playing.) And yet, when video games truly play by the rules, the player can feel cheated. Sid Meier, the designer of the computer game Civilization, in which players steer a nation through history, politics, and warfare, quickly learned to modify the game’s odds in order to redress this psychological wrinkle. Extensive play-testing revealed that a player who was told that he had a 33 percent chance of success in a battle but then failed to defeat his opponent three times in a row would become irate and incredulous. (In Civilization, you can replay the same battle over and over until you win, albeit incurring costs with every loss.) So Meier altered the game to more closely match human cognitive biases; if your odds of winning a battle were 1 in 3, the game guaranteed that you’d win on the third attempt—a misrepresentation of true probability that nevertheless gave the illusion of fairness. Call it the Lucky Paradox: Lucky is fun, but too lucky is unreal. The resulting, on-going negotiation among game players and designers must count as one of our most abstract collective negotiations. (...)
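[ed. Meier's exact implementation has never been published, but the adjustment described above is simple to reconstruct. The Python below is a hypothetical version: it tracks how many battles the player has lost in a row at the stated odds and forces a win once they have lost as many times as those odds should "allow." None of this is Firaxis code; the class and its threshold rule are illustrative guesses.]

import random

class FairFeelingBattles:
    """Hypothetical sketch of combat odds bent toward perceived fairness.

    A true RNG lets a player with a 1-in-3 chance lose three or more battles
    in a row; this version forces a win once the player has lost
    round(1 / p) - 1 consecutive battles at win probability p.
    """

    def __init__(self, rng=None):
        self.rng = rng or random.Random()
        self.loss_streak = 0

    def fight(self, win_probability):
        allowed_losses = max(0, round(1 / win_probability) - 1)  # 2 losses allowed at p = 1/3
        if self.loss_streak >= allowed_losses:
            won = True                                           # the "guaranteed" win
        else:
            won = self.rng.random() < win_probability
        self.loss_streak = 0 if won else self.loss_streak + 1
        return won

# Over many battles the observed win rate ends up above the stated 1-in-3 odds,
# which is exactly the misrepresentation of probability described above.
battles = FairFeelingBattles()
results = [battles.fight(1 / 3) for _ in range(10000)]
print(sum(results) / len(results))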

In mechanical games, luck is the player’s saving grace against the mechanism itself. In the early 1950s, the Chicago-based pinball manufacturer Gottlieb noticed that novice pinball players would occasionally lose a ball in the first few moments of a game. So it introduced an inverted V-shaped metal wall that, during a game’s opening seconds, would rise between the flippers at the base of the machine in order to keep an errant ball from disappearing down the gulley. In newer pinball machines, the blocking gate, known as a “ball saver” (a phrase invented by Chicago Coin for its 1968 pinball machine, Gun Smoke), is controlled by software; whether the wall rises or not is a matter of luck, of a kind that has been engineered into the algorithm.

In fully digital video games, luck is even more deeply baked into the experience, and must be actively simulated. When the soccer ball sails past the goalkeeper in FIFA, or when, inexplicably, a herd of race cars slows down to allow you to catch up, a game designer’s hand has just acted to provide some ghostly rigging. The effect of this manipulation is to flatter you and thereby keep you engaged. But it’s a trick that must be deployed subtly. A player who senses that he’s secretly being helped by the game will feel patronized; after all, luck is only luck if it’s truly unpredictable.

Which is where the problems begin.

by Simon Parkin, Nautilus |  Read more:
Image: Wren McDonald

The Death of the Tunnel Tree

Early last Monday morning, a friend of mine sent news that a tree we knew, a sequoia, had collapsed in a winter mountain storm. I was in New York, where two inches of hard snow sat on cars and tree branches that themselves looked like death. He was in Northern California, near the place where we grew up. No one is certain of the fallen tree’s age, but it is thought to have lived at least a thousand years. Any tribute I could give it would be fatuous; the tree was older than the language in which I can write.

The tree meant something more time-bound to humans, though, and, like a playboy worn down by the party circuit, bore the traces of a personable past. Giant sequoias are believed to be the largest living things on Earth by volume. They are tall with short branches, and wear mantles of thick, russet bark that feels like Styrofoam and has the soft curves of poured wax. This one had a huge hole in its base—about ten feet tall, and even wider—that was carved in the eighteen-eighties. The idea was to let you walk not just around the tree but through it, making it a kind of skyscraper, a place in the forest where people could dwell. Over the years, the hollowed-out sequoia came to be called the Pioneer Cabin Tree, like a built thing, or the tunnel tree, like an essential piece of infrastructure. What was really meant was that it was our tree, our human tree, the one we singled out and marked with the illusions of our time. Its hollow had been razored with initials, and its wood had the polish of frequent touch. When the dusty, ferny mountain forest became Calaveras Big Trees State Park, in 1931, the tunnel tree emerged as a centerpiece, the California mountains’ Tour Eiffel.

In death, it was more. The A section of the Times, a paper not traditionally much concerned with California flora, gave the tree more than ten inches of space. The Los Angeles Times called it “iconic.” I watched the coverage with the media-age awkwardness of someone trying to feel the touch of death from a great distance. No one knew quite what to say, it seemed, and, although we all felt some vague measure of loss, it was unclear what to think about a life that had lasted longer than all memory. In the way of human grief, I want, instead of honoring the tree directly, to conjure up the world in which it was a monument for me. (...)

Most summers, as long as I remember, my family has rented, for a week, a cabin in the middle Sierras just off Highway 4. It’s quiet there, and inexpensive, and there aren’t a lot of Jet Skis on the water. When we started going up to Calaveras—that’s the family expression, “up to Calaveras”—it was because that’s where my mother’s parents took her. Later, my mother began urging other families to take cabins nearby. We arranged big dinners on creaking wood decks and ate grilled chicken in the light of citronella candles. When the summer meteor showers came, we’d lie on empty roads and watch the stars. The rented cabins would invariably be A-frames in the style of the high Carter Administration (ski-lodge shag carpet, macramé owls on the walls), and you would will yourself to sleep despite fears that a giant spider was about to leap down from the eaves. These unfamiliar terrors made the weeks seem long and sweet. One August, we were nearly washed off some boulders and downriver in an unexpected thunderstorm; the next July, I floated with the special harmony of adolescent lassitude across a lake, on an air mattress, with friends. I was fifteen, and it was the night when, by the prophecies of Nostradamus, the world was sure to end. That it didn’t end then, or the evening after, taught me something about wise men. That I’d felt at peace with the apocalypse—I was confident my fifteen-year-old life had a pleasant roundness, even a fulfillment—teaches me today how poorly we can see beyond the near horizon of experience. Visiting a place again and again, year after year, annunciates the slow progress of human growth. A kid that you recall shows up, abruptly, with the problems and the powers of a woman or a man.

That’s what the tunnel tree in Big Trees State Park meant to me: the function of eternity to graduate the progress of a life. The first time I saw the tree, I was about five, and my family took a photo in its hollow. We took another photo the next time we visited, and again after that. Over the years, I’ve been back probably twenty times, and a catalogue of imagery—first film, then digital—marks my family’s slow, peculiar progress. We look heartbreakingly small. The tree is really very big. That record ended this week, and I cannot shake the feeling that a certain vector of our history ended then as well.(...)

The temptation is to herald the tunnel tree’s death as an emblem. (The Times, in a second piece, presented its collapse as a symbol of this dire American season.) It is also easy, maybe just, for humans to take blame. Although trees often fall in storms, sequoias are equipped for the long haul—their stance is wide; their bark is fire-resistant—and a spell of winter weather is unlikely to have felled the tunnel tree without the huge, destabilizing chasm near its roots. We made the tree our own and, in the process, took away its immortality. It experienced time as few sequoias can, through human eyes: with friendship, wounds, some fame, and death.

by Nathan Heller, New Yorker |  Read more:
Image: Calif. State Parks/AFP/Getty

Monday, January 16, 2017

Forgiveness Is Not a Binary State

In the summer of 2015, less than a week after Dylann Roof murdered nine black churchgoers in Charleston, South Carolina — a crime for which, yesterday, he was sentenced to death — the relatives of his victims arrived at his first court appearance with a surprising message: forgiveness.

“We have no room for hating, so we have to forgive,” said one woman whose brother was killed in the attack. Another, whose mother was among the deceased, expressed similar sentiments: “You took something very precious from me,” she said, “but I forgive you.”

And a few days after that, writer Roxane Gay published a moving op-ed in the New York Times explaining why she did not. “I have no immediate connection to what happened in Charleston, S.C., last week beyond my humanity and my blackness,” she wrote, “but I do not foresee ever forgiving his crimes, and I am wholly at ease with that choice.”

Together, these two reactions — both powerful, both valid, diametrically opposed to each other — raise an important question about the slippery concept at their center. Forgiveness, clearly, is a highly personal choice, speeding healing for some and precluding healing for others. But what does it even mean to forgive, anyway?

It’s something we haven’t been asking ourselves for very long — it wasn’t until 1989 that psychologists even started to really study forgiveness — but psychologist Harriet Lerner believes we’ve been too hasty to rush into an answer. In her new book Why Won’t You Apologize?: Healing Big Betrayals and Everyday Hurts, Lerner argues that we’re flying blind: Academic research and conventional wisdom alike emphasize the positive effects of forgiveness without having reached any clear consensus as to what the act of forgiving really looks like.

“When I read the literature on forgiveness, I found myself confused. And it took me a while to sort out that the confusion was not mine, and had more to do with the way that forgiveness is talked about and written about,” she says. “What I began to be aware of is that the forgiveness experts were collapsing the messy complexity of human emotions into simplistic dichotomous equations, like you either forgive the wrongdoer or you’re a prisoner of your own anger and hate. Either you forgive, or your life will be mired down in corrosive emotions and you’ll never move forward.” The reality, she says, is that forgiveness is rarely so tidy — and that placing too much faith in its powers can actually harm, rather than help.

Forgiveness isn’t always a good idea.

Scientific literature is chock-full of ways that forgiveness can improve your mental and physical health: It can ease anxiety and depression, cut down on your risk of heart attack, even help you live longer. Letting go of a grudge, it seems, may be up there with exercising and getting enough sleep as one of the best things you can do for yourself.

But the problem with this framing, Lerner says, is that it can push people into extending the olive branch before they’re ready, turning forgiveness from a personal choice into something closer to an obligation: the emotional equivalent of eating your vegetables. And if you can’t bring yourself to do it, you’re going to feel all the worse.

“It’s a terribly hurtful thing to put forth the notion, which is everywhere, that there can be no peace or healing without forgiveness,” she says. “To suggest that the only way out of their unhappiness is that they have to transcend their legitimate anger and pain … It’s not anybody’s place — not your therapist, or your minister, or your coach, or Facebook, or whatever — it’s no one else’s job to tell you to forgive or not to.”

Lerner offers an example of what happens when they do: “[Let’s say] the hurt party opens a conversation with their mother about some earlier neglect or injustice. And the mother says, ‘I’m really sorry, what I did was wrong, do you forgive me?’” Most of the time, she says, “The impulse is to say, ‘I forgive you,’ because they’re so relieved the mother has acknowledged the harm. But the problem is that forgiveness takes its own time to hold.”

And if the hurt party can’t actually bring themselves to forgive, one of two things happens: On the one hand, they could power through, accept the apology anyway, and then grapple with lingering feelings of anger that now feel invalidated. Or, on the other hand, “If the hurt party says, ‘I don’t forgive you, I need more time,’ very often the hurt party becomes the bad guy. And the wrongdoer feels self-righteous because they’re angry the other person isn’t saying ‘I forgive you,’ and blame is shifted to the one who doesn’t forgive.” True forgiveness, she says, is something you earn, and something you wait for. It isn’t something you can request — because if you have to ask, odds are you won’t be getting the real thing.

by Cari Romm, Science of Us | Read more:
Image: Dennis Hallinan/Getty Images

All Bets Are Off

The bizarre saga of potential Russian interference in the 2016 presidential election has created a genuine emergency in American politics. This isn’t necessarily because of Russia’s actual actions — unless the most peculiar allegations turn out to be accurate — but because of Donald Trump’s response, and what this indicates about how he’ll govern.

Ignore the Trump “dossier” for the moment and forget the baseless conjecture about Russia hacking the U.S. voting process itself. All we need to know about Trump and the Republican Party can be found in their position on the simplest, most plausible part of the story: that Russia was behind the hacked emails from the Democratic National Committee, Democratic Congressional Campaign Committee and John Podesta.

Is this in fact what happened? Certainly the Obama administration did itself no favors by failing to release any of the evidence underlying the strong conclusions in the Office of the Director of National Intelligence’s report. But Trump himself said at last week’s press conference, presumably based on a classified briefing, that “I think it was Russia.” Mike Pompeo, Trump’s nominee to run the Central Intelligence Agency, agreed during his confirmation hearings. There’s also the crucial dog that hasn’t barked: Unlike during the lead-up to the Iraq War, no one from the intelligence agencies has been leaking doubts or claims that they’re being leaned on by the White House to provide the desired conclusion.

Under these circumstances, the reaction of anyone who actually cares about the United States has to be: We must investigate this with great seriousness and impartiality and find out exactly what happened. This requires an independent commission with sufficient funding, a broad mandate and legal authority that Congress creates but then can no longer influence.

Nothing should be less controversial than this. Whatever a nation’s political disagreements, in any functioning democracy there’s just one position on this issue: Only citizens can participate in deciding who governs it.

In every other circumstance Republicans love wrapping themselves in the flag and vowing to protect us from dastardly foreigners, even if this requires renaming the french fries in the congressional cafeteria. Few do this more than Trump himself, whose entire campaign was about the apocalyptic danger posed to us by China, Mexico, the freeloaders of NATO, Muslims from anywhere, and so on. Yet on the subject of Russia and this election he’s suddenly indifferent — even though fear of this type of foreign influence doesn’t require jingoistic xenophobia but just a rational, healthy belief in small-d democratic self-determination.

This is one of the key topics of George Washington’s 1796 Farewell Address, the most famous political rhetoric in American history until the Gettysburg Address. “Against the insidious wiles of foreign influence,” Washington warned, “the jealousy of a free people ought to be constantly awake, since history and experience prove that foreign influence is one of the most baneful foes of republican government.”

Washington was particularly concerned by the “common and continual mischiefs of the spirit of party” – that is, loyalty to your own faction within the country above the country overall. This, he said, “opens the door to foreign influence and corruption, which finds a facilitated access to the government itself through the channels of party passions” and allows other countries to “practice the arts of seduction, to mislead public opinion, to influence or awe the public councils.”

Trump and the GOP are now busy proving how prescient Washington was. Trump has not endorsed an independent investigation of any Russian actions aimed at the election, nor released the financial information that would clarify any business relationships he has with Russians or Russian banks. Moreover, he can’t even bring himself to pretend in public that any of it matters much (although it’s hard to tell whether this is because he fears we’ll find out something nefarious he did or simply because his ego can’t bear his victory being thrown into doubt). Of all of Trump’s violations of basic democratic norms, his indifference to this most basic principle of self-government is the most shocking of all.

by Jon Schwarz, The Intercept | Read more:
Image: Andrew Harrer/Bloomberg via Getty Images

Physicists Will Soon Rule Silicon Valley

It's a bad time to be a physicist.

At least, that’s what Oscar Boykin says. He majored in physics at the Georgia Institute of Technology, and in 2002 he finished a physics PhD at UCLA. But four years ago, physicists at the Large Hadron Collider in Switzerland discovered the Higgs boson, a subatomic particle first predicted in the 1960s. As Boykin points out, everyone expected it. The Higgs didn’t mess with the theoretical models of the universe. It didn’t change anything or give physicists anything new to strive for. “Physicists are excited when there’s something wrong with physics, and we’re in a situation now where there’s not a lot that’s wrong,” he says. “It’s a disheartening place for a physicist to be in.” Plus, the pay isn’t too good.

Boykin is no longer a physicist. He’s a Silicon Valley software engineer. And it’s a very good time to be one of those.

Boykin works at Stripe, a $9-billion startup that helps businesses accept payments online. He helps build and operate software systems that collect data from across the company’s services, and he works to predict the future of these services, including when, where, and how the fraudulent transactions will come. As a physicist, he’s ideally suited to the job, which requires both extreme math and abstract thought. And yet, unlike a physicist, he’s working in a field that now offers endless challenges and possibilities. Plus, the pay is great.

If physics and software engineering were subatomic particles, Silicon Valley has turned into the place where the fields collide. Boykin works with three other physicists at Stripe. In December, when General Electric acquired the machine learning startup Wise.io, CEO Jeff Immelt boasted that he had just grabbed a company packed with physicists, most notably UC Berkeley astrophysicist Joshua Bloom. The open source machine learning software H2O, used by 70,000 data scientists across the globe, was built with help from Swiss physicist Arno Candel, who once worked at the SLAC National Accelerator Laboratory. Vijay Narayanan, Microsoft’s head of data science, is an astrophysicist, and several other physicists work under him.

It’s not on purpose, exactly. “We didn’t go into the physics kindergarten and steal a basket of children,” says Stripe president and co-founder John Collison. “It just happened.” And it’s happening across Silicon Valley. Because structurally and technologically, the things that just about every internet company needs to do are more and more suited to the skill set of a physicist.

The Naturals

Of course, physicists have played a role in computer technology since its earliest days, just as they’ve played a role in so many other fields. John Mauchly, who helped design the ENIAC, one of the earliest computers, was a physicist. Dennis Ritchie, the father of the C programming language, was too.

But this is a particularly ripe moment for physicists in computer tech, thanks to the rise of machine learning, where machines learn tasks by analyzing vast amounts of data. This new wave of data science and AI is something that suits physicists right down to their socks.

by Cade Metz, Wired |  Read more:
Image: Einstein's Zurich Notebook

Thursday, January 12, 2017

Resistance to the Antibiotic of Last Resort Is Silently Spreading

[ed. I've been keeping track of mcr-1 for a while now and surprised it hasn't gotten more coverage. There's a possible world pandemic in the making and no one seems to have noticed.]

The alarm bells sounded on November 18, 2015.

Antibiotic resistance is usually a slow-moving crisis, one of the reasons its danger can be hard to convey. One by one, over the years, the drugs used to fight the most stubborn infections have fallen by the wayside as bacteria have evolved resistance to them. For certain infections, the only drug left is colistin. Then on November 18, 2015, scientists published a report in the British medical journal The Lancet: A single, easily spreadable gene makes the bacteria that carry it resistant to colistin, our antibiotic of last resort.

Chinese scientists had found this gene, called mcr-1, in pig farms and on meat in supermarkets. Why pigs? Herein lies the irony. Colistin is an old drug and, by modern standards, not a great one. It can cause severe kidney damage. As scientists developed better antibiotics over the decades, colistin fell out of human use. So in China, farmers started using it by the tons in animals, where low doses of antibiotics can promote growth.

Now it’s come full circle. Bacteria have evolved resistance to so many of those “better” antibiotics that colistin is critical for human health again. China didn’t use colistin in humans, but many countries including the U.S. do as a last resort.

Even more worrisome in the Lancet report was evidence that mcr-1 had already leapt from pigs to humans. Out of 1322 patient samples from hospitals in China, the team found 16 containing mcr-1. And, of course, drug-resistant bacteria don’t respect national borders. As the team was writing up its report, it noticed other researchers had uploaded genomes of bacteria in Malaysia containing the mcr-1 gene sequence to an online database. “The possibility that mcr-1-positive E. coli have spread outside China and into other countries in southeastern Asia is deeply concerning,” the authors warned.

To be clear, these E. coli with mcr-1 found in China were still susceptible to antibiotics other than colistin, but if a bacterium with genes that make it resistant to every other drug then picks up mcr-1, you get the nightmare scenario: a pan-resistant bacterium. These drug-resistant infections usually happen in people who are already sick or have weakened immune systems. (...)

The story of mcr-1’s silent spread is, by now, a familiar one. Over and over, scientists have identified genes conferring resistance to a class of antibiotics, only to find the gene had circled the globe. Another recent example is ndm-1, a gene found in 2009 that confers resistance to a class of antibiotics called carbapenems. “It’s very rare to catch something at the very beginning,” says Alexander Kallen, a medical epidemiologist with the Centers for Disease Control and Prevention. Looking for resistance is a constant game of catch-up. You don’t notice anything until there is something to notice; by the time there is something to notice, something bad has already happened. And you have to have eyes everywhere: Resistance initially found on a Chinese pig farm could have repercussions all over the world.

When Timothy Walsh, a microbiologist at Cardiff University, first heard of mcr-1’s existence from his colleague Yang Wang of China Agricultural University, he didn’t believe it. “It’s like the holy grail of resistance,” says Walsh. He was also skeptical because of the way colistin works. The antibiotic binds to molecules on the surface of bacteria—and modifying those molecules typically requires mutations in several different genes. Instead of getting lucky just once, a bacterium would have to get lucky several times to beat off colistin. In fact, other researchers had identified colistin-resistant bacteria before, which had multiple mutations in the DNA in their chromosomes.

But mcr-1 was just one gene. And more importantly, it didn’t live on chromosomes, which are tightly wound pieces of DNA. The mcr-1 gene sits on a little loop of free-floating DNA called a plasmid, which bacteria—even bacteria of different species—can easily swap like bracelets. That makes mcr-1 much easier to spread. A single bacterium might collect multiple plasmids with multiple genes for resistance to multiple antibiotics. Scientists have not yet found bacteria with mcr-1 that are resistant to all antibiotics, but don’t make the mistake of optimism. “It’s not a case of ‘if.’ It’s a case of ‘when,’” says Walsh.

by Sarah Zhang, The Atlantic |  Read more:
Image: Ho New / Reuters

The xx

Monday, January 9, 2017

Taking a Break


[ed. I'm traveling so posts will be sporadic for a while. It also seems like a good time to share a personal milestone here at Duck Soup - half a million pageviews. I know, nothing like what the Death Star (Facebook) probably gets in 5 seconds. But still, a satisfying achievement nonetheless, and a good reason to thank this blog's loyal readers and contributors. The best way I can describe blogging is that it's like having your own little radio station in the middle of nowhere. There's this compulsion to put things out that you find interesting, regardless of who's listening. Hopefully, someone will get something out of it.

Clive Thompson wrote a book a while ago called Smarter Than You Think: How Technology Is Changing Our Minds for the Better. Talking about the satisfactions of blogging, one of the observations he made was: "Many people have told me that they feel the dynamic kick in with even a tiny handful of viewers. I’d argue that the cognitive shift in going from an audience of zero (talking to yourself) to an audience of 10 (a few friends or random strangers checking out your online post) is so big that it’s actually huger than going from 10 people to a million." I can certainly agree with that. Half a million is a nice metric, but the biggest motivation is still just providing some value to someone regardless of the numbers. So, thanks again to everyone and I hope to be back soon. In the meantime, check out the archives.]

Image via:

What’s Killing the World’s Shorebirds?

Four gun-toting biologists scramble out of a helicopter on Southampton Island in northern Canada. Warily scanning the horizon for polar bears, they set off in hip waders across the tundra that stretches to the ice-choked coast of Hudson Bay.

Helicopter time runs at almost US$2,000 per hour, and the researchers have just 90 minutes on the ground to count shorebirds that have come to breed on the windswept barrens near the Arctic Circle. Travel is costly for the birds, too. Sandpipers, plovers and red knots have flown here from the tropics and far reaches of the Southern Hemisphere. They make these epic round-trip journeys each year, some flying farther than the distance to the Moon over the course of their lifetimes.

The birds cannot, however, outfly the threats along their path. Shorebird populations have shrunk, on average, by an estimated 70% across North America since 1973, and the species that breed in the Arctic are among the hardest hit. The crashing numbers, seen in many shorebird populations around the world, have prompted wildlife agencies and scientists to warn that, without action, some species might go extinct.

Although the trend is clear, the underlying causes are not. That’s because shorebirds travel thousands of kilometres a year, and encounter so many threats along the way that it is hard to decipher which are the most damaging. Evidence suggests that rapidly changing climate conditions in the Arctic are taking a toll, but that is just one of many offenders. Other culprits include coastal development, hunting in the Caribbean and agricultural shifts in North America. The challenge is to identify the most serious problems and then develop plans to help shorebirds to bounce back.

“It’s inherently complicated — these birds travel the globe, so it could be anything, anywhere, along the way,” says ecologist Paul Smith, a research scientist at Canada’s National Wildlife Research Centre in Ottawa who has come to Southampton Island to gather clues about the ominous declines. He heads a leading group assessing how shorebirds are coping with the powerful forces altering northern ecosystems. (...)

Shorebirds stream north on four main flyways in North America and Eurasia, and many species are in trouble. The State of North America’s Birds 2016 report, released jointly by wildlife agencies in the United States, Canada and Mexico, charts the massive drop in shorebird populations over the past 40 years.

The East Asian–Australasian Flyway, where shorelines and wetlands have been hit hard by development, has even more threatened species. The spoon-billed sandpiper (Calidris pygmaea) is so “critically endangered” that there may be just a few hundred left, according to the International Union for Conservation of Nature.

Red knots are of major concern on several continents. The subspecies that breeds in the Canadian Arctic, the rufa red knot, has experienced a 75% decline in numbers since the 1980s, and is now listed as endangered in Canada. “The red knot gives me that uncomfortable feeling,” says Rausch, a shorebird biologist with the Canadian Wildlife Service in Yellowknife. She has yet to find a single rufa-red-knot nest, despite spending four summers surveying what has long been considered the bird’s prime breeding habitat.

The main problem for the rufa red knots is thought to lie more than 3,000 kilometres to the south. During their migration from South America, the birds stop to feed on energy-rich eggs laid by horseshoe crabs (Limulus polyphemus) in Delaware Bay (see ‘Tracking trouble in the Arctic’). Research suggests that the crabs have been so overharvested that the red knots have become deprived of much-needed fuel.

by Margaret Munro, Nature | Read more:
Image: Malkolm Boothroyd

A Trip of One’s Own

[ed. See also: Want More Productivity? Be Careful What You Wish For, and Micro-dosing: The Drug Habit Your Boss Is Gonna Love]

One day, while driving home to Berkeley after a poorly attended reading in Marin County, Ayelet Waldman found herself weighing the option of pulling the steering wheel hard to the right and plunging off the Richmond Bridge. “The thought was more than idle, less than concrete,” she recalls, “and though I managed to make it across safely, I was so shaken by the experience that I called a psychiatrist.” The doctor diagnosed her with a form of bipolar disorder, and Waldman began a fraught, seven-year journey to alter her mood through prescription drugs, a list so long that she was “able to recite symptoms and side effects for anything … shrinks might prescribe, like the soothing voice-over at the end of a drug commercial.” She was on a search for something, anything, that would quiet the voices, the maniac creativity, the irritable moods that caused her to melt down over the smallest mistakes. That’s when she began taking LSD.

Lysergic acid diethylamide is in the midst of a renaissance of sorts, a nonprescription throwback for an overmedicated generation. As pot goes mainstream—the natural solution to a variety of ills—LSD is close behind, in popularity if not legality. By 1970, two years after possession of LSD became illegal, an estimated two million Americans had used the drug; by 2015, more than 25 million had. In A Really Good Day: How Microdosing Made a Mega Difference in My Mood, My Marriage, and My Life, Waldman explores her own experience of taking teeny, “subtherapeutic” doses of the drug. This “microdose,” about a tenth of your typical trip-inducing tab, is “low enough to elicit no adverse side effects, yet high enough for a measurable cellular response.” Her book is both a diatribe and diary. She offers a polemic on a racist War on Drugs that allows her, a middle-class white woman, to use illegal substances with ease, as well as a daily record of the improved mood and increased focus she experiences each time she takes two drops of acid under the tongue. Microdosing advocates argue that LSD is a safer and more reliable alternative to many prescription drugs, particularly those intended to treat mood disorders, depression, anxiety, and ADHD. Respite is what Waldman is chasing, a gradual tempering, drop by drop, of our fractured, frazzled selves. If the 1960s were about touching the void, microdosing is about pulling back from it.

I’d been on prescription antidepressants for about a year when I opened Waldman’s book. To say that mental illness runs in my own family would be an understatement. After listening to a very abridged version of my family medical history, my psychiatrist called me the “poster child for mental health screenings before marriage.” My sister, gripped with undiagnosed postpartum psychosis, once fantasized, as Waldman did, about driving off a bridge with her infant daughter in the car, and my mother killed herself by overdosing on OxyContin and other legal drugs a month before I graduated from college. Battle is the stock verb of illness—we battle cancer, depression, and addiction. But I cannot in good conscience say I battle my depression and anxiety. Rather, my madness and I are conjoined twins, fused at the head and hip: Together always, we lurch along in an adequate, improvised shuffle.

Like Waldman, I worry about the negative effects of taking an SSRI long-term. The daughter of hippies, a flower grandchild, I don’t trust the pharmaceutical industry to prioritize my wellness over their profits. I’ve long agreed with Waldman that “practitioners, even the best ones, still lack a complete understanding of the complexity and nuance both of the many psychological mood disorders and of the many pharmaceuticals available to treat them.” So when I finished the prologue to A Really Good Day, I set the book down and left my therapist a voicemail announcing my plan to wean myself off Celexa. Then I went on reading. I did not mention the new-old mystic’s medicine beckoning me—the third eye, the open door.

It’s surprisingly simple to get LSD. I asked a few friends, who asked a few of their friends, and the envelope arrived just a few days later with a friendly, letter-pressed postcard. Spliced into the card, via some impressive amateur surgery, was a tiny blue plastic envelope. Inside that was a piece of plain white paper divided with black lines into ten perfect squares: ten tabs of acid, 100 microdoses at a dollar each. (...)

If cocaine kept Wall Street humming at all hours in the 1980s, LSD today keeps the ideas flowing in Silicon Valley’s creative economy, solving problems that require both concentration and connectedness. Microdosing is offered as an improvement over Adderall and Ritalin, the analog ancestors of modern-day smart drugs. Old-school ADHD methamphetamines, it would seem, clang unpleasantly against Silicon Valley’s namaste vibe. Today’s microdosers “are not looking to have a trip with their friends out in nature,” an anonymous doser recently explained to Wired. “They are looking at it as a tool.” One software developer speaks of microdosing as though it were a widget one might download for “optimizing mental activities.” The cynic’s working definition might read, “microdose (noun): the practice of ingesting a small dose of a once-countercultural drug that made everyone from Nixon to Joan Didion flinch in order to make worker bees more productive; Timothy Leary’s worst nightmare; a late-capitalist miracle.”

Productivity is not Waldman’s purpose—pre-LSD, she could write a book in a matter of weeks—but neither is non-productivity, the glazed-over stoner effect. Waldman is instead insistent on the therapeutic value of microdosing. There is nothing, it seems, that LSD isn’t good for, no worry it can’t soothe, no problem it can’t solve. Once an afternoon delight of recreational trippers and high-school seniors, LSD has become a drug of power users: engineers, salesmen, computer scientists, entrepreneurs, writers, the anxious, the depressed. The trip isn’t the thing; instead, microdosing helps maintain a fragmented, frenzied order, little by little, one day at a time.

by Claire Vaye Watkins, TNR |  Read more:
Image: Tran Nguyen

Sunday, January 8, 2017

Since It Can't Sue Us All, Getty Images Embraces Embedded Photos

[ed. I see Getty stock images every day. They're all over the net, so I began wondering about the company's business model. See also: Since It Can't Sue Us All, Getty Images Embraces Embedded Photos and Photographer Suing Getty for 1 Billion.]

Many companies are founded on inspiration or imitation. But Mark Getty, a grandson of the billionaire oilman J. Paul Getty, and Jonathan Klein, an investment banker who had been Getty’s boss, started theirs with a checklist. Tired of crafting deals for others, the pair came up with strict criteria for their own dream business. It had to be global, operating in a fragmented industry ready for consolidation, and on the cusp of change. And the less risk the better. “We didn’t want to fix something that was broken,” says Klein.

Although the business they started in 1995, Getty Images, didn’t have a big idea behind it like a Twitter or Facebook, it’s proven just as revolutionary. If Google is essential to navigating the Web, Getty has become essential to visualizing it. Cobbled together through acquisitions, Getty is the world’s largest photo and video agency, and its database of 80 million images is the raw material from which many of the Web’s slide shows and photo galleries are made. A search for its images of happy people, for instance, turns up 626,317 results. That depth allows Getty to license its image trove online to all manner of bloggers and websites, businesses small and large, advertisers, newspapers, and magazines (including this one).

With annual revenue approaching $1 billion, according to Getty, it’s become a media business too important to ignore. Carlyle Group certainly took notice: On Aug. 15, the private equity firm agreed to buy majority ownership of Getty from another private equity investor, Hellman & Friedman, in a deal that values the company at $3.3 billion. The Getty family and Klein will own the rest.

Mark Getty and Klein, a native of South Africa, were working at Hambros Bank in London when they stumbled upon the stock photo business. “It was a cottage industry, it didn’t have any business discipline,” says Klein, now chief executive officer (Getty is chairman). “It was run by and for photographers.”

They began by acquiring the premier photo library in Britain, Tony Stone Images, for $30 million. Since then, Getty has bought more than 100 other photo collections and companies. It went public in 1996, and three years later moved its headquarters to Seattle to be closer to the tech community and many of its customers. “They came in with a super-simple strategy and held to that,” says Stephen Mayes, who had worked at Tony Stone, stayed at Getty until 1998, and is now managing director of the photographer-owned VII Photo Agency. “They are not inherently entrepreneurial. They take good businesses and make them better.” Giorgio Psacharopulo, CEO of Magnum Photos, the cooperative started by Henri Cartier-Bresson and others, says: “We’re in awe of what Getty Images has been able to accomplish. But, like Wal-Mart, they operate at a scale that makes it difficult for smaller agencies to exist.”

Getty was early in recognizing the digital revolution’s impact on photography. In 1998, Getty acquired PhotoDisc, the first company to figure out how to sell photos in a digital format. “They always recognized the business eating their lunch and bought it,” says David Walker, executive editor of Photo District News, a trade publication. By 2001, Getty’s entire business was digital.

The bigger disruption came once digital cameras, and then mobile phones, began producing high-quality images—making everybody a potential photographer. By 2005, a company called iStockphoto emerged as the leading source of crowdsourced (otherwise known as amateur) images. Its average price for a photo was $2 to $3. Getty bought it for $50 million in 2006. “A lot of people thought we were cannibalizing our business,” Klein says. “But sometimes it’s perfectly legitimate to use a $5 picture.”

Getty’s move into the microstock business, as it’s called, came as the media and advertising industries were contracting and the Web was expanding. Soon Getty’s business model was turned upside down: While it started out providing expensive images for limited use to a small group of customers, now it also provides cheaper images for broad use to a big group of customers. Explains Klein: “The fundamental change is how and where pictures are being used.” Before Getty bought iStockphoto, it had some 150,000 customers a year. Now it has 1.3 million. “About 900,000 of them are small and medium-sized businesses, many of whom weren’t using images legally or at all,” Klein says. Fifteen years ago, Getty uploaded a few hundred photos a day; now it uploads tens of thousands. Getty used to license or sell 100,000 images a year; today it’s 30 million to 40 million.

by Susan Berfield, Bloomberg | Read more:
Image via:

Why We Can't Fix Twitter

Amtrak once asked a focus group what kind of food they wanted in the train’s cafe car. One participant requested more healthy choices, like salad and fruit. The person running the focus group said something like, “People always say they want the salad. Then they buy the cheeseburger.”

Today’s social media environment faces a similar paradox. It’s fashionable to complain about the low quality of Twitter conversations. We bemoan trolls, flame wars and the lack of nuance inherent in 140-character statements. Occasionally some high-profile tweeter will publicly declare that they are done with the platform, as the writer Lindy West did this week in an article titled: “I’ve left Twitter. It is unusable for anyone but trolls, robots and dictators.”

Twitter CEO Jack Dorsey recently asked his followers to suggest ideas for improvement. He got plenty of recommendations, such as an edit button so users could fix erroneous or ill-considered tweets. Other suggestions included a bookmark button and improved reporting options for bullying.

It’s unclear if these kinds of changes will improve the quality of Twitter discourse. What they won’t fix is the company’s cafe car problem. We say that we want more civil, thoughtful dialogue. But do we really?

Imagine that a Silicon Valley start-up created an online discussion platform precisely to address this problem. There would be no trolls or shouting matches. Shrill sound bites would be replaced by measured conversations. Users would span the political spectrum, allowing for civil exchanges among people with different views.

“Wow,” you’d probably say. “The world needs a platform like that, especially right now!” You’d sign up. Then, you’d go right back to Twitter.

How do I know this? Because we created that alternative platform. It was an online discussion forum called Parlio, and its chief purpose was to host civil, thoughtful conversations. Parlio was founded by Wael Ghonim, the former Google executive best known for running the Facebook page that helped spark the 2011 Egyptian revolution. As the euphoria over the revolution faded, Ghonim found that social media only amplified polarization. “The same tool that united us to topple dictators eventually tore us apart,” he said. In 2015 he and Osman Osman, another former Googler, launched Parlio, and I became chief strategy officer.

The user experience was straightforward. A member would post a short piece of writing, or maybe a link to an article. Then, other members would discuss it. We also hosted Q&As. But Parlio’s culture was markedly different from other social media platforms. It was intended for conversation, not mass broadcasting. You had to be invited to post, but anyone could be a reader. New members signed a civility pledge, and we had a zero-tolerance policy toward trolls.

Parlio built a small but devoted following, including thought leaders from media, academia and business. We hosted remarkably civil conversations about divisive issues like race, terrorism, refugees, sexism and even Donald Trump’s candidacy for president. Author Max Boot wrote in Commentary: “I find that I’m using Parlio more because I can find a more reasoned engagement there than I do on Twitter. Parlio is not, of course, going to threaten Twitter’s business anytime soon, but it is an augury of what can happen if Twitter doesn’t address the problem of anonymous hate-speech that is poisoning its user community.” Tom Friedman penned a New York Times column about Parlio’s attempt to create a new social media experience, writing, “I participated in a debate on Parlio and found it engaging and substantive.”

While people loved the idea of Parlio, we weren’t sure how quickly we could bring it to scale. Last year we joined forces with Quora, which had just reached 100 million monthly users. I am proud of what we created at Parlio, and I also learned a lot about user behavior. The main takeaway is that the social media experience that people say they want is often different from the one that they actively pursue. Here are some of the main challenges to building a civil, thoughtful social media platform:

We’re addicted to the promise of going viral

Say you’re a journalist, and you just published a big article. You have two options for engagement. The first is to receive a relatively small number of comments and questions from informed and influential people, including top thinkers in your field. Option two is a flood of Twitter mentions. Some will be smart, but many will be rants from complete strangers. We might think that we want option one. But deep down, we can’t give up the thrill of option two.

Any Twitter pundit with a large following is familiar with that thrill. It’s that moment right after your provocative statement starts ricocheting across the internet. Your feed explodes with new mentions and your followers dramatically increase. You have no idea who most of these people are, or even if they are real people, but you feel like a rock star. If your tweet goes really viral, you might get on TV. Maybe you will be invited to write an op-ed expanding on your tweet, even though 140 characters were all you had to say on the matter.

Generally speaking, Parlio couldn’t offer that experience. In part because we didn’t have the numbers, but also because our content was not particularly conducive to virality. Often what go viral are antagonistic declarations that are unburdened by nuance. Our president-elect is a master of such statements, which is why Twitter has been such a powerful tool for spreading his message.

Parlio did a decent job of delivering option one, however. Authors would come to Parlio to discuss articles they had written elsewhere. Some of those posts attracted high-quality engagement that is very difficult to find in online commenting sections, and the authors would be delighted. But the next time they wrote an article, sometimes those same authors would skip Parlio and post it on Twitter. The next section helps explain why.

by Emily Parker, Politico |  Read more:
Image: Getty

Utopian Capitalism

The system we know as Capitalism is both wondrously productive and hugely problematic. On the downside, capitalism promotes excessive inequality; it valorises immediate returns over long-term benefits; it addicts us to unnecessary products and it encourages excessive consumption of the world’s resources with potentially disastrous consequences – and that’s just a start. We are now deeply familiar with what can go wrong with Capitalism. But that is no reason to stop dreaming about some of the ways in which Capitalism could one day operate in a Utopian future:

In the Utopia, we’d spend less time thinking about the Dow Jones.

The Dow Jones, which is the world’s most prestigious financial index, takes a daily temperature reading of the US, assigning it a very precise number, which is widely reported in the news and which we tend to treat with a high degree of reverence. Such financial data seems to be telling us something of immense importance. It hints at an answer to the great questions of existence: are things going well or badly, is the world doing OK? How is life on earth?

It’s really worth asking such questions and reflecting heavily upon them. This is what philosophers traditionally like to do. But the numbers do not actually answer our questions, for the links between the Dow Jones figures and what is actually going on in human lives (their rise or fall) are far more elusive. It’s not that there is no connection whatsoever. The financial health of major US companies does have indirect, distant links to the economic side of everyone’s life. Yet the quality and character of daily life is powerfully affected by a great many things which the financial data does not recognise, for example, your health, the view from your window, the quality of your relationship, the amount of time you have to spend commuting, the connections you have with the neighbours, the state of your ambitions, your degree of envy, how your kids are doing. These may, indeed, be rather more important in determining ‘how things are going’ than the Dow Index. But the Dow doesn’t entirely admit this. It seems to be making a larger claim: to know how your life is going – and it brings to this claim a panoply of impressive arrows, charts and incomprehensible acronyms which cow us into believing in its authority, rather as our ancestors might have trusted in the confused mumblings of a priest sitting on top of an altar in a darkened temple.

For all our expertise, we have not yet learned how to devise reliable indicators of the state of nations and individuals. We do not have a daily set of figures to record what truly matters. It might help, for example, to know the incidence of unnecessary embarrassment or whether arrogance is becoming 0.1% more or less common. We don’t have figures measuring supplies of patience, tact and forbearance. We don’t have indices around envy, infidelity and fury.

In the absence of these vital indicators, we cling to the signals offered to us by Wall Street. We use words like depression and exuberance, terms well known from personal life, to describe the movements of stocks and shares. To ask for better indices of national well-being sounds whimsical. Yet it ought not to, for we need data that homes in on things that matter greatly for what our lives are actually like. Issues like jealousy, boredom, beauty, frustration or anger shape our destinies just as much as – if not more than – the fortunes of 3M (the Minnesota Mining Company) and the twenty-nine other corporations whose trading forms the basis for calculating the Dow Jones figures.

The big issue is how we can get a diversity of indicators on our national dashboards. We are not suggesting the suppression of the Dow Jones Industrial Average. What we want to see is the rise of other – equally important – figures that report on a regular basis on elements of psychological and sociological life and which could form part of the consciousness of thoughtful and serious people. Today, a government cannot get rewarded, or chastised, for the impact its policies have on the frequency of domestic rows because rows are not recorded. When we measure things – and give the figures a regular public airing – we start the long process of collectively doing something about them.

In the Utopia, we wouldn’t just care about unemployment, we’d also worry about misemployment.

Employment means being, generically, in work. But misemployment means being in work but of a kind that fails to tackle with any real sincerity the true needs of other people: merely exciting them to unsatisfactory desires and pleasures instead. Like this fellow, dressing up as a hotdog to entice customers.

A man employed by the casino chain Las Vegas Sands to hand out flyers to tourists so as to entice them to use slot machines is clearly ‘employed’ in the technical sense. He’s marked as being off the unemployment registers. He is receiving a wage in return for helping to solve some (small) puzzle of the human condition of interest to his employers: that not enough tourists might otherwise leave the blue skies and cheerful bustle of a south Nevada city’s main street to enter the dark air-conditioned halls of an Egyptian-themed casino lined with ranks of ringing consoles.

The man is indeed employed, but in truth, he belongs to a large subsection of those in work we might term the ‘misemployed’. His labour is generating capital, but it is making no contribution to human welfare and flourishing. He is joined in the misemployment ranks by people who make cigarettes, addictive but sterile television shows, badly designed condos, ill-fitting and shoddy clothes, deceptive advertisements, artery-clogging biscuits and highly-sugared drinks (however delicious). The rate of misemployment in the economy is very high.

And while we may be genuinely grateful for a job and give our best to do it well, at the back of our minds we do – as employees – nurture the hope that our work contributes in some real way to the common good; that we are making, modestly, a difference.

It’s not just the most dramatically harmful kinds of work that register as misemployment. We intuitively recognise it when we think of work as ‘just a job’; when we sense that far too much of our time, effort and intelligence is spent on meetings that resolve little, on chivying people to sign up for products that – in our heart of hearts – we don’t admire.

Economists and governments have, with moderate success, been learning techniques to reduce the overall rate of unemployment. Central to their strategy has been the lowering of interest rates and the printing of money. In the language of the field, the key to bringing down unemployment has been to ‘stimulate demand.’

Though technically effective, this method fails to draw any distinction between good and bad demand and therefore between employment and misemployment.

Fortunately, there are real solutions for bringing down the rate of misemployment. The trick isn’t just to stimulate demand per se; it is to stimulate the right demand: to excite people to buy the constituents of true satisfaction, and therefore to give individuals and businesses a chance to direct their labour, and make profits, in meaningful areas of the economy.

In a nation properly concerned with misemployment, the taste of the audience would be educated to demand and pay for the most important things. Twenty per cent of the adult population might therefore be employed in mental health and flourishing. At least another thirty per cent would be employed in building an environment that could satisfy the soul. People would be taught to respect good furniture, healthy food, wholesome clothes, fruitful holidays…

To achieve such a state, it isn’t enough to print money. The task is to excite people to want to spend it on the right things. This requires public education so that audiences will recognise the value of what is truly valuable and walk past what fails to address their true needs.

This isn’t to suggest that the employment figures are irrelevant – they matter a great deal. They are the first thing to be attended to. All the same, the raw figures mask a more ambitious index – and a central question: are we deploying human capital admirably?

by The Book of Life |  Read more:
Images: © Flickr/Scott Beale and uncredited

Friday, January 6, 2017


Caitlyn Murphy, Hallam Corner Store (2016)
via:

Sally West, The Beach
via:

The 401(k) Problem We Refuse to Solve

There’s a perpetual pundit debate over the best way to provide for retirement: defined benefit plans (pensions), defined contribution plans (401(k)s, IRAs and the like) or pay-as-you-go social insurance schemes (Social Security). Most retirement experts I’ve talked to prefer a mix of these, a “three-legged stool.” But as I’ve written before, this is a bit like arguing whether the Titanic would have survived the iceberg if only its hull had been painted green. All three types of retirement savings have different costs and benefits. But these costs and benefits are not the primary reason that people in Western countries have to worry about an impoverished old age.

The funny thing is that, for all the people arguing that some dire problem in one of these three retirement systems urgently requires that we switch to another kind at once, the major problem with all three is exactly the same. It’s even a problem that’s easy to state and easy to fix -- no need for extensive blue-ribbon commissions or elaborate white papers. Here’s the solution: Pick whichever system you prefer; it really doesn’t matter. Now slap a 10 to 15 percent surcharge on a worker's wage income, and divert that money into the system for the worker’s future use. Problem basically solved, because in all three cases, the only flaw that actually matters is that they’re badly underfunded.

If you expect to spend 40 years of your life working, and then another 20 or 30 years living off the money you made during that time, then you need to save a large portion of your salary. Imagine yourself storing up food for the last 30 years of your life from the harvests made during the first 40. You might hope that when you're older, and no longer toiling in the fields, you won’t need to eat so much. Nonetheless, you’d understand that you would need to put aside a considerable portion of your harvest -- something close to what you're eating each day -- to ensure that you don’t starve to death in your old age.
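
To make that harvest arithmetic concrete, here is a minimal back-of-envelope sketch, assuming made-up round numbers (40 working years, 30 retired years, level consumption) and, for now, no investment returns:

```python
# Back-of-envelope sketch of the harvest arithmetic: level yearly
# consumption before and after retirement, no investment returns.
working_years = 40
retired_years = 30
consumption = 1.0  # one "year of living", both while working and while retired

total_needed = consumption * (working_years + retired_years)
required_income = total_needed / working_years                       # 1.75 "years of living"
savings_rate = 1 - consumption / required_income                     # about 43% of income
set_aside_vs_consumed = (required_income - consumption) / consumption  # about 75% of consumption

print(f"save {savings_rate:.0%} of income each year, "
      f"i.e. set aside about {set_aside_vs_consumed:.0%} of what you consume")
```

Under those assumptions you would need to save roughly two-fifths of your income – about three-quarters of what you consume in a year – every working year.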

Somehow, we imagine that modern society can make the math different for all the other stuff we consume, from cars to televisions to little paper umbrellas to stick in the cocktails at our retirement parties. And to be fair, to some extent, it has. If productivity is growing quickly, then it is easier to maintain our pre-retirement lifestyles with a smaller pool of savings, because those savings will buy more.
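
A rough way to see how much growth helps – purely illustrative, treating productivity growth as a real return earned on whatever is set aside – is to rerun the same arithmetic with a return attached:

```python
# Illustrative only: treat productivity growth as a real return r earned on
# savings, and solve for the savings rate s at which the pot accumulated over
# the working years exactly funds the same level of consumption in retirement.
def required_savings_rate(working_years=40, retired_years=30, real_return=0.02):
    r = real_return
    if r == 0:
        return retired_years / (working_years + retired_years)
    fv = ((1 + r) ** working_years - 1) / r     # growth factor on each year's saving
    pv = (1 - (1 + r) ** -retired_years) / r    # cost of the retirement annuity
    return pv / (fv + pv)

for r in (0.0, 0.01, 0.02, 0.03):
    print(f"real growth {r:.0%}: save {required_savings_rate(real_return=r):.0%} of income")
```

With these toy numbers, a 2 percent real return cuts the required savings rate from about 43 percent of income to roughly 27 percent; the direction of the effect, not the precise figures, is the point.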

Alternatively, we can have a lot of kids. No matter how you manage your retirement system, you are ultimately expecting to depend on the labor of people younger than you. Whether that labor comes to you in the form of a dividend check or a government benefit or a saintly daughter-in-law building you a new annex in the backyard, you are still expecting someone else younger than you to make stuff, then give it to you without expecting more than gratitude in return. The more workers there are relative to retirees, the smaller the fraction of their income each worker has to give up to support each retiree, and the easier it will be to get them to do so.
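
The same trade-off can be put in pay-as-you-go terms. A tiny sketch, assuming a hypothetical target in which each retiree receives 70 percent of a worker's income, shows how the burden on workers scales with the worker-to-retiree ratio:

```python
# Pay-as-you-go arithmetic: the slice of each worker's income that must be
# transferred so every retiree receives a given share of a worker's income.
def transfer_share(workers_per_retiree, replacement_rate=0.7):
    return replacement_rate / workers_per_retiree

for ratio in (5, 3, 2):
    print(f"{ratio} workers per retiree -> give up "
          f"{transfer_share(ratio):.0%} of each worker's income")
```

Five workers per retiree means each worker gives up about 14 percent of their income; at two workers per retiree, the figure rises to 35 percent.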

Unfortunately, productivity isn’t growing rapidly, and we didn’t have a lot of kids. That leaves plowing a great deal of money into savings and investment, in the hopes that productivity will start to grow again. There is no substitute, no neat transformation we can enact to make that fundamental problem go away.

by Megan McArdle, Bloomberg | Read more:
Image: uncredited via:

Get Your Loved Ones Off Facebook

[ed. I know. Broken record...].

I wrote this for my friends and family, to explain why the latest Facebook privacy policy is really harmful. Maybe it’ll help you too. External references – and steps to get off properly – at the bottom.

A few factual corrections have been brought to my attention, so I’ve fixed them. Thanks everyone!


“Oh yeah, I’ve been meaning to ask you why you’re getting off Facebook,” is the guilty and reluctant question I’m hearing a lot these days. Like we kinda know Facebook is bad, but don’t really want to know.

I’ve been a big Facebook supporter - one of the first users in my social group who championed what a great way it was to stay in touch, way back in 2006. I got my mum and brothers on it, and around 20 other people. I’ve even taught Facebook marketing in one of the UK’s biggest tech education projects, Digital Business Academy. I’m a techie and a marketer – so I can see the implications – and until now, they hadn’t worried me. I’ve been pretty dismissive towards people who hesitate with privacy concerns.

Just checking…

Over the holidays, I thought I’d take a few minutes to check on the upcoming privacy policy change, with a cautious “what if” attitude. With our financial and location information on top of everything else, there were some concerning possibilities. Turns out that what I suspected had already happened two years ago! Those few minutes turned into a few days of reading. After a bit of investigation, I dismissed a lot of claims that could be explained away as technically plausible (or just technically lazy), like the excessive Android app permissions. But there was still a lot left over, and I considered those facts in light of techniques that I know to be standard practice in data-driven marketing.

With this latest privacy change on January 30th, I’m scared.

Facebook has always been slightly worse than all the other tech companies with dodgy privacy records, but now it’s in its own league. Getting off isn’t just necessary to protect yourself; it’s necessary to protect your friends and family too. This could be the point of no return – but it’s not too late to take back control.

A short list of some Facebook practices

It’s not just what Facebook says it’ll take from you and do with your information; it’s all the things it’s not saying but doing anyway, thanks to the loopholes they create for themselves in their Terms of Service and how easily they go back on their word. We don’t even need to click “I agree” anymore. They just change the privacy policy and, by staying on Facebook, you agree. Oopsy!

Facebook doesn’t keep any of your data safe or anonymous, no matter how much you lock down your privacy settings. Those settings are a decoy. There are very serious privacy breaches – like selling your product endorsements to advertisers and politicians, tracking everything you read on the internet, or using data from your friends to learn private things about you – and they have no off switch.

Facebook gives your data to “third parties” through your use of apps, and then says that’s you doing it, not them. Every time you use an app, you’re allowing Facebook to escape its own privacy policy with you and with your friends. It’s like when my brother used to make me punch myself and ask, “why are you punching yourself?” Then he’d tell my mum it wasn’t his fault.

As I dug in, I discovered all the spying Facebook does – which I double-checked against articles from big, reputable news sources and academic studies that were heavily scrutinised. It sounds nuts when you put it all together!
  • They have created, and continue to create, false endorsements for products from you to your friends – and they never reveal this to you.
  • When you see a like button on the web, Facebook is tracking that you’re reading that page. It scans the keywords on that page and associates them with you. It knows how much time you spend on different sites and topics.
  • They read your private messages and the contents of the links you send privately.
  • They’ve introduced features that turn your phone’s mic on – and based on their track record of changing privacy settings, audio surveillance is likely to start happening without your knowledge.
  • They can use face recognition to track your location through pictures, even those that aren’t on Facebook. (Pictures taken with mobile phones have time, date and GPS data built into them.)
  • They’ve used snitching campaigns to trick people’s friends into revealing information about them that they chose to keep private.
  • They use the vast amount of data they have on you – your likes, the things you read, the things you type but don’t post – to make highly accurate models of who you are, even if you make a point of keeping these things secret. There are statistical techniques, used in marketing for decades, that find correlating patterns between someone’s behaviour and their attributes; a minimal sketch of this kind of inference follows below. Even if you never posted anything, they can easily work out your age, gender, sexual orientation and political views. When you post, they work out much more. Then they reveal it to banks, insurance companies, governments and, of course, advertisers.
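
To see how little explicit information this kind of inference needs, here is a minimal, entirely synthetic sketch – not Facebook’s actual system, and with every number made up – in which an ordinary logistic regression recovers a hidden attribute from nothing but weakly correlated page likes:

```python
# Toy illustration (synthetic data, not Facebook's system): each row is a user,
# each column records whether they liked a particular page. No single like is
# revealing, but together they betray a hidden attribute.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_users, n_pages = 5000, 40

# Hidden attribute the users never state (e.g. a political leaning).
attribute = rng.integers(0, 2, size=n_users)

# Each page is only slightly more (or less) popular with one group.
page_bias = rng.uniform(-0.15, 0.15, size=n_pages)
like_prob = 0.3 + np.outer(attribute * 2 - 1, page_bias)
likes = (rng.random((n_users, n_pages)) < like_prob).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    likes, attribute, test_size=0.25, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"accuracy on unseen users: {model.score(X_test, y_test):.2f}")
```

Even though each individual like is only faintly informative, the model typically classifies unseen users far better than chance – which is the whole basis of this kind of profiling.
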
“I have nothing to hide”

A lot of people aren’t worried about this, feeling they have nothing to hide. Why would they care about little old me? Why should I worry about this when I’m not doing anything wrong?

One of the more obvious problems here is with insurance companies. The data they have on you is mined to predict your future. Think of the now-famous story of the pregnant teenager outed by the store Target, which mined her purchase data – larger handbags, headache pills, tissues – and sent her a “congratulations” message as marketing, which her unknowing father got instead. Oops!

The same is done to you, and the results are revealed to other companies, outside your control.

by Salim Varani |  Read more:
Image: uncredited

What Scientific Term or Concept Ought to be More Widely Known?

Of course, not everyone likes the idea of spreading scientific understanding. Remember what the Bishop of Birmingham’s wife is reputed to have said about Darwin’s claim that human beings are descended from monkeys: "My dear, let us hope it is not true, but, if it is true, let us hope it will not become generally known."

Introduction: Scientia

Of all the scientific terms or concepts that ought to be more widely known to help to clarify and inspire science-minded thinking in the general culture, none are more important than “science” itself.

Many people, even many scientists, have traditionally had a narrow view of science as controlled, replicated experiments performed in the laboratory—and as consisting quintessentially of physics, chemistry, and molecular biology. The essence of science is conveyed by its Latin etymology: scientia, meaning knowledge. The scientific method is simply that body of practices best suited for obtaining reliable knowledge. The practices vary among fields: the controlled laboratory experiment is possible in molecular biology, physics, and chemistry, but it is either impossible, immoral, or illegal in many other fields customarily considered sciences, including all of the historical sciences: astronomy, epidemiology, evolutionary biology, most of the earth sciences, and paleontology. If the scientific method can be defined as those practices best suited for obtaining knowledge in a particular field, then science itself is simply the body of knowledge obtained by those practices.

Science—that is, reliable methods for obtaining knowledge—is an essential part of psychology and the social sciences, especially economics, geography, history, and political science. Not just the broad observation-based and statistical methods of the historical sciences but also detailed techniques of the conventional sciences (such as genetics and molecular biology and animal behavior) are proving essential for tackling problems in the social sciences. Science is nothing more nor less than the most reliable way of gaining knowledge about anything, whether it be the human spirit, the role of great figures in history, or the structure of DNA.

It is in this spirit of Scientia that Edge, on the occasion of its 20th anniversary, is pleased to present the Edge Annual Question 2017. Happy New Year!

—John Brockman, Editor, January 1, 2017

*****

Richard H. Thaler
Father of Behavioral Economics; Director, Center for Decision Research, University of Chicago Graduate School of Business; Author, Misbehaving

The Premortem

Before a major decision is taken, say to launch a new line of business, write a book, or form a new alliance, those familiar with the details of the proposal are given an assignment. Assume we are at some time in the future when the plan has been implemented, and the outcome was a disaster. Write a brief history of that disaster.

Applied psychologist Gary Klein came up with “The Premortem,” which was later written about by Daniel Kahneman. Of course we are all too familiar with the more common postmortem that typically follows any disaster, along with the accompanying finger pointing. Such postmortems inevitably suffer from hindsight bias, also known as Monday-morning quarterbacking, in which everyone remembers thinking that the disaster was almost inevitable. As I often heard Amos Tversky say, “the handwriting may have been written on the wall all along. The question is: was the ink invisible?”

There are two reasons why premortems might help avert disasters. (I say might because I know of no systematic study of their use. Organizations rarely allow such internal decision making to be observed and recorded.) First, explicitly going through this exercise can overcome the natural organizational tendencies toward groupthink and overconfidence. A devil’s advocate is unpopular anywhere. The premortem procedure gives cover to a cowardly skeptic who otherwise might not speak up. After all, the entire point of the exercise is to think of reasons why the project failed. Who can be blamed for thinking of some unforeseen problem that would otherwise be overlooked in the excitement that usually accompanies any new venture?

The second reason a premortem can work is subtle. Starting the exercise by assuming the project has failed, and only then thinking about why that might have happened, creates the illusion of certainty, at least hypothetically. Laboratory research shows that asking why something did fail, rather than why it might fail, gets the creative juices flowing. (The same principle can work in finding solutions to tough problems. Assume the problem has been solved, and then ask: how did it happen? Try it!)

An example illustrates how this can work. Suppose that, a couple of years ago, an airline CEO had invited top management to conduct a premortem on this hypothetical disaster: all of our airline’s flights around the world have been cancelled for two straight days. Why? Of course, many will immediately think of some act of terrorism. But real progress will be made by thinking of much more mundane explanations. Suppose someone timidly suggests that the cause was that the reservation system crashed and the backup system did not work properly.

Had this exercise been conducted, it might have prevented a disaster for a major airline that cancelled nearly 2000 flights over a three-day period. During much of that time, passengers could not get any information because the reservation system was down. What caused this fiasco? A power surge blew a transformer and critical systems and network equipment didn’t switch over to backups properly. This havoc was all initiated by the equivalent of blowing a fuse.

This episode was bad, but many companies that were once household names and no longer exist might still be thriving if they had conducted a premortem with the question: It is three years from now and we are on the verge of bankruptcy. How did this happen?

And, how many wars might not have been started if someone had first asked: We lost. How? (...)

*****

Joichi Ito
Director, MIT Media Lab; Coauthor (with Jeff Howe), Whiplash: How to Survive Our Faster Future

Neurodiversity

Humans have diversity in neurological conditions. While some, such as autism, are considered disabilities, many argue that they are the result of normal variations in the human genome. The neurodiversity movement is an international civil rights movement that argues that autism shouldn’t be “cured” and that it is an authentic form of human diversity that should be protected.

In the early 1900s, eugenics and the sterilization of people considered genetically inferior were scientifically sanctioned ideas, with outspoken advocates like Theodore Roosevelt, Margaret Sanger, Winston Churchill and US Supreme Court Justice Oliver Wendell Holmes Jr. The horror of the Holocaust, inspired by the eugenics movement, demonstrated the danger and devastation these programs can exact when put into practice.

Temple Grandin, an outspoken spokesperson for autism and neurodiversity, argues that Albert Einstein, Wolfgang Mozart and Nikola Tesla would have been diagnosed on the “autistic spectrum” had they been alive today. She also believes that autism has long contributed to human development and that “without autism traits we might still be living in caves.” Today, non-neurotypical children often suffer through remedial programs in the traditional educational system, only to be discovered to be geniuses later. Many of these kids end up at MIT and other research institutes.

With the invention of CRISPR, editing the human genome at scale has suddenly become feasible. The initial applications being developed involve the “fixing” of genetic mutations that cause debilitating diseases, but they are also taking us down a path with the potential to eliminate not only autism but much of the diversity that makes human society flourish. Our understanding of the human genome is rudimentary enough that it will be some time before we are able to enact complex changes involving things like intelligence or personality, but it’s a slippery slope. I saw a business plan a few years ago that argued that autism was just “errors” in the genome, which could be identified and “corrected” in the manner of “de-noising” a grainy photograph or audio recording.

Clearly, some children born with autism have debilitating issues and are in states that require intervention. However, our attempts to “cure” autism, either through remediation or eventually through genetic engineering, could result in the eradication of a neurological diversity that drives scholarship, innovation, the arts and many of the essential elements of a healthy society.

We know that diversity is essential for healthy ecosystems. We see how agricultural monocultures have created fragile and unsustainable systems.

My concern is that even if we come to understand that neurological diversity is essential for our society, we will still develop the tools for designing away any risky traits that deviate from the norm, and that, given a choice, people will tend to opt for a neurotypical child.

As we march down the path of genetic engineering to eliminate disabilities and disease, it’s important to be aware that this path, while more scientifically sophisticated, has been followed before with unintended and possibly irreversible consequences and side-effects.

by Edge.org |  Read more:
Image: "Spiders 2013" by Katinka Matson