Monday, January 9, 2017

What’s Killing the World’s Shorebirds?

Four gun-toting biologists scramble out of a helicopter on Southampton Island in northern Canada. Warily scanning the horizon for polar bears, they set off in hip waders across the tundra that stretches to the ice-choked coast of Hudson Bay.

Helicopter time runs at almost US$2,000 per hour, and the researchers have just 90 minutes on the ground to count shorebirds that have come to breed on the windswept barrens near the Arctic Circle. Travel is costly for the birds, too. Sandpipers, plovers and red knots have flown here from the tropics and far reaches of the Southern Hemisphere. They make these epic round-trip journeys each year, some flying farther than the distance to the Moon over the course of their lifetimes.

The birds cannot, however, outfly the threats along their path. Shorebird populations have shrunk, on average, by an estimated 70% across North America since 1973, and the species that breed in the Arctic are among the hardest hit [1]. The crashing numbers, seen in many shorebird populations around the world, have prompted wildlife agencies and scientists to warn that, without action, some species might go extinct.

Although the trend is clear, the underlying causes are not. That’s because shorebirds travel thousands of kilometres a year, and encounter so many threats along the way that it is hard to decipher which are the most damaging. Evidence suggests that rapidly changing climate conditions in the Arctic are taking a toll, but that is just one of many offenders. Other culprits include coastal development, hunting in the Caribbean and agricultural shifts in North America. The challenge is to identify the most serious problems and then develop plans to help shorebirds to bounce back.

“It’s inherently complicated — these birds travel the globe, so it could be anything, anywhere, along the way,” says ecologist Paul Smith, a research scientist at Canada’s National Wildlife Research Centre in Ottawa who has come to Southampton Island to gather clues about the ominous declines. He heads a leading group assessing how shorebirds are coping with the powerful forces altering northern ecosystems. (...)

Shorebirds stream north on four main flyways in North America and Eurasia, and many species are in trouble. The State of North America’s Birds 2016 report [1], released jointly by wildlife agencies in the United States, Canada and Mexico, charts the massive drop in shorebird populations over the past 40 years.

The East Asian–Australasian Flyway, where shorelines and wetlands have been hit hard by development, has even more threatened species. The spoon-billed sandpiper (Calidris pygmaea) is so “critically endangered” that there may be just a few hundred left, according to the International Union for Conservation of Nature.

Red knots are of major concern on several continents. The subspecies that breeds in the Canadian Arctic, the rufa red knot, has experienced a 75% decline in numbers since the 1980s, and is now listed as endangered in Canada. “The red knot gives me that uncomfortable feeling,” says Rausch, a shorebird biologist with the Canadian Wildlife Service in Yellowknife. She has yet to find a single rufa-red-knot nest, despite spending four summers surveying what has long been considered the bird’s prime breeding habitat.

The main problem for the rufa red knots is thought to lie more than 3,000 kilometres to the south. During their migration from South America, the birds stop to feed on energy-rich eggs laid by horseshoe crabs (Limulus polyphemus) in Delaware Bay (see ‘Tracking trouble in the Arctic’). Research suggests that the crabs have been so overharvested that the red knots have become deprived of much-needed fuel.

by Margaret Munro, Nature | Read more:
Image: Malkolm Boothroyd

A Trip of One’s Own

[ed. See also: Want More Productivity? Be Careful What You Wish For, and Micro-dosing: The Drug Habit Your Boss Is Gonna Love]

One day, while driving home to Berkeley after a poorly attended reading in Marin County, Ayelet Waldman found herself weighing the option of pulling the steering wheel hard to the right and plunging off the Richmond Bridge. “The thought was more than idle, less than concrete,” she recalls, “and though I managed to make it across safely, I was so shaken by the experience that I called a psychiatrist.” The doctor diagnosed her with a form of bipolar disorder, and Waldman began a fraught, seven-year journey to alter her mood through prescription drugs, a list so long that she was “able to recite symptoms and side effects for anything … shrinks might prescribe, like the soothing voice-over at the end of a drug commercial.” She was on a search for something, anything, that would quiet the voices, the maniac creativity, the irritable moods that caused her to melt down over the smallest mistakes. That’s when she began taking LSD.

Lysergic acid diethylamide is in the midst of a renaissance of sorts, a nonprescription throwback for an overmedicated generation. As pot goes mainstream—the natural solution to a variety of ills—LSD is close behind, in popularity if not legality. By 1970, two years after possession of LSD became illegal, an estimated two million Americans had used the drug; by 2015, more than 25 million had. In A Really Good Day: How Microdosing Made a Mega Difference in My Mood, My Marriage, and My Life, Waldman explores her own experience of taking teeny, “subtherapeutic” doses of the drug. This “microdose,” about a tenth of your typical trip-inducing tab, is “low enough to elicit no adverse side effects, yet high enough for a measurable cellular response.” Her book is both a diatribe and diary. She offers a polemic on a racist War on Drugs that allows her, a middle-class white woman, to use illegal substances with ease, as well as a daily record of the improved mood and increased focus she experiences each time she takes two drops of acid under the tongue. Microdosing advocates argue that LSD is a safer and more reliable alternative to many prescription drugs, particularly those intended to treat mood disorders, depression, anxiety, and ADHD. Respite is what Waldman is chasing, a gradual tempering, drop by drop, of our fractured, frazzled selves. If the 1960s were about touching the void, microdosing is about pulling back from it.

I’d been on prescription antidepressants for about a year when I opened Waldman’s book. To say that mental illness runs in my own family would be an understatement. After listening to a very abridged version of my family medical history, my psychiatrist called me the “poster child for mental health screenings before marriage.” My sister, gripped with undiagnosed postpartum psychosis, once fantasized, as Waldman did, about driving off a bridge with her infant daughter in the car, and my mother killed herself by overdosing on OxyContin and other legal drugs a month before I graduated from college. Battle is the stock verb of illness—we battle cancer, depression, and addiction. But I cannot in good conscience say I battle my depression and anxiety. Rather, my madness and I are conjoined twins, fused at the head and hip: Together always, we lurch along in an adequate, improvised shuffle.

Like Waldman, I worry about the negative effects of taking an SSRI long-term. The daughter of hippies, a flower grandchild, I don’t trust the pharmaceutical industry to prioritize my wellness over their profits. I’ve long agreed with Waldman that “practitioners, even the best ones, still lack a complete understanding of the complexity and nuance both of the many psychological mood disorders and of the many pharmaceuticals available to treat them.” So when I finished the prologue to A Really Good Day, I set the book down and left my therapist a voicemail announcing my plan to wean myself off Celexa. Then I went on reading. I did not mention the new-old mystic’s medicine beckoning me—the third eye, the open door.

It’s surprisingly simple to get LSD. I asked a few friends, who asked a few of their friends, and the envelope arrived just a few days later with a friendly, letter-pressed postcard. Spliced into the card, via some impressive amateur surgery, was a tiny blue plastic envelope. Inside that was a piece of plain white paper divided with black lines into ten perfect squares: ten tabs of acid, 100 microdoses at a dollar each. (...)

If cocaine kept Wall Street humming at all hours in the 1980s, LSD today keeps the ideas flowing in Silicon Valley’s creative economy, solving problems that require both concentration and connectedness. Microdosing is offered as an improvement over Adderall and Ritalin, the analog ancestors of modern-day smart drugs. Old-school ADHD methamphetamines, it would seem, clang unpleasantly against Silicon Valley’s namaste vibe. Today’s microdosers “are not looking to have a trip with their friends out in nature,” an anonymous doser recently explained to Wired. “They are looking at it as a tool.” One software developer speaks of microdosing as though it were a widget one might download for “optimizing mental activities.” The cynic’s working definition might read, “microdose (noun): the practice of ingesting a small dose of a once-countercultural drug that made everyone from Nixon to Joan Didion flinch in order to make worker bees more productive; Timothy Leary’s worst nightmare; a late-capitalist miracle.”

Productivity is not Waldman’s purpose—pre-LSD, she could write a book in a matter of weeks—but neither is non-productivity, the glazed-over stoner effect. Waldman is instead insistent on the therapeutic value of microdosing. There is nothing, it seems, that LSD isn’t good for, no worry it can’t soothe, no problem it can’t solve. Once an afternoon delight of recreational trippers and high-school seniors, LSD has become a drug of power users: engineers, salesmen, computer scientists, entrepreneurs, writers, the anxious, the depressed. The trip isn’t the thing; instead, microdosing helps maintain a fragmented, frenzied order, little by little, one day at a time.

by Claire Vaye Watkins, TNR | Read more:
Image: Tran Nguyen

Sunday, January 8, 2017

Since It Can't Sue Us All, Getty Images Embraces Embedded Photos

[ed. I see Getty stock images every day. They're all over the net, so I began wondering about the company's business model. See also: Since It Can't Sue Us All, Getty Images Embraces Embedded Photos and Photographer Suing Getty for 1 Billion.]

Many companies are founded on inspiration or imitation. But Mark Getty, a grandson of the billionaire oilman J. Paul Getty, and Jonathan Klein, an investment banker who had been Getty’s boss, started theirs with a checklist. Tired of crafting deals for others, the pair came up with strict criteria for their own dream business. It had to be global, operating in a fragmented industry ready for consolidation, and on the cusp of change. And the less risk the better. “We didn’t want to fix something that was broken,” says Klein.

Although the business they started in 1995, Getty Images, didn’t have a big idea behind it like a Twitter or Facebook, it’s proven just as revolutionary. If Google is essential to navigating the Web, Getty has become essential to visualizing it. Cobbled together through acquisitions, Getty is the world’s largest photo and video agency, and its database of 80 million images is the raw material from which many of the Web’s slide shows and photo galleries are made. A search for its images of happy people, for instance, turns up 626,317 results. That depth allows Getty to license its image trove online to all manner of bloggers and websites, businesses small and large, advertisers, newspapers, and magazines (including this one).

With annual revenue approaching $1 billion, according to Getty, it’s become a media business too important to ignore. Carlyle Group certainly took notice: On Aug. 15, the private equity firm agreed to buy majority ownership of Getty from another private equity investor, Hellman & Friedman, in a deal that values the company at $3.3 billion. The Getty family and Klein will own the rest.

Mark Getty and Klein, a native of South Africa, were working at Hambros Bank in London when they stumbled upon the stock photo business. “It was a cottage industry, it didn’t have any business discipline,” says Klein, now chief executive officer (Getty is chairman). “It was run by and for photographers.”

They began by acquiring the premier photo library in Britain, Tony Stone Images, for $30 million. Since then, Getty has bought more than 100 other photo collections and companies. It went public in 1996, and three years later moved its headquarters to Seattle to be closer to the tech community and many of its customers. “They came in with a super-simple strategy and held to that,” says Stephen Mayes, who had worked at Tony Stone, stayed at Getty until 1998, and is now managing director of the photographer-owned VII Photo Agency. “They are not inherently entrepreneurial. They take good businesses and make them better.” Giorgio Psacharopulo, CEO of Magnum Photos, the cooperative started by Henri Cartier-Bresson and others, says: “We’re in awe of what Getty Images has been able to accomplish. But, like Wal-Mart, they operate at a scale that makes it difficult for smaller agencies to exist.”

Getty was early in recognizing the digital revolution’s impact on photography. In 1998, Getty acquired PhotoDisc, the first company to figure out how to sell photos in a digital format. “They always recognized the business eating their lunch and bought it,” says David Walker, executive editor of Photo District News, a trade publication. By 2001, Getty’s entire business was digital.

The bigger disruption came once digital cameras, and then mobile phones, began producing high-quality images—making everybody a potential photographer. By 2005, a company called iStockphoto emerged as the leading source of crowdsourced (otherwise known as amateur) images. Its average price for a photo was $2 to $3. Getty bought it for $50 million in 2006. “A lot of people thought we were cannibalizing our business,” Klein says. “But sometimes it’s perfectly legitimate to use a $5 picture.”

Getty’s move into the microstock business, as it’s called, came as the media and advertising industries were contracting and the Web was expanding. Soon Getty’s business model was turned upside down: While it started out providing expensive images for limited use to a small group of customers, now it also provides cheaper images for broad use to a big group of customers. Explains Klein: “The fundamental change is how and where pictures are being used.” Before Getty bought iStockphoto, it had some 150,000 customers a year. Now it has 1.3 million. “About 900,000 of them are small and medium-sized businesses, many of whom weren’t using images legally or at all,” Klein says. Fifteen years ago, Getty uploaded a few hundred photos a day; now it uploads tens of thousands. Getty used to license or sell 100,000 images a year; today it’s 30 million to 40 million.

by Susan Berfield, Bloomberg | Read more:
Image: via:

Why We Can't Fix Twitter

Amtrak once asked a focus group what kind of food they wanted in the train’s cafe car. One participant requested more healthy choices, like salad and fruit. The person running the focus group said something like, “People always say they want the salad. Then they buy the cheeseburger.”

Today’s social media environment faces a similar paradox. It’s fashionable to complain about the low quality of Twitter conversations. We bemoan trolls, flame wars and the lack of nuance inherent in 140-character statements. Occasionally some high-profile tweeter will publicly declare that they are done with the platform, as the writer Lindy West did this week in an article titled: “I’ve left Twitter. It is unusable for anyone but trolls, robots and dictators.”

Twitter CEO Jack Dorsey recently asked his followers to suggest ideas for improvement. He got plenty of recommendations, such as an edit button so users could fix erroneous or ill-considered tweets. Other suggestions included a bookmark button and improved reporting options for bullying.

It’s unclear if these kinds of changes will improve the quality of Twitter discourse. What they won’t fix is the company’s cafe car problem. We say that we want more civil, thoughtful dialogue. But do we really?

Imagine that a Silicon Valley start-up created an online discussion platform precisely to address this problem. There would be no trolls or shouting matches. Shrill sound bites would be replaced by measured conversations. Users would span the political spectrum, allowing for civil exchanges among people with different views.

“Wow,” you’d probably say. “The world needs a platform like that, especially right now!” You’d sign up. Then, you’d go right back to Twitter.

How do I know this? Because we created that alternative platform. It was an online discussion forum called Parlio, and its chief purpose was to host civil, thoughtful conversations. Parlio was founded by Wael Ghonim, the former Google executive best known for running the Facebook page that helped spark the 2011 Egyptian revolution. As the euphoria over the revolution faded, Ghonim found that social media only amplified polarization. “The same tool that united us to topple dictators eventually tore us apart,” he said. In 2015 he and Osman Osman, another former Googler, launched Parlio, and I became chief strategy officer.

The user experience was straightforward. A member would post a short piece of writing, or maybe a link to an article. Then, other members would discuss it. We also hosted Q&As. But Parlio’s culture was markedly different from other social media platforms. It was intended for conversation, not mass broadcasting. You had to be invited to post, but anyone could be a reader. New members signed a civility pledge, and we had a zero-tolerance policy toward trolls.

Parlio built a small but devoted following, including thought leaders from media, academia and business. We hosted remarkably civil conversations about divisive issues like race, terrorism, refugees, sexism and even Donald Trump’s candidacy for president. Author Max Boot wrote in Commentary: “I find that I’m using Parlio more because I can find a more reasoned engagement there than I do on Twitter. Parlio is not, of course, going to threaten Twitter’s business anytime soon, but it is an augury of what can happen if Twitter doesn’t address the problem of anonymous hate-speech that is poisoning its user community.” Tom Friedman penned a New York Times column about Parlio’s attempt to create a new social media experience, writing, “I participated in a debate on Parlio and found it engaging and substantive.”

While people loved the idea of Parlio, we weren’t sure how quickly we could bring it to scale. Last year we joined forces with Quora, which had just reached 100 million monthly users. I am proud of what we created at Parlio, and I also learned a lot about user behavior. The main takeaway is that the social media experience that people say they want is often different from the one that they actively pursue. Here are some of the main challenges to building a civil, thoughtful social media platform:

We’re addicted to the promise of going viral

Say you’re a journalist, and you just published a big article. You have two options for engagement. The first is to receive a relatively small number of comments and questions from informed and influential people, including top thinkers in your field. Option two is a flood of Twitter mentions. Some will be smart, but many will be rants from complete strangers. We might think that we want option one. But deep down, we can’t give up the thrill of option two.

Any Twitter pundit with a large following is familiar with that thrill. It’s that moment right after your provocative statement starts ricocheting across the internet. Your feed explodes with new mentions and your followers dramatically increase. You have no idea who most of these people are, or even if they are real people, but you feel like a rock star. If your tweet goes really viral, you might get on TV. Maybe you will be invited to write an op-ed expanding on your tweet, even though 140 characters were all you had to say on the matter.

Generally speaking, Parlio couldn’t offer that experience. In part because we didn’t have the numbers, but also because our content was not particularly conducive to virality. Often what go viral are antagonistic declarations that are unburdened by nuance. Our president-elect is a master of such statements, which is why Twitter has been such a powerful tool for spreading his message.

Parlio did a decent job of delivering option one, however. Authors would come to Parlio to discuss articles they had written elsewhere. Some of those posts attracted high-quality engagement that is very difficult to find in online commenting sections, and the authors would be delighted. But the next time they wrote an article, sometimes those same authors would skip Parlio and post it on Twitter. The next section helps explain why.

by Emily Parker, Politico | Read more:
Image: Getty

Utopian Capitalism

The system we know as Capitalism is both wondrously productive and hugely problematic. On the downside, Capitalism promotes excessive inequality; it valorises immediate returns over long-term benefits; it addicts us to unnecessary products and it encourages excessive consumption of the world’s resources with potentially disastrous consequences – and that’s just a start. We are now deeply familiar with what can go wrong with Capitalism. But that is no reason to stop dreaming about some of the ways in which Capitalism could one day operate in a Utopian future:

In the Utopia, we’d spend less time thinking about the Dow Jones.

The Dow Jones, which is the world’s most prestigious financial index, takes a daily temperature reading of the US, assigning it a very precise number, which is widely reported in the news and which we tend to treat with a high degree of reverence. Such financial data seems to be telling us something of immense importance. It hints at an answer to the great questions of existence: are things going well or badly, is the world doing OK? How is life on earth?

It’s really worth asking such questions and reflecting heavily upon them. This is what philosophers traditionally like to do. But the numbers do not actually answer our questions, for the links between the Dow Jones figures and what is actually going on in human lives (their rise or fall) are far more elusive. It’s not that there is no connection whatsoever. The financial health of major US companies does have indirect, distant links to the economic side of everyone’s life. Yet the quality and character of daily life is powerfully affected by a great many things which the financial data does not recognise, for example, your health, the view from your window, the quality of your relationship, the amount of time you have to spend commuting, the connections you have with the neighbours, the state of your ambitions, your degree of envy, how your kids are doing. These may, indeed, be rather more important in determining ‘how things are going’ than the Dow Index. But the Dow doesn’t entirely admit this. It seems to be making a larger claim: to know how your life is going – and it brings to this claim a panoply of impressive arrows, charts and incomprehensible acronyms which cow us into believing in its authority, rather as our ancestors might have trusted in the confused mumblings of a priest sitting on top of an altar in a darkened temple.

For all our expertise, we have not yet learned how to devise reliable indicators of the state of nations and individuals. We do not have a daily set of figures to record what truly matters. It might help, for example, to know the incidence of unnecessary embarrassment or whether arrogance is becoming 0.1% more or less common. We don’t have figures measuring supplies of patience, tact and forbearance. We don’t have indices around envy, infidelity and fury.

In the absence of these vital indicators, we cling to the signals offered to us by Wall Street. We use words like depression and exuberance, terms well known from personal life, to describe the movements of stocks and shares. To ask for better indices of national well-being sounds whimsical. Yet it ought not to, for we need data that homes in on things that matter greatly for what our lives are actually like. Issues like jealousy, boredom, beauty, frustration or anger shape our destinies just as much as – if not more than – the fortunes of 3M (the Minnesota Mining Company) and the twenty-nine other corporations whose trading forms the basis for calculating the Dow Jones figures.

The big issue is how we can get a diversity of indicators on our national dashboards. We are not suggesting the suppression of the Dow Jones Industrial Average. What we want to see is the rise of other – equally important – figures that report on a regular basis on elements of psychological and sociological life and which could form part of the consciousness of thoughtful and serious people. Today, a government cannot get rewarded, or chastised, for the impact its policies have on the frequency of domestic rows because rows are not recorded. When we measure things – and give the figures a regular public airing – we start the long process of collectively doing something about them.

In the Utopia, we wouldn’t just care about unemployment, we’d also worry about misemployment.

Employment means being, generically, in work. But misemployment means being in work but of a kind that fails to tackle with any real sincerity the true needs of other people: merely exciting them to unsatisfactory desires and pleasures instead. Like this fellow, dressing up as a hotdog to entice customers.

A man employed by the casino chain Las Vegas Sands to hand out flyers to tourists so as to entice them to use slot machines is clearly ‘employed’ in the technical sense. He’s marked as being off the unemployment registers. He is receiving a wage in return for helping to solve some (small) puzzle of the human condition of interest to his employers: that not enough tourists might otherwise leave the blue skies and cheerful bustle of a south Nevada city’s main street to enter the dark air-conditioned halls of an Egyptian-themed casino lined with ranks of ringing consoles.

The man is indeed employed, but in truth, he belongs to a large subsection of those in work we might term the ‘misemployed’. His labour is generating capital, but it is making no contribution to human welfare and flourishing. He is joined in the misemployment ranks by people who make cigarettes, addictive but sterile television shows, badly designed condos, ill-fitting and shoddy clothes, deceptive advertisements, artery-clogging biscuits and highly-sugared drinks (however delicious). The rate of misemployment in the economy is very high.

And while we may be genuinely grateful for a job and give our best to do it well, at the back of our minds we do – as employees – nurture the hope that our work contributes in some real way to the common good; that we are making, modestly, a difference.

It’s not just the most dramatically harmful kinds of work that register as misemployment. We intuitively recognise it when we think of work as ‘just a job’; when we sense that far too much of our time, effort and intelligence is spent on meetings that resolve little, on chivying people to sign up for products that – in our heart of hearts – we don’t admire.

Economists and governments have, with moderate success, been learning techniques to reduce the overall rate of unemployment. Central to their strategy has been the lowering of interest rates and the printing of money. In the language of the field, the key to bringing down unemployment has been to ‘stimulate demand.’

Though technically effective, this method fails to draw any distinction between good and bad demand and therefore between employment and misemployment.

Fortunately, there are real solutions to bringing down the rate of misemployment. The trick isn’t just to stimulate demand per se; it is to stimulate the right demand: to excite people to buy the constituents of true satisfaction, and therefore to give individuals and businesses a chance to direct their labour, and make profits, in meaningful areas of the economy.

In a nation properly concerned with misemployment, the taste of the audience would be educated to demand and pay for the most important things. 20 per cent of the adult population might therefore be employed in mental health and flourishing. At least another 30 per cent would be employed in building an environment that could satisfy the soul. People would be taught to respect good furniture, healthy food, wholesome clothes, fruitful holidays…

To achieve such a state, it isn’t enough to print money. The task is to excite people to want to spend it on the right things. This requires public education so that audiences will recognise the value of what is truly valuable and walk past what fails to address their true needs.

This isn’t to suggest that the employment figures are irrelevant – they matter a great deal. They are the first thing to be attended to. All the same the raw figures mask a more ambitious index – and a central question: are we deploying human capital admirably?

by The Book of Life | Read more:
Images: © Flickr/Scott Beale and uncredited

Friday, January 6, 2017


Caitlyn Murphy, Hallam Corner Store (2016)
via:

Sally West, The Beach
via:

The 401(k) Problem We Refuse to Solve

There’s a perpetual pundit debate over the best way to provide for retirement: defined benefit plans (pensions), defined contribution plans (401(k)s, IRAs and the like) or pay-as-you-go social insurance schemes (Social Security). Most retirement experts I’ve talked to prefer a mix of these, a “three-legged stool.” But as I’ve written before, this is a bit like arguing whether the Titanic would have survived the iceberg if only its hull had been painted green. All three types of retirement savings have different costs and benefits. But these costs and benefits are not the primary reason that people in Western countries have to worry about an impoverished old age.

The funny thing is that, for all the people arguing that some dire problem in one of these three retirement systems urgently requires that we switch to another kind at once, the major problem with all three is exactly the same. It’s even a problem that’s easy to state and easy to fix -- no need for extensive blue-ribbon commissions or elaborate white papers. Here’s the solution: Pick whichever system you prefer; it really doesn’t matter. Now slap a 10 to 15 percent surcharge on a worker's wage income, and divert that money into the system for the worker’s future use. Problem basically solved, because in all three cases, the only flaw that actually matters is that they’re badly underfunded.

If you expect to spend 40 years of your life working, and then another 20 or 30 years living off the money you made during that time, then you need to save a large portion of your salary. Imagine yourself storing up food for the last 30 years of your life from the harvests made during the first 40. You might hope that when you're older, and no longer toiling in the fields, you won’t need to eat so much. Nonetheless, you’d understand that you would need to put aside a considerable portion of your harvest -- something close to what you're eating each day -- to ensure that you don’t starve to death in your old age.

Somehow, we imagine that modern society can make the math different for all the other stuff we consume, from cars to televisions to little paper umbrellas to stick in the cocktails at our retirement parties. And to be fair, to some extent, it has. If productivity is growing quickly, then it is easier to maintain our pre-retirement lifestyles with a smaller pool of savings, because that savings will buy more.
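The arithmetic here can be sketched in a few lines of Python (a deliberately simplified model with an illustrative function name: it assumes a constant real income, retirement spending equal to pre-retirement spending, and a steady real return on savings):

```python
def required_savings_rate(work_years, retire_years, real_return=0.0):
    """Fraction of income to save so that retirement consumption
    matches pre-retirement consumption, in real (inflation-adjusted) terms."""
    if real_return == 0.0:
        # With no returns, savings of s*W years of income must cover
        # (1-s)*R years of spending, so s = R / (W + R).
        return retire_years / (work_years + retire_years)
    r = real_return
    # Saving a fraction s each year for W years grows (annuity future value)
    # to s * ((1+r)^W - 1) / r; it must fund spending of (1-s) per year for
    # R years, whose present value is (1-s) * (1 - (1+r)^-R) / r.
    fv_factor = ((1 + r) ** work_years - 1) / r
    pv_factor = (1 - (1 + r) ** -retire_years) / r
    return pv_factor / (fv_factor + pv_factor)

print(round(required_savings_rate(40, 25), 2))        # no returns: 0.38
print(round(required_savings_rate(40, 25, 0.03), 2))  # 3% real return: 0.19
```

On these assumptions, a worker with 40 working years and 25 retired years would need to save roughly 38% of income with no investment returns, while a steady 3% real return cuts that to about 19% — which is the sense in which productivity growth makes the math easier.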

Alternatively, we can have a lot of kids. No matter how you manage your retirement system, you are ultimately expecting to depend on the labor of people younger than you. Whether that labor comes to you in the form of a dividend check or a government benefit or a saintly daughter-in-law building you a new annex in the backyard, you are still expecting someone else younger than you to make stuff, then give it to you without expecting more than gratitude in return. The more workers there are relative to retirees, the smaller the fraction of their income each worker has to give up to support each retiree, and the easier it will be to get them to do so.

Unfortunately, productivity isn’t growing rapidly, and we didn’t have a lot of kids. That leaves plowing a great deal of money into savings and investment, in the hopes that productivity will start to grow again. There is no substitute, no neat transformation we can enact to make that fundamental problem go away.

by Megan McArdle, Bloomberg | Read more:
Image: uncredited

Get Your Loved Ones Off Facebook

[ed. I know. Broken record...].

I wrote this for my friends and family, to explain why the latest Facebook privacy policy is really harmful. Maybe it’ll help you too. External references – and steps to get off properly – at the bottom.

A few factual corrections have been brought to my attention, so I’ve fixed them. Thanks everyone!


“Oh yeah, I’ve been meaning to ask you why you’re getting off Facebook,” is the guilty and reluctant question I’m hearing a lot these days. Like we kinda know Facebook is bad, but don’t really want to know.

I’ve been a big Facebook supporter - one of the first users in my social group who championed what a great way it was to stay in touch, way back in 2006. I got my mum and brothers on it, and around 20 other people. I’ve even taught Facebook marketing in one of the UK’s biggest tech education projects, Digital Business Academy. I’m a techie and a marketer – so I can see the implications – and until now, they hadn’t worried me. I’ve been pretty dismissive towards people who hesitate with privacy concerns.

Just checking…

Over the holidays, I thought I’d take a few minutes to check on the upcoming privacy policy change, with a cautious “what if” attitude. With our financial and location information on top of everything else, there were some concerning possibilities. Turns out what I suspected had already happened two years ago! Those few minutes turned into a few days of reading. Based on a bit of investigation, I dismissed a lot of claims that can be explained as technically plausible (or just technically lazy), like the excessive Android app permissions. But there was still a lot left over, and I considered those facts in light of techniques that I know to be standard practice in data-driven marketing.

With this latest privacy change on January 30th, I’m scared.

Facebook has always been slightly worse than all the other tech companies with dodgy privacy records, but now it’s in its own league. Getting off isn’t just necessary to protect yourself; it’s necessary to protect your friends and family too. This could be the point of no return – but it’s not too late to take back control.

A short list of some Facebook practices

It’s not just what Facebook says it will take from you and do with your information; it’s all the things it’s not saying, and doing anyway through the loopholes it creates for itself in its Terms of Service, and how casually it goes back on its word. We don’t even need to click “I agree” anymore. They just change the privacy policy, and by staying on Facebook, you agree. Oopsy!

Facebook doesn’t keep any of your data safe or anonymous, no matter how much you lock down your privacy settings. Those are all a decoy. There are very serious privacy breaches – like selling your product endorsement to advertisers and politicians, tracking everything you read on the internet, or using data from your friends to learn private things about you – and they have no off switch.

Facebook gives your data to “third parties” through your use of apps, and then says that’s you doing it, not them. Every time you use an app, you’re allowing Facebook to escape its own privacy policy with you and with your friends. It’s like when my brother used to make me punch myself and ask, “why are you punching yourself?” Then he’d tell my mum it wasn’t his fault.

As I dug in, I discovered all the spying Facebook does – which I double-checked against articles from big, reputable news sources and academic studies that were heavily scrutinised. It sounds nuts when you put it all together!
  • They have created, and continue to create, false endorsements for products from you to your friends - and they never reveal this to you.
  • When you see a Like button on the web, Facebook is tracking that you’re reading that page. It scans the keywords on that page and associates them with you. It knows how much time you spend on different sites and topics.
  • They read your private messages and the contents of the links you send privately.
  • They’ve introduced features that turn your phone’s mic on – based on their track record of changing privacy settings, audio surveillance is likely to start happening without your knowledge.
  • They can use face recognition to track your location through pictures, even those that aren’t on Facebook. (Pictures taken with mobile phones have time, date and GPS data built into them.)
  • They’ve used snitching campaigns to trick people’s friends into revealing information about them that they chose to keep private.
  • They use the vast amount of data they have on you – your likes, the things you read, the things you type but don’t post – to make highly accurate models of who you are, even if you make a point of keeping these things secret. There are statistical techniques, used in marketing for decades, that find correlating patterns between someone’s behaviour and their attributes. Even if you never posted anything, they can easily work out your age, gender, sexual orientation and political views. When you post, they work out much more. Then they reveal it to banks, insurance companies, governments and, of course, advertisers.
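Those correlation techniques are nothing exotic. Here is a deliberately crude toy sketch – entirely made-up data, a made-up model, nothing to do with Facebook’s actual systems – showing how binary “likes” can predict an attribute a person never disclosed:

```python
# Toy illustration (made-up data): inferring a hidden binary attribute
# (say, a political view) from "like" patterns via simple per-feature
# correlations. Real systems use vastly more data and better models;
# the underlying principle is the same.

def train(likes, attribute):
    """For each page, measure how much liking it correlates with the attribute.

    likes: list of rows, one per user; each row is a list of 0/1 flags
           (did this user like page j?)
    attribute: list of 0/1 hidden labels, one per user
    Returns one weight per page: P(attr | liked) - P(attr | not liked).
    """
    n_pages = len(likes[0])
    weights = []
    for j in range(n_pages):
        liked   = [a for row, a in zip(likes, attribute) if row[j] == 1]
        unliked = [a for row, a in zip(likes, attribute) if row[j] == 0]
        p1 = sum(liked) / len(liked) if liked else 0.5
        p0 = sum(unliked) / len(unliked) if unliked else 0.5
        weights.append(p1 - p0)
    return weights

def predict(weights, row):
    """Sum the weights of liked pages; positive score -> guess attribute = 1."""
    score = sum(w for w, liked in zip(weights, row) if liked)
    return 1 if score > 0 else 0

# Made-up training set: 3 pages, 6 users; pages 0 and 1 correlate with attr=1.
likes = [[1,1,0], [1,0,0], [1,1,1], [0,0,1], [0,0,1], [0,1,0]]
attr  = [1, 1, 1, 0, 0, 0]
w = train(likes, attr)
print(predict(w, [1,1,0]))  # likes the two "attr=1" pages -> guesses 1
print(predict(w, [0,0,1]))  # -> guesses 0
```

Even this few-line version never needs you to state the attribute: it only needs other users who did, which is exactly why “I never posted about it” offers no protection.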

“I have nothing to hide”

A lot of people aren’t worried about this, feeling they have nothing to hide. Why would they care about little old me? Why should I worry about this when I’m not doing anything wrong?

One of the more obvious problems here is with insurance companies. The data they have on you is mined to predict your future. Consider the now-famous story of the pregnant teenager outed by the store Target, which mined her purchase data – larger handbags, headache pills, tissues – and sent her a “congratulations” message as marketing, which her unknowing father got instead. Oops!

The same is done to you, and the results are revealed to other companies without your control.

by Salim Varani |  Read more:
Image: uncredited

What Scientific Term or Concept Ought to be More Widely Known?

Of course, not everyone likes the idea of spreading scientific understanding. Remember what the Bishop of Birmingham’s wife is reputed to have said about Darwin’s claim that human beings are descended from monkeys: "My dear, let us hope it is not true, but, if it is true, let us hope it will not become generally known."

Introduction: Scientia

Of all the scientific terms or concepts that ought to be more widely known to help to clarify and inspire science-minded thinking in the general culture, none are more important than “science” itself.

Many people, even many scientists, have traditionally had a narrow view of science as controlled, replicated experiments performed in the laboratory—and as consisting quintessentially of physics, chemistry, and molecular biology. The essence of science is conveyed by its Latin etymology: scientia, meaning knowledge. The scientific method is simply that body of practices best suited for obtaining reliable knowledge. The practices vary among fields: the controlled laboratory experiment is possible in molecular biology, physics, and chemistry, but it is either impossible, immoral, or illegal in many other fields customarily considered sciences, including all of the historical sciences: astronomy, epidemiology, evolutionary biology, most of the earth sciences, and paleontology. If the scientific method can be defined as those practices best suited for obtaining knowledge in a particular field, then science itself is simply the body of knowledge obtained by those practices.

Science—that is, reliable methods for obtaining knowledge—is an essential part of psychology and the social sciences, especially economics, geography, history, and political science. Not just the broad observation-based and statistical methods of the historical sciences but also detailed techniques of the conventional sciences (such as genetics and molecular biology and animal behavior) are proving essential for tackling problems in the social sciences. Science is nothing more nor less than the most reliable way of gaining knowledge about anything, whether it be the human spirit, the role of great figures in history, or the structure of DNA.

It is in this spirit of Scientia that Edge, on the occasion of its 20th anniversary, is pleased to present the Edge Annual Question 2017. Happy New Year!

—John Brockman, Editor, January 1, 2017

*****

Richard H. Thaler
Father of Behavioral Economics; Director, Center for Decision Research, University of Chicago Graduate School of Business; Author, Misbehaving

The Premortem

Before a major decision is taken, say to launch a new line of business, write a book, or form a new alliance, those familiar with the details of the proposal are given an assignment. Assume we are at some time in the future when the plan has been implemented, and the outcome was a disaster. Write a brief history of that disaster.

Applied psychologist Gary Klein came up with “The Premortem,” which was later written about by Daniel Kahneman. Of course we are all too familiar with the more common postmortem that typically follows any disaster, along with the accompanying finger pointing. Such postmortems inevitably suffer from hindsight bias, also known as Monday-morning quarterbacking, in which everyone remembers thinking that the disaster was almost inevitable. As I often heard Amos Tversky say, “the handwriting may have been written on the wall all along. The question is: was the ink invisible?”

There are two reasons why premortems might help avert disasters. (I say might because I know of no systematic study of their use. Organizations rarely allow such internal decision making to be observed and recorded.) First, explicitly going through this exercise can overcome the natural organizational tendencies toward groupthink and overconfidence. A devil’s advocate is unpopular anywhere. The premortem procedure gives cover to a cowardly skeptic who otherwise might not speak up. After all, the entire point of the exercise is to think of reasons why the project failed. Who can be blamed for thinking of some unforeseen problem that would otherwise be overlooked in the excitement that usually accompanies any new venture?

The second reason a premortem can work is subtle. Starting the exercise by assuming the project has failed, and then thinking about why that might have happened, creates the illusion of certainty, at least hypothetically. Laboratory research shows that asking “why did it fail?” rather than “why might it fail?” gets the creative juices flowing. (The same principle can work in finding solutions to tough problems. Assume the problem has been solved, and then ask: how did it happen? Try it!)

An example illustrates how this can work. Suppose a couple of years ago an airline CEO had invited top management to conduct a premortem on this hypothetical disaster: all of our airline’s flights around the world have been cancelled for two straight days. Why? Of course, many will immediately think of some act of terrorism. But real progress will be made by thinking of much more mundane explanations. Suppose someone timidly suggests that the cause was that the reservation system crashed and the backup system did not work properly.

Had this exercise been conducted, it might have prevented a disaster for a major airline that cancelled nearly 2000 flights over a three-day period. During much of that time, passengers could not get any information because the reservation system was down. What caused this fiasco? A power surge blew a transformer and critical systems and network equipment didn’t switch over to backups properly. This havoc was all initiated by the equivalent of blowing a fuse.

This episode was bad, but many companies that were once household names and no longer exist might still be thriving if they had conducted a premortem with the question: it is three years from now and we are on the verge of bankruptcy. How did this happen?

And, how many wars might not have been started if someone had first asked: We lost. How? (...)

*****

Joichi Ito
Director, MIT Media Lab; Coauthor (with Jeff Howe), Whiplash: How to Survive Our Faster Future

Neurodiversity

Humans have diversity in neurological conditions. While some, such as autism, are considered disabilities, many argue that they are the result of normal variations in the human genome. The neurodiversity movement is an international civil rights movement arguing that autism shouldn’t be “cured” and that it is an authentic form of human diversity that should be protected.

In the early 1900s eugenics and the sterilization of people considered genetically inferior were scientifically sanctioned ideas, with outspoken advocates like Theodore Roosevelt, Margaret Sanger, Winston Churchill and US Supreme Court Justice Oliver Wendell Holmes Jr. The horror of the Holocaust, inspired by the eugenics movement, demonstrated the danger and devastation these programs can exact when put into practice.

Temple Grandin, an outspoken spokesperson for autism and neurodiversity, argues that Albert Einstein, Wolfgang Mozart and Nikola Tesla would have been diagnosed on the “autistic spectrum” if they were alive today. She also believes that autism has long contributed to human development and that “without autism traits we might still be living in caves.” Today, non-neurotypical children often suffer through remedial programs in the traditional educational system, only to be discovered to be geniuses later. Many of these kids end up at MIT and other research institutes.

With the invention of CRISPR, editing the human genome at scale has suddenly become feasible. The initial applications under development involve “fixing” genetic mutations that cause debilitating diseases, but they also take us down a path with the potential to eliminate not only autism but much of the diversity that makes human society flourish. Our understanding of the human genome is rudimentary enough that it will be some time before we are able to enact complex changes involving things like intelligence or personality, but it’s a slippery slope. I saw a business plan a few years ago that argued that autism was just “errors” in the genome that could be identified and “corrected” in the manner of “de-noising” a grainy photograph or audio recording.

Clearly some children born with autism are in states that require intervention and have debilitating issues. However, our attempts to “cure” autism, either through remediation or eventually through genetic engineering, could result in the eradication of a neurological diversity that drives scholarship, innovation, arts and many of the essential elements of a healthy society.

We know that diversity is essential for healthy ecosystems. We see how agricultural monocultures have created fragile and unsustainable systems.

My concern is that even if we come to understand that neurological diversity is essential for our society, we will still develop the tools for designing away any risky traits that deviate from the norm – and that, given a choice, people will tend to opt for a neurotypical child.

As we march down the path of genetic engineering to eliminate disabilities and disease, it’s important to be aware that this path, while more scientifically sophisticated, has been followed before with unintended and possibly irreversible consequences and side-effects.

by Edge.org |  Read more:
Image: "Spiders 2013" by Katinka Matson

Spiritual Sedona: the Arizona Town Bursting With Positive Vibes

[ed. It's bursting with something, for sure.]

Locals call Sedona, Arizona, a cathedral without walls. It’s not just the landscape – those red cliffs, mesas rearing up against a crisp and empty sky, that inspired Hollywood producers of the 1930s and 40s to shoot westerns such as Broken Arrow and Stagecoach in the area. Three million tourists a year come to this town of barely 10,000, nestled among towering rusty sandstone rock formations in the northern Verde valley. Many of these visitors are pilgrims, particularly at this time of year, headed to Arizona in search of spiritual renewal.

Sedona has no major churches, no relics, no established holy sites. But what it does have are “vortexes” – a series of unmarked points around Sedona’s various cliffs that locals and visitors alike imbue with new-age significance.

Where that significance comes from – like the actual number of vortexes in Sedona, which varies from guide to guide – is subject to debate. Locals cite legends about the area’s sanctity to local Native American tribes. However, Sedona didn’t become America’s new age capital until the 1980s, when a US psychic named Page Bryant identified the vortexes after a vision. These vortexes were places where spiritual energy was at its highest point, where you could tap into the frequencies of the universe, where you could, by closing your eyes, start to change your life. Spiritual seekers across the country listened. In 1987, Sedona was host to one of the largest branches of the Harmonic Convergence – a new age synchronised meditation – when 5,000 pilgrims came to get in touch with the universe at the Bell Rock butte, believed by many to be a vortex.

Now, among the juniper trees, you can find strip-malls full of crystal shops, aura-reading stations and psychics. At ChocolaTree Organic Eatery, shiva lingams – statues normally associated with Hindu temples – stand against the walls; next door, a UFO-themed diner called ET Encounter (formerly the Red Planet) serves Roswell-themed burgers and old Star Trek episodes play on the TV. Every other office along the state route running through town offers a “spiritual tour” of the vortexes. The national forests are full of small cairns people have left as spiritual offerings. These are regularly removed by forest service rangers in order to preserve the site’s ecological integrity.

Many of Sedona’s businesses are also geared towards wellbeing and purification, if not enlightenment: the town’s highest-end “hotel”, L’Auberge de Sedona (rooms from $270), which consists of luxury cottages and lodges, supplements traditional spa offerings with an outdoor “creekside massage”, where guests are invited to dip their feet in the river and squelch mud between their toes, washing off the dirt with creek water scented with flower petals. My own hotel, the Sedona Rouge (doubles from $150 B&B), a ranch-inspired boutique hotel near Coffee Pot Rock, which towers over western Sedona, offers guests morning poolside yoga sessions before their turmeric-tofu breakfast scrambles. (...)

It’s easy to be sceptical about Sedona. The relentless barrage of wellness and self-improvement-focused tourism can border on the cloying (after a delicately-spiced breakfast of quinoa and almond milk at ChocolaTree, I find myself all but begging a waitress at a nearby downmarket diner to give me the strongest, worst-quality filter coffee she can find). My vortex tour with Mark Griffon of Sedona Mystical Tours ($135, three hours) – who starts off the morning with a sage cleansing near a stone-circle “medicine wheel” he’s assembled himself in his backyard – is at times uncomfortably intense, as one of the attendees breaks down into sobs during a meditation against a juniper tree called Fred.

by Tara Isabella Burton, The Guardian | Read more:
Image: Alamy

Thursday, January 5, 2017

Let There Be Light

Two blue flames, each reaching more than one thousand degrees Celsius, converge on a small glass tube. It takes a few seconds before the pinky-width cylinder bursts into an orange flare, like a marshmallow about to char. That’s when Andrew Hibbs begins to work his magic. He spins the glass with his bare fingertips, waving it across the flames to distribute the heat. Then, using a rubber hose that hangs between his lips like a reed, he breathes life into the glass. In one smooth gesture, he curls it up into an arc: the first bend for a neon sign that will eventually read “It was all a dream.” The piece is one of the hundred or so that Hibbs will create this year, each selling for upwards of a thousand dollars.

We are in a nondescript warehouse, tucked away in the scrubby, industrial outskirts of Vancouver. “It’s a bit like a science lab in here, isn’t it?” Hibbs says, offering me a tour. His workbench is covered with sheets of brown tracing paper and archaic-looking drawing tools, which he uses to hand-render patterns for new signs. At the far end is the pumper table: a series of black knobs and dials mounted to a wooden counter with the tops of two neon-filled canisters poking through. In the middle of the shop floor stand three chest-high torches, known as crossfires, where glass tubes are heated and shaped into the sinuous curves neon lights are famous for.

At twenty-nine, Hibbs is an anomaly—a young master of a dying art. He started learning the trade by his father’s side at thirteen, helping out in their backyard workshop. His father showed him how to pump neon into the glass tubes and repair broken signs before slowly teaching his young apprentice the craft of bending. “It takes about five years to get decent at it,” Hibbs explains. He then holds out his hands: scars caused by shattered glass run up and down his fingers. Their tips are polished smooth from repeated burns.

Over the past few years, Hibbs has been leading a neon revival of sorts in Vancouver. His work has been featured by the Juno Awards as well as a host of local media, including Breakfast Television and the Georgia Straight. In 2014, he turned heads with a towering three-storey advertisement for a high-rise beside the Granville Street Bridge that read “Gesamtkunstwerk” (a German phrase meaning “complete work of art”). “It was all a dream,” like much of his work, will be sold to an upscale private buyer.

Hibbs explains that he is one of the few neon sign-makers left in Vancouver. Most, like his father, have reached or are nearing retirement. It’s a far cry from the art’s 1950s glory days, when the city had some 19,000 glowing signs rising above its streets—roughly one for every eighteen residents. At its height, Vancouver reportedly had more neon per capita than New York, Tokyo and even Las Vegas. During that period, dozens of local sign-makers worked overtime to keep up with the demand for bigger, brighter and ever more eye-catching displays. Those days have long since ended.

In recent years, LEDs—cheaper, less finicky and more efficient—have mostly replaced neon in commercial applications. But that’s only part of the story. Neon’s real decline happened decades earlier, when Vancouver’s carnival of lights became the focal point of a bitter aesthetic war that would forever change the city. (...)

By 1940, neon had transformed Vancouver: the city’s dark, wet winters offered a perfect backdrop for its warm, multicoloured glow. Photographs from that era show a metropolis that may look foreign to current residents: gritty streetscapes cluttered with signs and bulletin boards, sidewalks hectic with shoppers and vendors. Granville Street, the heart of the entertainment district, became known as the “Great White Way” for its landing strip of lights that could be seen from blocks away. “As a small city, we were an incredibly urban, vibrant place,” says Atkin. “You would bump into Hollywood stars and all manner of well-known musicians and nightclub performers. The signs encapsulated the exuberance and optimism of that period.”

In the post-war years, Vancouver was home to at least a dozen neon shops, each competing to create ever more outlandish displays: a giant tugboat rocked through waves over the Gulf of Georgia Towing office; the bellows of an antique camera accordioned in and out above a downtown photography shop; a pot-bellied Buddha perched atop the popular Smilin’ Buddha Cabaret nightclub. In those days, neon must have seemed as much a part of the city as the rain itself.

By the early 1960s, anyone driving westbound on Hastings Street would have seen little evidence of that seemingly irrepressible city. Storefronts that previously housed clothiers and jewelry shops were boarded up. Shuttered theatres littered the strip. Streetcars, once the lifeblood of the neighbourhood, were no longer running. Even the storied retailer Eaton’s, the anchor of Hastings’ business district, was struggling—in just a few years, it would move across town to a new mall. One of the only things that hadn’t disappeared were the neon signs.

That decade was a tumultuous time in Vancouver. Middle-class families were moving to the suburbs and other parts of the city, seeking backyards and carports. Plans were being drawn up for an elevated freeway that would slice through the downtown to better serve these new commuters. Neighbourhoods such as the Downtown Eastside became downtrodden. “The life was sucked out of the downtown area,” says Viviane Gosselin, curator of contemporary culture with the Museum of Vancouver. What was left were businesses in seedier areas, she says, and these impoverished pockets soon became associated with neon’s buzz.

Neon, once seen as glamorous, became the emblem of urban decay and was increasingly seen as a beacon for vice. “In a movie, if you wanted to show someone who was down on their luck, you put them in a hotel room, on their bed, in their undershirt, with a flashing red neon sign outside the window,” says Atkin.

Those bright lights had been a way for the young city to assert its prosperity and sophistication. But as Vancouver’s regional population swelled to more than one million residents, its anxieties shifted. Many Vancouverites were less worried about being seen as a big urban centre and more concerned that its man-made excess distracted from the natural beauty of its mountains, ocean and beaches. In 1966, Vancouver Sun writer Tom Ardies opined that the proliferating neon signs were a hideous monstrosity. “They’re outsized, outlandish, and outrageous,” he wrote. “They’re desecrating our buildings, cluttering our streets, and—this is the final indignity—blocking our views to some of the greatest scenery in the world.”

by Brad Badelt, Maisonneuve |  Read more:
Image: Wendy Cutler/Flickr

It May Not Feel Like Anything to Be an Alien

Humans are probably not the greatest intelligences in the universe. Earth is a relatively young planet and the oldest civilizations could be billions of years older than us. But even on Earth, Homo sapiens may not be the most intelligent species for that much longer.

The world Go, chess, and Jeopardy champions are now all AIs. AI is projected to outmode many human professions within the next few decades. And given the rapid pace of its development, AI may soon advance to artificial general intelligence—intelligence that, like human intelligence, can combine insights from different topic areas and display flexibility and common sense. From there it is a short leap to superintelligent AI, which is smarter than humans in every respect, even those that now seem firmly in the human domain, such as scientific reasoning and social skills. Each of us alive today may be one of the last rungs on the evolutionary ladder that leads from the first living cell to synthetic intelligence.

What we are only beginning to realize is that these two forms of superhuman intelligence—alien and artificial—may not be so distinct. The technological developments we are witnessing today may have all happened before, elsewhere in the universe. The transition from biological to synthetic intelligence may be a general pattern, instantiated over and over, throughout the cosmos. The universe’s greatest intelligences may be postbiological, having grown out of civilizations that were once biological. (This is a view I share with Paul Davies, Steven Dick, Martin Rees, and Seth Shostak, among others.) To judge from the human experience—the only example we have—the transition from biological to postbiological may take only a few hundred years.

I prefer the term “postbiological” to “artificial” because the contrast between biological and synthetic is not very sharp. Consider a biological mind that achieves superintelligence through purely biological enhancements, such as nanotechnologically enhanced neural minicolumns. This creature would be postbiological, although perhaps many wouldn’t call it an “AI.” Or consider a computronium that is built out of purely biological materials, like the Cylon Raider in the reimagined Battlestar Galactica TV series.

The key point is that there is no reason to expect humans to be the highest form of intelligence there is. Our brains evolved for specific environments and are greatly constrained by chemistry and historical contingencies. But technology has opened up a vast design space, offering new materials and modes of operation, as well as new ways to explore that space at a rate much faster than traditional biological evolution. And I think we already see reasons why synthetic intelligence will outperform us.

An extraterrestrial AI could have goals that conflict with those of biological life


Silicon microchips already seem to be a better medium for information processing than groups of neurons. Neurons reach a peak speed of about 200 hertz, compared to gigahertz for the transistors in current microprocessors. Although the human brain is still far more intelligent than a computer, machines have almost unlimited room for improvement. It may not be long before they can be engineered to match or even exceed the intelligence of the human brain through reverse-engineering the brain and improving upon its algorithms, or through some combination of reverse engineering and judicious algorithms that aren’t based on the workings of the human brain.

In addition, an AI can be downloaded to multiple locations at once, is easily backed up and modified, and can survive under conditions that biological life has trouble with, including interstellar travel. Our measly brains are limited by cranial volume and metabolism; superintelligent AI, in stark contrast, could extend its reach across the Internet and even set up a Galaxy-wide computronium, utilizing all the matter within our galaxy to maximize computations. There is simply no contest. Superintelligent AI would be far more durable than us.

Suppose I am right. Suppose that intelligent life out there is postbiological. What should we make of this? Here, current debates over AI on Earth are telling. Two of the main points of contention—the so-called control problem and the nature of subjective experience—affect our understanding of what other alien civilizations may be like, and what they may do to us when we finally meet.

Ray Kurzweil takes an optimistic view of the postbiological phase of evolution, suggesting that humanity will merge with machines, reaching a magnificent technotopia. But Stephen Hawking, Bill Gates, Elon Musk, and others have expressed the concern that humans could lose control of superintelligent AI, as it can rewrite its own programming and outthink any control measures that we build in. This has been called the “control problem”—the problem of how we can control an AI that is both inscrutable and vastly intellectually superior to us. (...)

Why would nonconscious machines have the same value we place on biological intelligence?

...Raw intelligence is not the only issue to worry about. We tend to assume that if we encountered advanced alien intelligence, we would meet creatures with very different biologies, but that they would still have minds like ours in an important sense—there would be something it is like, from the inside, to be them. Consider that every moment of your waking life, and whenever you are dreaming, it feels like something to be you. When you see the warm hues of a sunrise, or smell the aroma of freshly baked bread, you are having conscious experience. Likewise, there is also something that it is like to be an alien—or so we commonly assume. That assumption needs to be questioned, though. Would superintelligent AIs even have conscious experience and, if they did, could we tell? And how would their inner lives, or lack thereof, impact us?

The question of whether AIs have an inner life is key to how we value their existence. Consciousness is the philosophical cornerstone of our moral systems, being key to our judgment of whether someone or something is a self or person rather than a mere automaton. And conversely, whether they are conscious may also be key to how they value us. The value an AI places on us may well hinge on whether it has an inner life; using its own subjective experience as a springboard, it could recognize in us the capacity for conscious experience. After all, to the extent we value the lives of other species, we value them because we feel an affinity of consciousness—thus most of us recoil from killing a chimp, but not from munching on an apple.

But how can beings with vast intellectual differences and that are made of different substrates recognize consciousness in each other? Philosophers on Earth have pondered whether consciousness is limited to biological phenomena. Superintelligent AI, should it ever wax philosophical, could similarly pose a “problem of biological consciousness” about us, asking whether we have the right stuff for experience.

Who knows what intellectual path a superintelligence would take to tell whether we are conscious. But for our part, how can we humans tell whether an AI is conscious? Unfortunately, this will be difficult. Right now, you can tell you are having experience, as it feels like something to be you. You are your own paradigm case of conscious experience. And you believe that other people and certain nonhuman animals are likely conscious, for they are neurophysiologically similar to you. But how are you supposed to tell whether something made of a different substrate can have experience?

by Susan Schneider, Kurzweil Accelerating Intelligence | Read more:
Image: YouTube/Warner Bros

byung hoon choi, water meditation
via:

Wednesday, January 4, 2017


Shoji Ueda
via:

It's Not Just Blue-Collar Jobs

[ed. I can't stress this enough (and have been harping on it for years): AI is coming for your job. Assembly lines and other forms of manual labor are low-hanging fruit. Next come "knowledge workers": anyone whose job relies on retrieving and processing information. Doctors, lawyers, accountants, utilities managers, every form of clerical and managerial worker, mappers, engineers, pilots, weather forecasters, and so on, and so on. I give it ten, fifteen years at the most.]

Manufacturing jobs have already been decimated by robots. White collar workers are next in line.

Fukoku Mutual Life Insurance in Japan is about to replace claim adjusters with a software robot from IBM.

Most of the attention around automation focuses on how factory robots and self-driving cars may fundamentally change our workforce, potentially eliminating millions of jobs. But AI systems that can handle knowledge-based, white-collar work are also becoming increasingly competent.

One Japanese insurance company, Fukoku Mutual Life Insurance, is reportedly replacing 34 human insurance claim workers with “IBM Watson Explorer,” starting by January 2017.

The AI will scan hospital records and other documents to determine insurance payouts, according to a company press release, factoring injuries, patient medical histories, and procedures administered. Automation of these research and data gathering tasks will help the remaining human workers process the final payout faster, the release says.

Fukoku Mutual will spend $1.7 million (200 million yen) to install the AI system, and $128,000 per year for maintenance, according to Japan’s The Mainichi. The company saves roughly $1.1 million per year on employee salaries by using the IBM software, meaning it hopes to see a return on the investment in less than two years.
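The payback arithmetic implied by those figures is easy to check: net annual savings are the salary savings minus the annual maintenance cost, and the payback period is the installation cost divided by that net figure. A quick sketch using the numbers reported by The Mainichi:

```python
# Figures reported by The Mainichi for Fukoku Mutual's Watson deployment (USD).
install_cost = 1_700_000          # one-time installation
maintenance_per_year = 128_000    # annual maintenance
salary_savings_per_year = 1_100_000  # annual salary savings

# Net yearly benefit after maintenance, and years to recoup the install cost.
net_savings_per_year = salary_savings_per_year - maintenance_per_year
payback_years = install_cost / net_savings_per_year
print(f"Payback period: {payback_years:.2f} years")  # ~1.75 years, under two
```

This is consistent with the company's stated hope of recouping the investment in less than two years.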

Watson AI is expected to improve productivity by 30%, Fukoku Mutual says. The company was encouraged by its use of similar IBM technology to analyze customers' voices during complaints. The software typically takes the customer's words, converts them to text, and analyzes whether those words are positive or negative. Similar sentiment-analysis software is also being used by a range of US companies for customer service; incidentally, a large benefit of the software is understanding when customers get frustrated with automated systems.
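The pipeline described above—transcribed customer speech in, a positive-or-negative judgment out—can be sketched with a toy lexicon-based scorer. The word lists and scoring rule here are illustrative assumptions for the sake of the sketch, not IBM's actual method, which is far more sophisticated:

```python
import re

# Toy sentiment lexicons; real systems use far larger, weighted vocabularies.
POSITIVE = {"thanks", "great", "helpful", "resolved", "happy"}
NEGATIVE = {"angry", "useless", "waiting", "frustrated", "cancel"}

def sentiment(transcript: str) -> str:
    """Classify a transcribed utterance by counting polarity words."""
    words = re.findall(r"[a-z']+", transcript.lower())
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("I am frustrated with all this waiting"))  # negative
print(sentiment("Thanks, that was very helpful"))          # positive
```

A production system would also track sentiment over the course of a call—which is how software like this can flag the moment a customer starts losing patience with an automated menu.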

The Mainichi reports that three other Japanese insurance companies are testing or implementing AI systems to automate work such as finding ideal plans for customers. An Israeli insurance startup, Lemonade, has raised $60 million on the idea of “replacing brokers and paperwork with bots and machine learning,” says CEO Daniel Schreiber.

by Mike “Mish” Shedlock, Mish Talk | Read more:
Image: Herman J. Knippertz

Politics 101

Trump and the Batman Effect

He's Making a List

The Republican Party’s Corruption Will Bring Them Down—Again

Forecast 2017: When the Wheels Finally Come Off

WashPost Is Richly Rewarded for False News About Russia Threat While Public Is Deceived

[ed. I think the GOP will realize soon enough (if the Ethics Committee fiasco this week was any indication) that they're being punked just as much as the Dems. In fact, they have a much bigger problem: they actually have to advance whatever harebrained agenda Trump decides to pursue. There are big egos in Congress; it'll be interesting to see how they deal with being lapdogs in their own party.]

Miroslava Rakovic

via:

Schadenfreude with Bite

Trolls are the self-styled pranksters of the internet, a subculture of wind-up merchants who will say anything they can to provoke unwary victims, then delight in the outrage that follows. When Mitchell Henderson, a 12-year-old boy from Minnesota, killed himself in 2006, trolls descended on his MySpace page, where his friends and relatives were posting tributes. The trolls were especially taken with the fact that Henderson had lost his iPod days before his death. They posted messages implying that his suicide was a frivolous response to consumerist frustration: ‘first-world problems’. One post contained an image of the boy’s gravestone with an iPod resting against it.

What’s so funny about trolling? ‘Every joke calls for a public of its own,’ Freud said, ‘and laughing at the same jokes is evidence of far-reaching psychical conformity.’ To understand a joke is to share a culture or, more precisely, to be on the same side of an antagonism. Trolls do what they do for the ‘lulz’ (a corruption of ‘LOL’, Laughing Out Loud), a form of enjoyment that derives from someone else’s anguish. Whitney Phillips, whose research has involved years of participant-observation of trolls, describes lulz as schadenfreude with more bite. The more furious and upset the Henderson family became, the funnier the trolls found it.

In 2011, one of these ‘RIP trolls’, Sean Duffy, a 25-year-old from Reading, was jailed for posting messages online about dead teenage girls. He called Natasha MacBryde, who had killed herself aged 15, a ‘slut’; on Mothers’ Day he posted a message on the memorial page of 14-year-old Lauren Drew, who had died after an epileptic fit: ‘Help me mummy, it’s hot in hell.’ Often, trolls gang up on their targets. Phillips details the case of a Californian teenager called Chelsea King, who was raped and murdered in February 2010. Her relatives were treated as fair game, and supportive strangers who tried to intervene were themselves tracked down and hounded.

RIP trolling treats grief as an exploitable state. It isn’t that the trolls care one way or another about the person who has died. It’s that they regard caring too much about anything as a fault deserving punishment. You can see evidence of this throughout the trolling subculture, even in more innocuous instances. In one case, participants phoned video-game stores to inquire about the non-existent sequel to an outdated game. They called so persistently that the workers answering the phone would fly into a rage at the mention of the game, to the amusement of the trolls. The supreme currency of trolling is exploitability, and the supreme vice is taking anything too seriously. Grieving parents are among the easiest to exploit – their rage and sorrow are closest to the surface – but no one is invulnerable.

The controlled cruelty of the wind-up didn’t need trolls to invent it. In the pre-internet era, it perhaps seemed more innocent: Candid Camera; Jeremy Beadle duping a hapless member of the public. The ungovernable rage of the unwitting victim is always funny to someone, and invariably there is sadistic detachment in the amusement. The trolls’ innovation has been to add a delight in nonsense and detritus: calculated illogicality, deliberate misspellings, an ironic recycling of cultural nostalgia, sedimented layers of opaque references and in-jokes. Trolling, as Phillips puts it, is the ‘latrinalia’ of popular culture: the writing on the toilet wall.

Trolls are also distinguished from their predecessors by seeming not to recognise any limits. Ridicule is an anti-social force: it tends to make people clam up and stop talking. So there is a point at which, if conversation and community are to continue, the joke has to stop, and the victim be let in on the laughter. Trolls, though, form a community precisely around the extension of their transgressive sadism beyond the limits of their offline personas. That the community consists almost entirely of people with no identifying characteristics – ‘anons’ – is part of the point. It is as if the laughter of the individual troll were secondary; the primary goal is to sustain the pleasure of the anonymous collective. (...)

If the yield of trolling is the outcry of the aggrieved, it depends utterly on the preservation of value. Trolls depend on there being enough people who care about enough things – an indifferent shrug means failure. The choice of victim almost always conveys a moral position on what it is more or less appropriate to care about. RIP trolls are most incensed by the suicides of seemingly privileged white people; they see such deaths as self-indulgent, and public displays of grief over them as a façade, as one troll put it, for ‘boredom and a pathological need for attention’. Other campaigns, such as the trolling of the National Security Agency after the exposure of its extensive wiretapping, suggest that another cardinal sin for trolls is the suppression or misuse of information.

The troll has it both ways. He is magnificently indifferent to social norms, which he transgresses for the lulz, yet often at the same time a vengeful punisher: both the Joker and Batman. The troll acts ‘as a self-appointed cultural critic’ in a tradition of clowns and jesters, according to Benjamin Radford, while simultaneously ‘plausibly maintaining that it’s all in good fun and shouldn’t be taken (too) seriously’. According to John Lindow’s ‘unnatural history’ of trolls, the original trolls of Scandinavian folklore punished improper behaviour and upheld social norms. If you take the behavioural code of lulz seriously and erase any commitment to social norms, what you are left with is the logic of punishment in its distilled form: if even the grieving are punishable, who isn’t? ‘None of us,’ goes the refrain, ‘is as cruel as all of us.’ It is around this principle that the most infamous trolling community forged its identity: ‘We are Anonymous, and we do not forgive.’ And what goes unforgiven is weakness.

Sociological analyses of ‘online deviancy’ tend to focus on such traits as Machiavellianism, narcissism, psychopathy and sadism. Phillips debunks all this. It does little more, she says, than redescribe the phenomena with a particular moral accent, while asking us to take for granted the meaningfulness of the categories (‘deviancy’, ‘personality type’) used. Instead, she stresses the role of mainstream culture, arguing that trolls are ‘agents of cultural digestion’.

by Richard Seymour, LRB |  Read more:
Image: via: