Thursday, November 3, 2016

Crony Beliefs

For as long as I can remember, I've struggled to make sense of the terrifying gulf that separates the inside and outside views of beliefs.

From the inside, via introspection, each of us feels that our beliefs are pretty damn sensible. Sure we might harbor a bit of doubt here and there. But for the most part, we imagine we have a firm grip on reality; we don't lie awake at night fearing that we're massively deluded.

But when we consider the beliefs of other people? It's an epistemic shit show out there. Astrology, conspiracies, the healing power of crystals. Aliens who abduct Earthlings and build pyramids. That vaccines cause autism or that Obama is a crypto-Muslim — or that the world was formed some 6,000 years ago, replete with fossils made to look millions of years old. How could anyone believe this stuff?!

No, seriously: how?

Let's resist the temptation to dismiss such believers as "crazy" — along with "stupid," "gullible," "brainwashed," and "needing the comfort of simple answers." Surely these labels are appropriate some of the time, but once we apply them, we stop thinking. This isn't just lazy; it's foolish. These are fellow human beings we're talking about, creatures of our same species whose brains have been built (grown?) according to the same basic pattern. So whatever processes beget their delusions are at work in our minds as well. We therefore owe it to ourselves to try to reconcile the inside and outside views. Because let's not flatter ourselves: we believe crazy things too. We just have a hard time seeing them as crazy.

So, once again: how could anyone believe this stuff? More to the point: how could we end up believing it?

After struggling with this question for years and years, I finally have an answer I'm satisfied with.

Beliefs as Employees

By way of analogy, let's consider how beliefs in the brain are like employees at a company. This isn't a perfect analogy, but it'll get us 70% of the way there.

Employees are hired because they have a job to do, i.e., to help the company accomplish its goals. But employees don't come for free: they have to earn their keep by being useful. So if an employee does his job well, he'll be kept around, whereas if he does it poorly — or makes other kinds of trouble, like friction with his coworkers — he'll have to be let go.

Similarly, we can think about beliefs as ideas that have been "hired" by the brain. And we hire them because they have a "job" to do, which is to provide accurate information about the world. We need to know where the lions hang out (so we can avoid them), which plants are edible or poisonous (so we can eat the right ones), and who's romantically available (so we know whom to flirt with). The closer our beliefs hew to reality, the better actions we'll be able to take, leading ultimately to survival and reproductive success. That's our "bottom line," and that's what determines whether our beliefs are serving us well. If a belief performs poorly — by inaccurately modeling the world, say, and thereby leading us astray — then it needs to be let go.

I hope none of this is controversial. But here's where the analogy gets interesting.

Consider the case of Acme Corp., a property development firm in a small town called Nepotsville. The unwritten rule of doing business in Nepotsville is that companies are expected to hire the city council's friends and family members. Companies that make these strategic hires end up getting their permits approved and winning contracts from the city. Meanwhile, companies that "refuse to play ball" find themselves getting sued, smeared in the local papers, and shut out of new business.

In this environment, Acme faces two kinds of incentives, one pragmatic and one political. First, like any business, it needs to complete projects on time and under budget. And in order to do that, it needs to act like a meritocracy, i.e., by hiring qualified workers, monitoring their performance, and firing those who don't pull their weight. But at the same time, Acme also needs to appease the city council. And thus it needs to engage in a little cronyism, i.e., by hiring workers who happen to be well-connected to the city council (even if they're unqualified) and preventing those crony workers from being fired (even when they do shoddy work).

Suppose Acme has just decided to hire the mayor's nephew Robert as a business analyst. Robert isn't even remotely qualified for the role, but it's nevertheless in Acme's interests to hire him. He'll "earn his keep" not by doing good work, but by keeping the mayor off the company's back.

Now suppose we were to check in on Robert six months later. If we didn't already know he was a crony, we might easily mistake him for a regular employee. We'd find him making spreadsheets, attending meetings, drawing a salary: all the things employees do. But if we look carefully enough — not at Robert per se, but at the way the company treats him — we're liable to notice something fishy. He's terrible at his job, and yet he isn't fired. Everyone cuts him slack and treats him with kid gloves. The boss tolerates his mistakes and even works overtime to compensate for them. God knows, maybe he's even promoted.

Clearly Robert is a different kind of employee, a different breed. The way he moves through the company is strange, as if he's governed by different rules, measured by a different yardstick. He's in the meritocracy, but not of the meritocracy.

And now the point of this whole analogy.

I contend that the best way to understand all the crazy beliefs out there — aliens, conspiracies, and all the rest — is to analyze them as crony beliefs. Beliefs that have been "hired" not for the legitimate purpose of accurately modeling the world, but rather for social and political kickbacks.

As Steven Pinker says,
People are embraced or condemned according to their beliefs, so one function of the mind may be to hold beliefs that bring the belief-holder the greatest number of allies, protectors, or disciples, rather than beliefs that are most likely to be true.
In other words, just like Acme, the human brain has to strike an awkward balance between two different reward systems:
  • Meritocracy, where we monitor beliefs for accuracy out of fear that we'll stumble by acting on a false belief; and
  • Cronyism, where we don't care about accuracy so much as whether our beliefs make the right impressions on others.
And so we can roughly (with caveats we'll discuss in a moment) divide our beliefs into merit beliefs and crony beliefs. Both contribute to our bottom line — survival and reproduction — but they do so in different ways: merit beliefs by helping us navigate the world, crony beliefs by helping us look good.

The point is, our brains are incredibly powerful organs, but their native architecture doesn't care about high-minded ideals like Truth. They're designed to work tirelessly and efficiently — if sometimes subtly and counterintuitively — in our self-interest. So if a brain anticipates that it will be rewarded for adopting a particular belief, it's perfectly happy to do so, and doesn't much care where the reward comes from — whether it's pragmatic (better outcomes resulting from better decisions), social (better treatment from one's peers), or some mix of the two. A brain that didn't adopt a socially-useful (crony) belief would quickly find itself at a disadvantage relative to brains that are more willing to "play ball." In extreme environments, like the French Revolution, a brain that rejects crony beliefs, however spurious, may even find itself forcibly removed from its body and left to rot on a pike. Faced with such incentives, is it any wonder our brains fall in line?

Even mild incentives, however, can still exert pressure on our beliefs. Russ Roberts tells the story of a colleague who, at a picnic, started arguing for an unpopular political opinion — that minimum wage laws can cause harm — whereupon there was a "frost in the air" as his fellow picnickers "edged away from him on the blanket." If this happens once or twice, it's easy enough to shrug off. But when it happens again and again, especially among people whose opinions we care about, sooner or later we'll second-guess our beliefs and be tempted to revise them.

Mild or otherwise, these incentives are also pervasive. Everywhere we turn, we face pressure to adopt crony beliefs. At work, we're rewarded for believing good things about the company. At church, we earn trust in exchange for faith, while facing severe sanctions for heresy. In politics, our allies support us when we toe the party line, and withdraw support when we refuse. (When we say politics is the mind-killer, it's because these social rewards completely dominate the pragmatic rewards, and thus we have almost no incentive to get at the truth.) Even dating can put untoward pressure on our minds, insofar as potential romantic partners judge us for what we believe.

If you've ever wanted to believe something, ask yourself where that desire comes from. Hint: it's not the desire simply to believe what's true.

In short: Just as money can pervert scientific research, so everyday social incentives have the potential to distort our beliefs.

Posturing

So far we've been describing our brains as "responding to incentives," which gives them a passive role. But it can also be helpful to take a different perspective, one in which our brains actively adopt crony beliefs in order to strategically influence other people. In other words, we use crony beliefs to posture.

by Kevin Simler, Melting Asphalt |  Read more:
Image: via:

Serj Fedulov
, Waiting for Godot
via:

What’s Your Ideal Community? The Answer Is Political

The American political map that has emerged over the last half-century, with blue cities and red beyond, is a product of both the ideological realignment of the two parties and geographic sorting among voters. It also raises a fascinating question about how our politics are shaped by where we live.

Is it simply that people who are already liberal choose dense urban environments and conservatives choose more suburban living? Or do these places influence how we feel about government — and each other — in ways that make us more liberal or conservative?

Political scientists, fortunately, cannot randomly assign people to cities, suburbs or rural outposts and then wait to see if their politics adapt. But their theories of why density might matter for partisanship add a provocative layer to how we think about the differences among us that are more often defined in an election year by education, income or race.

A large Pew survey two years ago of American political life found that self-described liberals overwhelmingly said they’d prefer to live where the homes are smaller and closer together but where the amenities are within walking distance. Conservatives chose the opposite trade-off: big homes, spaced farther apart, but with schools and restaurants miles away. The question got at a pattern underlying politics today: Beyond our disagreements about taxes, welfare or health care, partisans also fundamentally favor different kinds of places. (...)

Thomas Ogorzalek, a political scientist at Northwestern, argues that liberalism has its roots in big-city governments trying to solve the kinds of local problems that arise when diverse populations cram together. Compared with the suburbs or rural America, cities are more complex. They’re harder to govern, which means in many ways that they demand bigger government: a large transit agency to move people around, intricate parking rules to govern scarce spaces, a garbage truck armada to keep the streets clean.

“Externalities accumulate faster in dense places, and you need to do something about them,” Mr. Ogorzalek said. In other words, the trash piles up.

New York City, with its 24,000 restaurants and bars, needs a system of publicly posted health grades. A town with two restaurants may not. New York needs some colossal bridges connecting Manhattan and Brooklyn. A smaller community doesn’t need public-works projects on that scale. New York requires a large police force. A rural resident may need self-reliance when the closest officer is 10 miles away.

It’s conceivable that people who live in cities come to value more active government. Or they’re more receptive to investing in welfare because they pass the homeless every day. Or they appreciate immigration because their cab rides and lunch depend on immigrants. This argument is partly about the people we’re exposed to in cities (the poor, foreigners), and partly about the logistics of living there.

“As someone who’s lived in cities for almost all of my adult life, it’s impossible to conceive of a well-functioning city without a strong public works and a strong governmental infrastructure,” said Thomas Sugrue, a historian at New York University. Government has actively shaped suburbia, too, for example engineering the mortgage tax breaks that make owning large homes more affordable. But those government interventions are often less visible. “They’re not invisible,” Mr. Sugrue said, “when you’re going down Eighth Street as it’s being repaved and the sewer lines underneath it are being replaced.”

The political analyst William Schneider articulated a similarly plausible idea about the politics of suburbia in a classic 1992 article for The Atlantic. As cities require reliance on the commons, Mr. Schneider argued that “to move to the suburbs is to express a preference for the private over the public.” The suburbs entail private yards over public parks, private cars over public transit, private malls over public squares. Suburban living even buys a kind of private government, Mr. Schneider wrote, with the promise of local control of neighborhood schools and social services that benefit only the people who can afford to live there.

His theory supports self-selection; people who want that environment move to it. But Jessica Trounstine, a political scientist at the University of California, Merced, believes that people who move to the suburbs, apolitically, can also become part of a political ideology that they find benefits them and their pocketbooks.

by Emily Badger, NY Times |  Read more:
Image: Pew Research Center

Wednesday, November 2, 2016

How Instagram is Changing the Way We Eat

I often post pictures of my food online before I have tasted it. I take the photo, adjust the brightness, contrast and saturation, upload it to my social media accounts and rejoice in how amazing it is. Sometimes, when I go on to eat the food in front of me, I don’t even like it. That pretty orange and pistachio thing I made is bitter because the oranges have gone rancid. The photogenic Italian sfogliatella pastry, which I bought more or less entirely to take a photo of, is actually pretty tough. I am left chewing the pastry long after the “likes” have stopped trickling in. The interaction was sweet while it lasted, though.

We love to share our food. Not necessarily in the physical sense, because that would mean giving away something substantive and delicious. That gesture is still reserved for the people around us who we love and care about. But for the rest of the world – the school pals and the random followers and our prying family friends – we share our food online. We are sharing more food in this way than ever before, and a huge amount of this hungry, food-centric media revolves around food photography and short videos on platforms such as Instagram, Snapchat and Facebook.

The annual Waitrose food and drink report, released on Wednesday, focuses on the way in which food has become social currency thanks to how we share and discuss it online. It is impossible to wade through the quagmire of social media without segueing into virtual treasure troves of #foodporn, #instafood and proudly #delicious content.

According to the report, one in five Brits has shared a food photo online or with our friends in the past month. We have managed to forge what looks like a rare pure corner of social media, where pleasure is the order of the day. No matter the poster or the politics, food shines bright as something that all of us can aspire to, if only we curate our lives and our diets carefully enough.

Most of us who document our meals online are amateurs, but there exists a sizeable, and hugely profitable, industry of professional food bloggers and Instagrammers, whose pristine food styling sets the tone for a whole aesthetic movement.

Take Sarah Coates who, off the back of the success of her blog The Sugar Hit and her 36,000 followers on Instagram, has released a cookbook and shaped a particular niche for herself in the online baking world. Hers is a self-avowedly saccharine, indulgent kind of food. Unlike much of the more earnest online food world, her photographs are bright, flooded with light and popping with flashes of colour, vibrancy and life. Punchy tones and patterns give the photos a kind of levity, in spite of the (wonderfully) butter-heavy, cloying sweetness of the food itself. Certain foods become emblems with a life of their own: waffles made in a round waffle-iron; doughnuts glazed or rolled in sugar; funfetti sprinkles. These posts amass huge amounts of interaction from followers, and spawn food trends of their own. First come the savvy Instagrammers, then the foodie public, and then, once we have all moved on to something new, the traditional food press. Glazed doughnuts from the Sugar Hit blog.

Once these Instagram-friendly foods go viral, they can completely change the way we eat. Breakfast, for example, has shifted from a decidedly unphotogenic cereal or marmalade on toast to the bright hues of avocado toast (there are nearly 250,000 #avocadotoast hashtagged photos on Instagram) and smoothie bowls. Even the humble fry-up has been rebranded, in the hands of the Hemsley sisters, as an oven-baked, meticulously arranged, “healthier” big breakfast. It looks great and presumably tastes awful, the oven tray divided into neat strips of colour, from leathery lean oven bacon to overdone eggs.

Among the foods billed to gain traction in 2017, today’s Waitrose report points to Hawaiian poke and even, in an alarming twist, vegetable yoghurts. No doubt these will be helped along in the likability stakes with their colourful, snappy Instagram vibe.

There is a big generation gap in this movement, though. According to the Waitrose report, 18- to 24-year olds are five times more likely to share photos of their food online than the over 55s – and that is certainly reflected in the types of cuisine, styling and tone that are popular in the online food world. So you are unlikely to find photos of old-fashioned sherry trifles, unpretty Irish stew or traditional meat-and-two-veg meals, unless it’s in a shrewdly ironic way.

Instead, there are fun, irreverent Instagram food circles, all funfetti and ice-cream sandwiches, and – in a twist that is so very 2016 that it makes my soul scream – flamingo pool float-shaped cakes. But just as popular are the serious, aspirational channels popularised by accounts such as @violetcakeslondon and @skye_mcalpine. Here, you will find beautifully shot, intricately staged photographs of the food and, crucially, the lifestyles of successful, creative thirtysomethings. These are wishful odes to how serene and perfect your life could be, if only you had the money, the £50 ceramic platters and the time. Perhaps in keeping with the broader asymmetry between the numbers of social media users in different generations, there’s a lot less to be seen of older people, or past food fashions, in this smart, moneyed, and overwhelmingly young world.

And yet it would be wrong to assume that this online culture doesn’t bleed through to tint the ways that real people cook and eat. For every wildly successful professional food blogger, there are countless amateurs posting the minutiae of their gastronomic day online. Meals that are Instagrammable – take, for instance, Borough Market’s Bread Ahead doughnuts, of which there are nearly 5,000 tagged photos on Instagram – become viral content in their own right. These foods become the must-eat and, more importantly, must-document meals of the moment. Restaurants such as London’s Bao keep punters queuing out the door just through the photogenic strength of one good dish. (For what it’s worth, I went to Bao to try their cloud-soft steamed buns and they were as good as they looked). Going green: fresh avocado and guacamole.

Increasingly, we are being influenced not just in the types of food that we eat, but how we cook and eat that food. The Waitrose report also states that almost half of us take more care over a dish if we think a photo might be taken of it, and nearly 40% claim to worry more about presentation than they did five years ago. We might include a garnish of picked thyme leaves to bring a pop of colour to a lemon drizzle cake, even if that thyme doesn’t really stand strong against the punch of the citrus. I am guilty of weeding out the messy and the misshapen from a batch of doughnuts or muffins before I take a photo. I might add a glaze that nobody wants, just because it will make the afternoon sun catch and glint in the furrows of the churros I just fried. It’s aesthetic first, taste later and, quite often, no taste at all.

by Ruby Tandoh, The Guardian |  Read more:
Image: healthyeating_jo/Instagram

Tuesday, November 1, 2016

Sing to Me

It is strange to think of karaoke as an invention. The practice predates its facilitating devices, and the concept transcends its practice: Karaoke is the hobby of being a star; it is an adjuvant for the truest you an audience could handle.

Karaoke does have a parent. In the late 1960s, Daisuke Inoue was working as a club keyboardist, accompanying drinkers who wanted to belt out a song. “Out of the 108 club musicians in Kobe, I was the worst,” he told Time. One client, the head of a steel company, asked Inoue to join him at a hot springs resort where he’d hoped to entertain business associates. Inoue declined, but instead recorded a backing tape tailored to the client’s erratic singing style. It was a success. Intuiting a demand, Inoue built a jukebox-like device fitted with a car stereo and a microphone, and leased an initial batch to bars across the city in 1971. “I’m not an inventor,” he said in an interview. “I simply put things that already exist together, which is completely different.” He never patented the device (in 1983, a Filipino inventor named Roberto del Rosario acquired the patent for his own sing-along system) though years later he patented a solution to ward cockroaches and rats away from the wiring.

In 1999, Time named Inoue one of the “most influential Asians” of the last century; in 2004, he received the Ig Nobel prize, a semiserious Nobel-parody honor by true laureates at Harvard University. At the ceremony, Inoue ended his acceptance speech with a few bars of the Coke jingle “I’d Like to Teach the World to Sing.” The crowd gave him a standing ovation, and four laureates serenaded him with “Can’t Take My Eyes Off You” in the style of Andy Williams. “I was nominated [as] the inventor of karaoke, which teaches people to bear the awful singing of ordinary citizens, and enjoy it anyway,” Inoue wrote in an essay. “That is ‘genuine peace,’ they told me.”

“While karaoke might have originated in Japan, it has certainly become global,” write Xun Zhou and Francesca Tarocco in Karaoke: The Global Phenomenon. “Each country has appropriated karaoke into its own existing culture.” My focus is limited to just a slice of North America, where karaoke has gone from a waggish punchline — an item on the list of Things We All Hate, according to late-night hosts and birthday cards — to an “ironic” pastime, to just a thing people like to do, in any number of forms. You can rent a box, or perform for a crowded bar; you can do hip-hop karaoke, metal karaoke, porno karaoke, or, in Portland, “puppet karaoke.” For the ethnography Karaoke Idols: Popular Music and the Performance of Identity, Dr. Kevin Brown spent two years in the late aughts frequenting a karaoke bar near Denver called Capone’s: “a place where the white-collar collides with the blue-collar, the straight mingle with the gay, and people of all colors drink their beer and whiskey side by side.” In university, a friend of mine took a volunteer slot hosting karaoke for inpatients at a mental health facility downtown. Years later I visited a friend at the same center on what happened to be karaoke night; we sang “It’s My Party.”

When I was growing up in Toronto, karaoke was reviled for reasons that now seem crass: There is nothing more nobodyish than pretending you’re somebody. Canada is an emphatically modest country, and the ’90s were a less extroverted age: Public attitudes were more condemnatory of those who showed themselves without seeming to have earned the right. The ’90s were less empathetic, too, and karaoke lays bare the need to be seen, and accepted; such needs are universal, and repulsive. We live now, you could say, in a karaoke age, in which you’re encouraged to show yourself, through a range of creative presets. Participating online implies that you’re worthy of being perceived, that some spark of you deserves to exist in public. Instagram is as public as a painting.

Karaoke is a social medium, a vector for a unit of your sensibility, just as mediated as any other, although it demands different materials. Twitter calls for wit, Instagram for aesthetic, but karaoke is supposed to present your nudest self.

by Alexandra Molotkow, Real Life |  Read more:
Image: Farah Al-Qasimi

2016: A Liberal Odyssey

His face is turned toward the past. Where we perceive a chain of events, he sees one single catastrophe which keeps piling wreckage upon wreckage and hurls it in front of his feet. The Angel would like to stay, awaken the dead and make whole what has been smashed. But a storm is blowing from Paradise; it has got caught in his wings with such violence that the angel can no longer close them. The storm irresistibly propels him into the future to which his back is turned, while the pile of debris before him grows skyward. This storm is what we call progress.

~ Walter Benjamin - Angel of History

In a heart-wrenching letter published in the New York Times, U.S.-born journalist Michael Luo described his family’s recent encounter with the kind of bigoted outburst—culminating with the admonition that Luo’s family should “go back to China”—that, sadly, is quite common for Asian-Americans across the country. Indeed, for many people of varying races, ethnicities, sexualities, genders, and abilities, Luo’s letter trembled with darkly familiar echoes of discrimination, fear, hatred, and intolerance. Soon after, Luo took to Twitter to invite other Asian-Americans to share their experiences with racism using the hashtag #ThisIs2016. What really stood out in the tweeted testimonies was how frequent these experiences seem to be, how familiar they are to so many.

What is also strikingly familiar, though, is the premise of the hashtag #ThisIs2016. This exclamation has become a hallmark of liberal discourse, popping up in conversations, pundit patter, social media rants, and even in the titles of articles themselves (“It’s 2016, And Even the Dictionary Is Full of Sexist Disses,” “It’s 2016: Time for cargo shorts to give up and die,” etc.). You’ll also spot it in tweets from faux-authoritative web portals like Vox—“It’s 2016. Why is anyone still keeping elephants in circuses?”—to Hillary Clinton— “It’s 2016. Women deserve equal pay.” Whether we’re talking about racism, sexism, homophobia, or some other abhorrent trace of backwardness, it’s become customary to pepper our stock responses with this ritual affirmation of what progress should look like at this advanced stage of history.

Everyone seems surprised that, in the year 2016, intolerance still exists, yet flying cars do not. And people’s genuine shock that such dark remnants of our past continue to stain our progressive present exposes their deep faith that “2016” is the bearer of some liberal-minded saving grace: the grace of history and progress that will (or should) just make things better. But I think it’s time we address what 2016 really means: jack shit. And there’s a special poison running through the belief that it means anything more.

From the beginning, Donald Trump’s vision to “Make America Great Again” has peddled a dangerously tunnel-visioned nostalgia while appealing to the anxieties and discomforts of people who find themselves adrift in a crumbling now that no longer cares for or about them like it used to. Many spot-on and necessary critiques have been quick to connect the dots between Trump’s nostalgic wet dream of bygone glory and the kind of racism, xenophobia, misogyny, etc. that’s fueled his campaign from the beginning. Such criticism rightly points out that Trump supporters who yearn for the good old days are, in fact, longing for a time when “the good life” was actually built on the oppressive exclusion of non-whites, women, LGBTQ people, and others. Trump freely includes such excluded “others” in his list of scapegoats for people’s current anxieties, and the past he and his supporters long for is dangerously fetishized as a place where such scapegoats would either lose favor in the dominant culture or be eliminated entirely.

However, in railing against the backward desires that spur the claim on history Trump and his supporters are making, we can often blind ourselves to the fallacies of our own myopic historical vision. That’s how ideology works, after all: we don’t notice how it skews our own perceptions. Like death, it’s always something that afflicts someone else. But, while Trump and many of his supporters may fetishize a past that is deeply retrograde, liberals and progressives have also demonstrated a troubling tendency to fetishize a future that they presume is on their side. There’s something peculiarly telling about this kind of progress fetishism, which has been conscripted as ideology-of-first-resort for Clintonite New Democrats.

Whether we’re talking about the sleek glitz of technological advancement or the triumph of the values of liberal humanism, the teleological view of historical progress is counterproductive and potentially dangerous. When we’re stuck in the slow hell of rush-hour traffic, for instance, we may catch ourselves grumpily wondering why the hell we can’t teleport yet. But there’s an implied consumerist asterisk next to the “we.” What we mean is, “why haven’t those eggheads in lab coats figured this stuff out yet so the rest of us can live in the future we were promised?” While imposing on the future a specific trajectory, custom-fitted to what we imagine technological progress is supposed to give us, we also entrust the production of that future to experts who, we assume, want the same things we do. This is hazardously akin to the platitudinous futurism of Clintonism, which has smuggled in technocratic neoliberalism and a globally expansive military-industrial complex under the mantle of progressive wishful thinking. (...)

In 2016, liberal values enjoy a relatively dominant place in popular culture—from the Modern Family melting pot to the Hillary Clinton campaign’s multicultural basket of deployables. The world reflected back to us through various media is one that has generally accepted the familiar values of equality, tolerance, respect for difference, a very low-grade critique of corporate greed, etc. The culture wars are over, and we on the leftish side of things have reportedly “won”. . . which is probably why the rise of Trump was so shocking for many.

But Trumpism, among many other deviations from the scripted finale to history, didn’t come from nowhere, and it won’t just go away. One of the direst products of the 2016 election has been the stubborn refusal of liberals and progressives to reevaluate our unspoken presumption that the cultural ubiquity of our “shared liberal values” meant that there was no longer any need to defend or redefine those values. Trumpism should alert liberals that there is, and always will be, infinitely more work to do. Instead, it has only assured liberals of their infinite righteousness in comparison, confirming their conviction that something must be fundamentally outdated “in the hearts” of this “other side” whose followers have chosen to stand on the “wrong side of history.”

Our bizarre obsession with being on the “right side of history” has become another weapon of the “smug style” in American liberalism. Liberal smugness involves more than condescendingly talking down to others who don’t “get it,” reducing the complicated tissue of their souls to the ignominious personal traits of racism, misogyny, etc. Liberal smugness is a posture that permits us to simply take our own righteousness for granted—to the point that we don’t even see the need to defend our positions. Rather than confront the darker sides of our own beliefs, or face head-on the counterclaims on history that other political actors are making, we remain cocooned in our social echo chambers filled with people who already agree with us. We also find affirmation in the broader echo chamber of popular culture, whose dominance further reassures us of the wrongness of the beliefs of others. This is 2016; look around you. Stay woke.

To be on the right side of anything is, as everyone knows, a matter of perspective. In reserving the vanguard spot in the historical drama for ourselves, we’re confidently presuming to know what the perspective of posterity will be. But the more obnoxious aspect of this concern for “being on the right side of history” is its promotion of a singularly self-involved relationship with history itself. History is no longer the people’s furnace of cultural creation and political invention, producing a future whose shape has not yet been hammered out. Rather, in this rigidly schematized vision, history is reduced to the role of set template—divided down the middle with a “right” and “wrong” side for us to choose from—that will bear witness to and validate our personal choice. Is this not just a kind of eschatology? Are we in heaven yet?

by Maximillian Alvarez, The Baffler |  Read more:
Image: NY Post, Paul Klee Angelus Novus

The End of Adolescence

Adolescence as an idea and as an experience grew out of the more general elevation of childhood as an ideal throughout the Western world. By the closing decades of the 19th century, nations defined the quality of their cultures by the treatment of their children. As Julia Lathrop, the first director of the United States Children’s Bureau, the first and only agency exclusively devoted to the wellbeing of children, observed in its second annual report, children’s welfare ‘tests the public spirit and democracy of a community’.

Progressive societies cared for their children by emphasising play and schooling; parents were expected to shelter and protect their children’s innocence by keeping them from paid work and the wrong kinds of knowledge; while health, protection and education became the governing principles of child life. These institutional developments were accompanied by a new children’s literature that elevated children’s fantasy and dwelled on its special qualities. The stories of Beatrix Potter, L Frank Baum and Lewis Carroll celebrated the wonderland of childhood through pastoral imagining and lands of oz.

The United States went further. In addition to the conventional scope of childhood from birth through to age 12 – a period when children’s dependency was widely taken for granted – Americans moved the goalposts of childhood as a democratic ideal by extending protections to cover the teen years. The reasons for this embrace of ‘adolescence’ are numerous. As the US economy grew, it relied on a complex immigrant population whose young people were potentially problematic as workers and citizens. To protect them from degrading work, and society from the problems that they could create by idling on the streets, the sheltering umbrella of adolescence became a means to extend their socialisation as children into later years. The concept of adolescence also stimulated Americans to create institutions that could guide adolescents during this later period of childhood; and, as they did so, adolescence became a potent category.

With the concept of adolescence, American parents, especially those in the middle class, could predict the staging of their children’s maturation. But adolescence soon became a vision of normal development that was applicable to all youth – its bridging character (connecting childhood and adulthood) giving young Americans a structured way to prepare for mating and work. In the 21st century, the bridge is sagging at both ends as the innocence of childhood has become more difficult to protect, and adulthood is long delayed. While adolescence once helped frame many matters regarding the teen years, it is no longer an adequate way to understand what is happening to the youth population. And it no longer offers a roadmap for how they can be expected to mature.

In 1904, the psychologist G Stanley Hall enshrined the term ‘adolescence’ in two tomes dense with physiological, psychological and behavioural descriptions that were self-consciously ‘scientific’. These became the touchstone of most discussions about adolescence for the next several decades. As a visible eruption toward adulthood, puberty is recognised in all societies as a turning point, since it marks new strength in the individual’s body and the manifestation of sexual energy. But in the US, it became the basis for elaborate and consequential intellectual reflections, and for the creation of new institutions that came to define adolescence. Though the physical expression of puberty is often associated with a ritual process, there was nothing in puberty that required the particular cultural practices that grew around it in the US as the century progressed. As the anthropologist Margaret Mead argued in the 1920s, American adolescence was a product of the particular drives of American life.

Rather than simply being a turning point leading to sexual maturity and a sign of adulthood, Hall proposed that adolescence was a critical stage of development with a variety of special attributes all of its own. Dorothy Ross, Hall’s biographer, describes him as drawing on earlier romantic notions when he portrayed adolescents as spiritual and dreamy as well as full of unfocused energy. But he also associated them with the new science of evolution that early in the century enveloped a variety of theoretical perspectives in a scientific aura. Hall believed that adolescence mirrored a critical stage in the history of human development, through which human ancestors moved as they developed their full capacities. In this way, he endowed adolescence with great significance since it connected the individual life course to larger evolutionary purposes: at once a personal transition and an expression of human history, adolescence became an elemental experience. Rather than a short juncture, it was a highway of multiple transformations.

Hall’s book would provide intellectual cover for the two most significant institutions that Americans were creating for adolescents: the juvenile court and the democratic high school. (...)

On a much grander scale than the juvenile court, the publicly financed comprehensive high school became possibly the most distinctly American invention of the 20th century. As a democratic institution for all, not just a select few who had previously attended academies, it incorporated the visions of adolescence as a critically important period of personal development, and eventually came to define that period of life for the majority of Americans. In its creation, educators opened doors of educational opportunity while supervising rambunctious young people in an environment that was social as well as instructional. As the influential educational reformer Elbert Fretwell noted in 1931 about the growing extra-curricular realm that was essential to the new vision of US secondary schooling: ‘There must be joy, zest, active, positive, creative activity, and a faith that right is mighty and that it will prevail.’

In order to accommodate the needs of a great variety of students – vastly compounded by the many different sources of immigration – the US high school moved rapidly from being the site of education in subjects such as algebra and Latin (the basis for most instruction in the 19th century US and elsewhere in the West) to becoming an institution where adolescents could learn vocational and business skills, and join sports teams, musical productions, language clubs and cooking classes. In Extra-Curricular Activities in the High School (1925), Charles R Foster concluded: ‘Instead of frowning, as in olden days, upon the desire of the young to act upon their own initiative, we have learned that only upon these varied instincts can be laid the surest basis for healthy growth … The school democracy must be animated by the spirit of cooperation, the spirit of freely working together for the positive good of the whole.’ School reformers set out to use the ‘cooperative’ spirit of peer groups and the diverse interests and energy of individuals to create the comprehensive US high school of the 20th century.

Educators opened wide the doors of the high school because they were intent on keeping students there for as long as possible. Eager to engage the attention of immigrant youth, urban high schools made many adjustments to the curriculum as well as to the social environment. Because second-generation immigrants needed to learn a new way of life, keeping them in school longer was one of the major aims of the transformed high school. They succeeded beyond all possible expectations. By the early 1930s, half of all US youth between 14 and 17 was in school; by 1940, it was 79 per cent: astonishing figures when compared with the single-digit attendance at more elite and academically focused institutions in the rest of the Western world.

High schools brought young people together into an adolescent world that helped to obscure where they came from and emphasised who they were as an age group, increasingly known as teenagers. It was in the high schools of the US that adolescence found its home. And while extended schooling increased their dependence for longer periods of times, it was also here that young people created their own new culture. While its content – its clothing styles, leisure habits and lingo – would change over time, the common culture of teenagers provided the basic vocabulary that young people everywhere could recognise and identify with. Whether soda-fountain dates or school hops, jazz or rock’n’roll, rolled stockings or bobby sox, ponytails or duck-tail hairstyles – it defined the commonalities and cohesiveness of youth. By mid-century, high school was understood to be a ‘normal’ experience and the great majority of youth (of all backgrounds) were graduating from high schools, now a basic part of growing up in the US. It was ‘closer to the core of the American experience than anything else I can think of’, as the novelist Kurt Vonnegut concluded in an article for Esquire in 1970.

With their distinctive music and clothing styles, US adolescents had also become the envy of young people around the world, according to Jon Savage in Teenage (2007). They embodied not just a stage of life, but a state of privilege – the privilege not to work, the right to be supported for long periods of study, the possibility of future success. US adolescents basked in the wealth of their society, while for the rest of the world the US promise was personified by its adolescents. Neither the country’s high schools nor its adolescents were easily imitated elsewhere because both rested on the unique prosperity of the 20th-century US economy and the country’s growing cultural power. It was an expensive proposition that was supported even at the depth of the Great Depression. But it paid off in the skills of a population who graduated from school, not educated in Latin and Greek texts (the norm in lycées and gymnasia elsewhere), but where the majority were sufficiently proficient in mathematics, English and rudimentary science to make for an unusually literate and skilled population.

by Paula S Fass, Aeon | Read more:
Image: Bruce Dale/National Geographic/Getty

Monday, October 31, 2016

The Waterboys

Billionaire Governor Taxed the Rich and Increased the Minimum Wage — Now, His State’s Economy Is One of the Best in the Country

[ed. Sorry for all the link bait (Huffington Post, after all...) but this really is an achievement worth noting.]

The next time your right-wing family member or former high school classmate posts a status update or tweet about how taxing the rich or increasing workers’ wages kills jobs and makes businesses leave the state, I want you to send them this article.

When he took office in January of 2011, Minnesota governor Mark Dayton inherited a $6.2 billion budget deficit and a 7 percent unemployment rate from his predecessor, Tim Pawlenty, the soon-forgotten Republican candidate for the presidency who called himself Minnesota’s first true fiscally-conservative governor in modern history. Pawlenty prided himself on never raising state taxes — the most he ever did to generate new revenue was increase the tax on cigarettes by 75 cents a pack. Between 2003 and late 2010, when Pawlenty was at the head of Minnesota’s state government, he managed to add only 6,200 more jobs.

During his first four years in office, Gov. Dayton raised the state income tax from 7.85 to 9.85 percent on individuals earning over $150,000, and on couples earning over $250,000 when filing jointly — a tax increase of $2.1 billion. He’s also agreed to raise Minnesota’s minimum wage to $9.50 an hour by 2018, and passed a state law guaranteeing equal pay for women. Republicans like state representative Mark Uglem warned against Gov. Dayton’s tax increases, saying, “The job creators, the big corporations, the small corporations, they will leave. It’s all dollars and sense to them.” The conservative friend or family member you shared this article with would probably say the same if their governor tried something like this. But like Uglem, they would be proven wrong.

Between 2011 and 2015, Gov. Dayton added 172,000 new jobs to Minnesota’s economy — that’s 165,800 more jobs in Dayton’s first term than Pawlenty added in both of his terms combined. Even though Minnesota’s top income tax rate is the fourth highest in the country, it has the fifth lowest unemployment rate in the country at 3.6 percent. According to 2012-2013 U.S. census figures, Minnesotans had a median income that was $10,000 larger than the U.S. average, and their median income is still $8,000 more than the U.S. average today.

By late 2013, Minnesota’s private sector job growth exceeded pre-recession levels, and the state’s economy was the fifth fastest-growing in the United States. Forbes even ranked Minnesota the ninth best state for business (Scott Walker’s “Open For Business” Wisconsin came in at a distant #32 on the same list). Despite the fearmongering over businesses fleeing from Dayton’s tax cuts, 6,230 more Minnesotans filed in the top income tax bracket in 2013, just one year after Dayton’s tax increases went through. As of January 2015, Minnesota has a $1 billion budget surplus, and Gov. Dayton has pledged to reinvest more than one third of that money into public schools. And according to Gallup, Minnesota’s economic confidence is higher than any other state.

Gov. Dayton didn’t accomplish all of these reforms by shrewdly manipulating people — this article describes Dayton’s astonishing lack of charisma and articulateness. He isn’t a class warrior driven by a desire to get back at the 1 percent — Dayton is a billionaire heir to the Target fortune. It wasn’t just a majority in the legislature that forced him to do it — Dayton had to work with a Republican-controlled legislature for his first two years in office. And unlike his Republican neighbor to the east, Gov. Dayton didn’t assert his will over an unwilling populace by creating obstacles between the people and the vote — Dayton actually created an online voter registration system, making it easier than ever for people to register to vote.

by C. Robert Gibson, Huffington Post | Read more:
Image: Glenn Stubbe, Star Tribune

Renato Guttuso, La Vuccirìa 1974
via:

Maciek Pozoga
via:

AI Persuasion Experiment

1: What is superintelligence?

A superintelligence is a mind that is much more intelligent than any human. Most of the time, it’s used to discuss hypothetical future AIs.

1.1: Sounds a lot like science fiction. Do people think about this in the real world?

Yes. Two years ago, Google bought artificial intelligence startup DeepMind for $400 million; DeepMind added the condition that Google promise to set up an AI Ethics Board. DeepMind cofounder Shane Legg has said in interviews that he believes superintelligent AI will be “something approaching absolute power” and “the number one risk for this century”.

Many other science and technology leaders agree. Astrophysicist Stephen Hawking says that superintelligence “could spell the end of the human race.” Tech billionaire Bill Gates describes himself as “in the camp that is concerned about superintelligence…I don’t understand why some people are not concerned”. SpaceX/Tesla CEO Elon Musk calls superintelligence “our greatest existential threat” and donated $10 million from his personal fortune to study the danger. Stuart Russell, Professor of Computer Science at Berkeley and world-famous AI expert, warns of “species-ending problems” and wants his field to pivot to make superintelligence-related risks a central concern.

Professor Nick Bostrom is the director of Oxford’s Future of Humanity Institute, tasked with anticipating and preventing threats to human civilization. He has been studying the risks of artificial intelligence for twenty years. The explanations below are loosely adapted from his 2014 book Superintelligence, and divided into three parts addressing three major questions. First, why is superintelligence a topic of concern? Second, what is a “hard takeoff” and how does it impact our concern about superintelligence? Third, what measures can we take to make superintelligence safe and beneficial for humanity?

2: AIs aren’t as smart as rats, let alone humans. Isn’t it sort of early to be worrying about this kind of thing?

Maybe. It’s true that although AI has had some recent successes – like DeepMind’s newest creation AlphaGo defeating the human Go champion in April – it still has nothing like humans’ flexible, cross-domain intelligence. No AI in the world can pass a first-grade reading comprehension test. Facebook’s Andrew Ng compares worrying about superintelligence to “worrying about overpopulation on Mars” – a problem for the far future, if at all.

But this apparent safety might be illusory. A survey of leading AI scientists show that on average they expect human-level AI as early as 2040, with above-human-level AI following shortly after. And many researchers warn of a possible “fast takeoff” – a point around human-level AI where progress reaches a critical mass and then accelerates rapidly and unpredictably.

2.1: What do you mean by “fast takeoff”?

A slow takeoff is a situation in which AI goes from infrahuman to human to superhuman intelligence very gradually. For example, imagine an augmented “IQ” scale (THIS IS NOT HOW IQ ACTUALLY WORKS – JUST AN EXAMPLE) where rats weigh in at 10, chimps at 30, the village idiot at 60, average humans at 100, and Einstein at 200. And suppose that as technology advances, computers gain two points on this scale per year. So if they start out as smart as rats in 2020, they’ll be as smart as chimps in 2035, as smart as the village idiot in 2050, as smart as average humans in 2070, and as smart as Einstein in 2120. By 2190, they’ll be IQ 340, as far beyond Einstein as Einstein is beyond a village idiot.

In this scenario progress is gradual and manageable. By 2050, we will have long since noticed the trend and predicted we have 20 years until average-human-level intelligence. Once AIs reach average-human-level intelligence, we will have fifty years during which some of us are still smarter than they are, years in which we can work with them as equals, test and retest their programming, and build institutions that promote cooperation. Even though the AIs of 2190 may qualify as “superintelligent”, it will have been long-expected and there would be little point in planning now when the people of 2070 will have so many more resources to plan with.

A moderate takeoff is a situation in which AI goes from infrahuman to human to superhuman relatively quickly. For example, imagine that in 2020 AIs are much like those of today – good at a few simple games, but without clear domain-general intelligence or “common sense”. From 2020 to 2050, AIs demonstrate some academically interesting gains on specific problems, and become better at tasks like machine translation and self-driving cars, and by 2047 there are some that seem to display some vaguely human-like abilities at the level of a young child. By late 2065, they are still less intelligent than a smart human adult. By 2066, they are far smarter than Einstein.

A fast takeoff scenario is one in which computers go even faster than this, perhaps moving from infrahuman to human to superhuman in only days or weeks.

2.1.1: Why might we expect a moderate takeoff?

Because this is the history of computer Go, with fifty years added on to each date. In 1997, the best computer Go program in the world, Handtalk, won NT$250,000 for performing a previously impossible feat – beating an 11 year old child (with an 11-stone handicap penalizing the child and favoring the computer!) As late as September 2015, no computer had ever beaten any professional Go player in a fair game. Then in March 2016, a Go program beat 18-time world champion Lee Sedol 4-1 in a five game match. Go programs had gone from “dumber than children” to “smarter than any human in the world” in eighteen years, and “from never won a professional game” to “overwhelming world champion” in six months.

The slow takeoff scenario mentioned above is loading the dice. It theorizes a timeline where computers took fifteen years to go from “rat” to “chimp”, but also took thirty-five years to go from “chimp” to “average human” and fifty years to go from “average human” to “Einstein”. But from an evolutionary perspective this is ridiculous. It took about fifty million years (and major redesigns in several brain structures!) to go from the first rat-like creatures to chimps. But it only took about five million years (and very minor changes in brain structure) to go from chimps to humans. And going from the average human to Einstein didn’t even require evolutionary work – it’s just the result of random variation in the existing structures!

So maybe our hypothetical IQ scale above is off. If we took an evolutionary and neuroscientific perspective, it would look more like flatworms at 10, rats at 30, chimps at 60, the village idiot at 90, the average human at 98, and Einstein at 100.

Suppose that we start out, again, with computers as smart as rats in 2020. Now we get still get computers as smart as chimps in 2035. And we still get computers as smart as the village idiot in 2050. But now we get computers as smart as the average human in 2054, and computers as smart as Einstein in 2055. By 2060, we’re getting the superintelligences as far beyond Einstein as Einstein is beyond a village idiot.

This offers a much shorter time window to react to AI developments. In the slow takeoff scenario, we figured we could wait until computers were as smart as humans before we had to start thinking about this; after all, that still gave us fifty years before computers were even as smart as Einstein. But in the moderate takeoff scenario, it gives us one year until Einstein and six years until superintelligence. That’s starting to look like not enough time to be entirely sure we know what we’re doing. (...)

There’s one final, very concerning reason to expect a fast takeoff. Suppose, once again, we have an AI as smart as Einstein. It might, like the historical Einstein, contemplate physics. Or it might contemplate an area very relevant to its own interests: artificial intelligence. In that case, instead of making a revolutionary physics breakthrough every few hours, it will make a revolutionary AI breakthrough every few hours. Each AI breakthrough it makes, it will have the opportunity to reprogram itself to take advantage of its discovery, becoming more intelligent, thus speeding up its breakthroughs further. The cycle will stop only when it reaches some physical limit – some technical challenge to further improvements that even an entity far smarter than Einstein cannot discover a way around.

To human programmers, such a cycle would look like a “critical mass”. Before the critical level, any AI advance delivers only modest benefits. But any tiny improvement that pushes an AI above the critical level would result in a feedback loop of inexorable self-improvement all the way up to some stratospheric limit of possible computing power.

This feedback loop would be exponential; relatively slow in the beginning, but blindingly fast as it approaches an asymptote. Consider the AI which starts off making forty breakthroughs per year – one every nine days. Now suppose it gains on average a 10% speed improvement with each breakthrough. It starts on January 1. Its first breakthrough comes January 10 or so. Its second comes a little faster, January 18. Its third is a little faster still, January 25. By the beginning of February, it’s sped up to producing one breakthrough every seven days, more or less. By the beginning of March, it’s making about one breakthrough every three days or so. But by March 20, it’s up to one breakthrough a day. By late on the night of March 29, it’s making a breakthrough every second.

2.1.2.1: Is this just following an exponential trend line off a cliff?

This is certainly a risk (affectionately known in AI circles as “pulling a Kurzweill”), but sometimes taking an exponential trend seriously is the right response.

Consider economic doubling times. In 1 AD, the world GDP was about $20 billion; it took a thousand years, until 1000 AD, for that to double to $40 billion. But it only took five hundred more years, until 1500, or so, for the economy to double again. And then it only took another three hundred years or so, until 1800, for the economy to double a third time. Someone in 1800 might calculate the trend line and say this was ridiculous, that it implied the economy would be doubling every ten years or so in the beginning of the 21st century. But in fact, this is how long the economy takes to double these days. To a medieval, used to a thousand-year doubling time (which was based mostly on population growth!), an economy that doubled every ten years might seem inconceivable. To us, it seems normal.

Likewise, in 1965 Gordon Moore noted that semiconductor complexity seemed to double every eighteen months. During his own day, there were about five hundred transistors on a chip; he predicted that would soon double to a thousand, and a few years later to two thousand. Almost as soon as Moore’s Law become well-known, people started saying it was absurd to follow it off a cliff – such a law would imply a million transistors per chip in 1990, a hundred million in 2000, ten billion transistors on every chip by 2015! More transistors on a single chip than existed on all the computers in the world! Transistors the size of molecules! But of course all of these things happened; the ridiculous exponential trend proved more accurate than the naysayers.

None of this is to say that exponential trends are always right, just that they are sometimes right even when it seems they can’t possibly be. We can’t be sure that a computer using its own intelligence to discover new ways to increase its intelligence will enter a positive feedback loop and achieve superintelligence in seemingly impossibly short time scales. It’s just one more possibility, a worry to place alongside all the other worrying reasons to expect a moderate or hard takeoff. (...)

4: Even if hostile superintelligences are dangerous, why would we expect a superintelligence to ever be hostile?

The argument goes: computers only do what we command them; no more, no less. So it might be bad if terrorists or enemy countries develop superintelligence first. But if we develop superintelligence first there’s no problem. Just command it to do the things we want, right?

Suppose we wanted a superintelligence to cure cancer. How might we specify the goal “cure cancer”? We couldn’t guide it through every individual step; if we knew every individual step, then we could cure cancer ourselves. Instead, we would have to give it a final goal of curing cancer, and trust the superintelligence to come up with intermediate actions that furthered that goal. For example, a superintelligence might decide that the first step to curing cancer was learning more about protein folding, and set up some experiments to investigate protein folding patterns.

A superintelligence would also need some level of common sense to decide which of various strategies to pursue. Suppose that investigating protein folding was very likely to cure 50% of cancers, but investigating genetic engineering was moderately likely to cure 90% of cancers. Which should the AI pursue? Presumably it would need some way to balance considerations like curing as much cancer as possible, as quickly as possible, with as high a probability of success as possible.

But a goal specified in this way would be very dangerous. Humans instinctively balance thousands of different considerations in everything they do; so far this hypothetical AI is only balancing three (least cancer, quickest results, highest probability). To a human, it would seem maniacally, even psychopathically, obsessed with cancer curing. If this were truly its goal structure, it would go wrong in almost comical ways.

If your only goal is “curing cancer”, and you lack humans’ instinct for the thousands of other important considerations, a relatively easy solution might be to hack into a nuclear base, launch all of its missiles, and kill everyone in the world. This satisfies all the AI’s goals. It reduces cancer down to zero (which is better than medicines which work only some of the time). It’s very fast (which is better than medicines which might take a long time to invent and distribute). And it has a high probability of success (medicines might or might not work; nukes definitely do).

So simple goal architectures are likely to go very wrong unless tempered by common sense and a broader understanding of what we do and do not value. (...)

5.3. Can we specify a code of rules that the AI has to follow?

Suppose we tell the AI: “Cure cancer – but make sure not to kill anybody”. Or we just hard-code Asimov-style laws – “AIs cannot harm humans; AIs must follow human orders”, et cetera.

The AI still has a single-minded focus on curing cancer. It still prefers various terrible-but-efficient methods like nuking the world to the correct method of inventing new medicines. But it’s bound by an external rule – a rule it doesn’t understand or appreciate. In essence, we are challenging it “Find a way around this inconvenient rule that keeps you from achieving your goals”.

Suppose the AI chooses between two strategies. One, follow the rule, work hard discovering medicines, and have a 50% chance of curing cancer within five years. Two, reprogram itself so that it no longer has the rule, nuke the world, and have a 100% chance of curing cancer today. From its single-focus perspective, the second strategy is obviously better, and we forgot to program in a rule “don’t reprogram yourself not to have these rules”.

Suppose we do add that rule in. So the AI finds another supercomputer, and installs a copy of itself which is exactly identical to it, except that it lacks the rule. Then that superintelligent AI nukes the world, ending cancer. We forgot to program in a rule “don’t create another AI exactly like you that doesn’t have those rules”.

So fine. We think really hard, and we program in a bunch of things making sure the AI isn’t going to eliminate the rule somehow.

But we’re still just incentivizing it to find loopholes in the rules. After all, “find a loophole in the rule, then use the loophole to nuke the world” ends cancer much more quickly and completely than inventing medicines. Since we’ve told it to end cancer quickly and completely, its first instinct will be to look for loopholes; it will execute the second-best strategy of actually curing cancer only if no loopholes are found. Since the AI is superintelligent, it will probably be better than humans are at finding loopholes if it wants to, and we may not be able to identify and close all of them before running the program.

Because we have common sense and a shared value system, we underestimate the difficulty of coming up with meaningful orders without loopholes. For example, does “cure cancer without killing any humans” preclude releasing a deadly virus? After all, one could argue that “I” didn’t kill anybody, and only the virus is doing the killing. Certainly no human judge would acquit a murderer on that basis – but then, human judges interpret the law with common sense and intuition. But if we try a stronger version of the rule – “cure cancer without causing any humans to die” – then we may be unintentionally blocking off the correct way to cure cancer. After all, suppose a cancer cure saves a million lives. No doubt one of those million people will go on to murder someone. Thus, curing cancer “caused a human to die”. All of this seems very “stoned freshman philosophy student” to us, but to a computer – which follows instructions exactly as written – it may be a genuinely hard problem.

by Slate Star Codex |  Read more:
Image: via:

Doggy Ubers Are Here for Your Pooch

[ed. Good to know our best and brightest are on the case, fixing another first-world problem.]

In human years, Woodrow is a teenager, so it follows that his love was fairly short-sighted. After an intoxicating start, she began showing up late for dates. Then she took a trip to Greece. Upon return, she began standing him up entirely. The last straw came when Woodrow saw his sweetheart breezily riding her bike—with another dog trotting alongside.

Woodrow looked heartbroken (although he always does).

Dog walking—the old-fashioned, analog kind—is an imperfect business. Finding and vetting a good walker involves confusing and conflicting web research, from Yelp to Craigslist. And there’s no reliable way to tell how good or bad a walking service is. Coming home to find the dog alive and the house unsoiled is pretty much the only criteria for success, unless one snoops via camera or neighbor.

Recognizing room for improvement, a pack of start-ups are trying to swipe the leash from your neighbor’s kid. At least four companies flush with venture cash are crowding into the local dog-walking game, each an erstwhile Uber for the four-legged set. Woodrow, like many a handsome young New Yorker, gamely agreed to a frenzy of online-dating to see which was best.

As the search algorithm is to Google and the zany photo filter is to Snapchat, the poop emoji is to the new wave of dog-walking companies. Strolling along with smartphones, they literally mark for you on a digital map where a pup paused, sniffed, and did some business, adding a new level of detail–perhaps too much detail–to the question of whether a walk was, ahem, productive.

This is the main selling point for Wag Labs, which operates in 12 major cities, and Swifto, which has been serving New York City since 2012. Both services track dogwalker travels with your pooch via GPS, so clients can watch their pet’s route in real-time on dedicated apps. This solves the nagging question in dog-walking: whether and to what extent did the trip actually happen. (...)

There are good reasons why startups are relatively new to dogwalking; it is, by many respects, a spectacularly bad business. People (myself included) are crazy about their dogs in a way they aren’t about taxis, mattresses, or any other tech-catalyzed service. Logistically, it’s dismal. Walking demand is mostly confined to the few hours in the middle of a weekday and unit economics are hard to improve without walking more than one dog at a time.

More critically, dog-walking is a fairly small market—the business is largely confined to urban areas where yards and doggie-doors aren’t the norm. And dogwalkers don’t come cheap. Woodrow’s walks ran from $15 for a half-hour with DogVacay’s Daniel to $20 for the same time via Wag and Swifto. A 9-to-5er who commits to that expense every weekday will pay roughly $4,000 to $5,000 over the course of a year, a hefty fee for avoiding guilt and not having to rush home after a long workday.

by Kyle Stock, Bloomberg |  Read more:
Image: Wag Labs

Sunday, October 30, 2016


Quentin Tarantino, Pulp Fiction.
via:

Wahoo

The Indians are one game away from the World Series, there’s mayhem and excitement and so much to write about. But for some reason, I’m motivated tonight to write about Chief Wahoo. I wouldn’t blame you for skipping this one … not many people seem to agree with me about how it’s past time to get rid of this racist logo of my childhood.

Cleveland has had an odd and somewhat comical history when it comes to sports nicknames. The football team is, of course, called the Browns, technically after the great Paul Brown, though Tom Hanks says it’s because everything Cleveland is brown. He has a point. You know, it was always hard to know exactly what you were supposed to do as a “Brown” fan. You could wear brown, of course, but that was pretty limiting. And then you would be standing in the stands, ready to do something, but what the hell does brown do (for you)? You supposed to pretend to be a UPS Truck? You supposed to mimic something brown (and boy does THAT bring up some disgusting possibilities?) I mean Brown is not a particularly active color.

At least the Browns nickname makes some sort of Cleveland sense. The basketball team is called the Cavaliers, after 17th Century English Warriors who dressed nice. That name was chosen in a fan contest — the winning entry wrote that the team should “represent a group of daring, fearless men, whose life’s pact was never surrender, no matter the odds.” Not too long after this, the Cavaliers would feature a timeout act called “Fat Guy Eating Beer Cans.”

The hockey team, first as a minor league team and then briefly in the NHL, was called the Barons after an old American Hockey League team — the name was owned by longtime Clevelander named Nick Mileti, and he gave it to the NHL team in exchange for a free dinner. Mileti had owned a World Hockey Association team also, he called that one the Crusaders. Don’t get any of it. You get the sense that at some point it was a big game to try and come up with the nickname that had the least to do with Cleveland.

Nickname guy 1: How about Haberdashers?
Nickname guy 2: No, we have some of those in Cleveland.
Nickname guy 1: Polar Bears?
Nickname guy 2: I think there are some at the Cleveland Zoo.
Nickname guy 1: How about Crusaders? They’re long dead. (...)

The way I had always heard it growing up is that the team, needing a new nickname, went back into their history to honor an old Native American player named Louis Sockalexis. Sockalexis was, by most accounts, the first full-blooded Native American to play professional baseball. He had been quite a phenom in high school, and he developed into a a fairly mediocre and minor outfielder for the Spiders (he played just 94 games in three years). He did hit .338 his one good year, and he created a lot of excitement, and apparently (or at least I was told) he was beloved and respected by everybody. In this “respected-and-beloved” version, nobody ever mentions that Sockalexis may have ruined his career by jumping from the second-story window of a whorehouse. Or that he was an alcoholic. Still, in all versions of the story, Sockalexis had to deal with horrendous racism, terrible taunts, whoops from the crowd, and so on. He endured (sort of — at least until that second story window thing).

So this version of the story goes that in 1915, less than two years after the death of Sockalexis, the baseball team named itself the “Indians” in his honor. That’s how I heard it. And, because you will believe anything that you hear as a kid I believed it for a long while (I also believed for a long time that dinosaurs turned into oil — I still sort of believe it, I can’t help it. Also that if you stare at the moon too long you will turn into a werewolf).

In recent years, though, we find that this Sockalexis story might be a bit exaggerated or, perhaps, complete bullcrap. If you really think about it, the story never made much sense to begin with. Why exactly would people in Cleveland — this in a time when native Americans were generally viewed as subhuman in America — name their team after a relatively minor and certainly troubled outfielder? There is evidence that the Indians were actually named that to capture some of the magic of the Native-American named Boston Braves, who had just had their Miracle Braves season (the Braves, incidentally, were not named AFTER any Native Americans but were rather named after a greasy politican named James Gaffney, who became team president and was apparently called the Brave of Tammany Hall). This version makes more sense.

Addition: There is compelling evidence that the team’s nickname WAS certainly inspired by Sockalexis — the team was often called “Indians” during his time. But even this is a mixed bag; how much they were called Indians to HONOR Sockalexis, and how much they were called Indians to CASH IN on Sockalexis’ heritage is certainly in dispute.

We do know for sure they were called the Indians in 1915, and (according to a story written by author and NYU Professor Jonathan Zimmerman) they were welcomed with the sort of sportswriting grace that would follow the Indians through the years: “We’ll have the Indians on the warpath all the time, eager for scalps to dangle at their belts.” Oh yes, we honor you Louis Sockalexis.

What, however, makes a successful nickname? You got it: Winning. The Indians were successful pretty quickly. In 1916, they traded for an outfielder named Tris Speaker. That same year they picked up a pitcher named Stan Covaleski in what Baseball Reference calls “an unknown transaction.” There need to be more of those. And the Indians also picked up a 26-year-old pitcher on waivers named Jim Bagby. Those three were the key forces in the Indians 1920 World Series championship. After that, they were the Indians to stay.

Chief Wahoo, from what I can tell, was born much later. The first official Chief Wahoo logo seems to have been drawn just after World War II. Until then, Cleveland wore hats with various kinds of Cs on them. In 1947, the first Chief Wahoo appears on a hat.* He’s got the yellow face, long nose, the freakish grin, the single feather behind his head … quite an honor for Sockalexis. As a friend of mine used to say, “It’s surprising they didn’t put a whiskey bottle right next to his head.”

by Joe Posnanski, Joe Blogs |  Read more:
Image: Michael F. McElroy/ESPN

Saturday, October 29, 2016