Monday, March 21, 2016

Traditional Economics Failed. Here’s a New Blueprint.

Politics in a democracy can be understood in many ways, but on one level it is the expression of where people believe their self-interest lies— that is to say, “what is good for me?” Even when voters vote according to primal affinities or fears rather than economic advantage (as Thomas Frank, in What’s the Matter With Kansas?, lamented of poor whites who vote Republican), it is because they’ve come to define self-interest more in terms of those primal identities than in terms of dollars and cents.

This is not proof of the stupidity of such voters. It is proof of the malleability and multidimensionality of self-interest. While the degree to which human beings pursue that which they think is good for them has not changed and probably never will, what they believe is good for them can change and from time to time has, radically.

We assert a simple proposition: that fundamental shifts in popular understanding of how the world works necessarily produce fundamental shifts in our conception of self-interest, which in turn necessarily produce fundamental shifts in how we think to order our societies.

Consider for a moment this simple example:

For the overwhelming majority of human history, people looked up into the sky and saw the sun, moon, stars, and planets revolve around the earth. This bedrock assumption based on everyday observation framed our self-conception as a species and our interpretation of everything around us.

Alas, it was completely wrong.

Advances in both observation technology and scientific understanding allowed people to first see, and much later accept, that in fact the earth was not the center of the universe, but rather, a speck in an ever-enlarging and increasingly humbling and complex cosmos. We are not the center of the universe.

It’s worth reflecting for a moment on the fact that the evidence for this scientific truth was there the whole time. But people didn’t perceive it until concepts like gravity allowed us to imagine the possibility of orbits. New understanding turns simple observation into meaningful perception. Without it, what one observes can be radically misinterpreted. New understanding can completely change the way we see a situation and how we see our self-interest with respect to it. Concepts determine, and often distort, percepts.

Today, most of the public is unaware that we are in the midst of a moment of new understanding. In recent decades, a revolution has taken place in our scientific and mathematical understanding of the systemic nature of the world we inhabit.

–We used to understand the world as stable and predictable, and now we see that it is unstable and inherently impossible to predict.

–We used to assume that what you do in one place has little or no effect on what happens in another place, but now we understand that small differences in initial choices can cascade into huge variations in ultimate consequences.

–We used to assume that people are primarily rational, and now we see that they are primarily emotional.

Now, consider: how might these new shifts in understanding affect our sense of who we are and what is good for us?

A Second Enlightenment and the Radical Redefinition of Self-Interest

In traditional economic theory, as in politics, we Americans are taught to believe that selfishness is next to godliness. We are taught that the market is at its most efficient when individuals act rationally to maximize their own self-interest without regard to the effects on anyone else. We are taught that democracy is at its most functional when individuals and factions pursue their own self-interest aggressively. In both instances, we are taught that an invisible hand converts this relentless clash and competition of self-seekers into a greater good.

These teachings are half right: most people indeed are looking out for themselves. We have no illusions about that. But the teachings are half wrong in that they enshrine a particular, and particularly narrow, notion of what it means to look out for oneself.

Conventional wisdom conflates self-interest and selfishness. It makes sense to be self-interested in the long run. It does not make sense to be reflexively selfish in every transaction. And that, unfortunately, is what market fundamentalism and libertarian politics promote: a brand of selfishness that is profoundly against our actual interest.

Let’s back up a step.

When Thomas Jefferson wrote in the Declaration of Independence that certain truths were held to be “self-evident,” he was not recording a timeless fact; he was asserting one into being. Today we read his words through the filter of modernity. We assume that those truths had always been self-evident. But they weren’t. They most certainly were not a generation before Jefferson wrote. In the quarter century between 1750 and 1775, in a confluence of dramatic changes in science, politics, religion, and economics, a group of enlightened British colonists in America grew gradually more open to the idea that all men are created equal and are endowed by their Creator with certain unalienable rights.

It took Jefferson’s assertion, and the Revolution that followed, to make those truths self-evident.

We point this out as a simple reminder. Every so often in history, new truths about human nature and the nature of human societies crystallize. Such paradigmatic shifts build gradually but cascade suddenly.

This has certainly been the case with prevailing ideas about what constitutes self-interest. Self-interest, it turns out, is not a fixed entity that can be objectively defined and held constant. It is a malleable, culturally embodied notion.

Think about it. Before the Enlightenment, the average serf believed that his destiny was foreordained. He fatalistically understood the scope of life’s possibility to be circumscribed by his status at birth. His concept of self-interest extended only as far as that of his nobleman. His station was fixed, and reinforced by tradition and social ritual. His hopes for betterment were pinned on the afterlife. Post-Enlightenment, that all changed. The average European now believed he was master of his own destiny. Instead of worrying about his odds of a good afterlife, he worried about improving his lot here and now. He was motivated to advance beyond what had seemed fated. He was inclined to be skeptical about received notions of what was possible in life.

The multiple revolutions of the Enlightenment— scientific, philosophical, spiritual, material, political— substituted reason for doctrine, agency for fatalism, independence for obedience, scientific method for superstition, human ambition for divine predestination. Driving this change was a new physics and mathematics that made the world seem rational and linear and subject to human mastery.

The science of that age had enormous explanatory and predictive power, and it yielded an entirely new way of conceptualizing self-interest. Now the individual, relying on his own wits, was to be celebrated for looking out for himself— and was expected to do so. As physics developed into a story of zero-sum collisions, as man mastered steam and made machines, as Darwin’s theories of natural selection and evolution took hold, the binding and life-defining power of old traditions and institutions waned. A new belief seeped osmotically across disciplines and domains: Every man can make himself anew. And before long, this mutated into another ethic: Every man for himself.

Compared to the backward-looking, authority-worshipping, passive notion of self-interest that had previously prevailed, this, to be sure, was astounding progress. It was liberation. Nowhere more than in America— a land of wide-open spaces, small populations, and easily erased histories— did this atomized ideal of self-interest take hold. As Steven Watts describes in his groundbreaking history The Republic Reborn, “the cult of the self-made man” emerged in the first three decades after Independence. The civic ethos of the founding evaporated amidst the giddy free-agent opportunity to stake a claim and enrich oneself. Two centuries later, our greed-celebrating, ambition-soaked culture still echoes this original song of self-interest and individualism.

Over time, the rational self-seeking of the American has been elevated into an ideology now as strong and totalizing as the divine right of kings once was in medieval Europe. Homo economicus, the rationalist self-seeker of orthodox economics, along with his cousin Homo politicus, gradually came to define what is considered normal in the market and politics. We’ve convinced ourselves that a million individual acts of selfishness magically add up to a common good. And we’ve paid a great price for such arrogance. We have today a dominant legal and economic doctrine that treats people as disconnected automatons and treats the mess we leave behind as someone else’s problem. We also have, in the Great Recession, painful evidence of the limits of this doctrine’s usefulness.

But now a new story is unfolding.

by Eric Liu and Nick Hanauer, Evonomics | Read more:
Image: Sasquatch Books

Renting a Friend in Tokyo

It's muggy and I'm confused. I don't understand where I am, though it was only a short walk from my Airbnb studio to this little curry place. I don’t understand the lunch menu, or even if it is a lunch menu. Could be a religious tract or a laminated ransom note. I’m new in Tokyo, and sweaty, and jet-lagged. But I am entirely at ease. I owe this to my friend Miyabi. She’s one of those reassuring presences, warm and eternally nodding and unfailingly loyal, like she will never leave my side. At least not for another 90 minutes, which is how much of her friendship I’ve paid for.

Miyabi isn’t a prostitute, or an escort or an actor or a therapist. Or maybe she’s a little of each. For the past five years she has been a professional rent-a-friend, working for a company called Client Partners.

My lunch mate pokes daintily at her curry and speaks of the friends whose money came before mine. There was the head of a prominent company, rich and “very clever” but conversationally marooned at “hello.” Discreetly and patiently, Miyabi helped draw other words out. There was the string of teenage girls struggling to navigate mystifying social dynamics; at their parents’ request, Miyabi would show up and just be a friend. You know, a normal, companionable, 27-year-old friend. She has been paid to cry at funerals and swoon at weddings, lest there be shame over a paltry turnout. Last year, a high schooler hired her and 20 other women just long enough to snap one grinning, peace-sign-flashing, I-totally-have-friends Instagram photo.

When I learned that friendship is rentable in Tokyo, it merely seemed like more Japanese wackiness, in a subset I’d come to think of as interest-kitsch. Every day in Japan, it seems, some weird new appetite is identified and gratified. There are cats to rent, after all, used underwear to purchase, owls to pet at owl bars. Cuddle cafés exist for the uncuddled, goat cafés for the un-goated. Handsome men will wipe away the tears of stressed-out female office workers. All to say I expected something more or less goofy when I lined up several English-speaking rent-a-friends for my week in Tokyo. The agency Miyabi works for exists primarily for lonely locals, but the service struck me as well suited to a solo traveler, too, so I paid a translator to help with the arrangements. Maybe a more typical Japanese business would’ve bristled at this kind of intrusion from a foreigner. But the rent-a-friend world isn’t typical, I would soon learn, and in some ways it wants to subvert all that is.

Contrived Instagram photos aside, Miyabi’s career mostly comprises the small, unremarkable acts of ordinary friendship: Shooting the breeze over dinner. Listening on a long walk. Speaking simple kindnesses on a simple drive to the client’s parents’ house, simply to pretend you two are in love and absolutely on the verge of getting married, so don’t even worry, Mom and Dad.

As a girl, Miyabi longed to be a flight attendant—Continental, for some reason—and that tidy solicitousness still emanates. She wears a smart gray skirt and a gauzy beige blouse over which a sheet of impeccable hair drapes weightlessly. She doesn’t care that I am peccable. She smiles when I smile, touches my arm to make a point. Her graciousness cloaks a demanding job. With an average of 15 gigs a week, Miyabi’s hours are irregular and bleed from day into night. The daughter of a doctor and a nurse, she still struggles to convince her parents that her relatively new field is legitimate. The money is fine but not incredible; I’m paying her roughly $115 for two hours, some percentage of which Client Partners keeps. So why does she do it? Miyabi puts down her chopsticks and explains: It helps people—real and lonesome people in need of, well, whatever ineffable thing friendship means to our species. “So many people are good at life online or life at work, but not real life,” she says, pantomiming someone staring at a phone. For such clients a dollop of emotional contact with a friendly person is powerful, she adds, even with a price tag attached.

So this isn’t secretly about romance? I ask. Not at all, she replies. (...)

During my time in Tokyo I develop a seamless routine of leaving the apartment, drifting vaguely toward the address on my phone, squinting confusedly, doubling back, eating some gyoza, and eventually stumbling onto my destination. On a drizzly Friday morning, my destination is the Client Partners headquarters, a small but airy suite in a nondescript Shibuya district office building. I rope my translator in for this, and we’re met by a round-faced woman in a long robelike garment. Maki Abe is the CEO, and for the next hour we sit across a desk from her and talk not about wacky interest-kitsch but about a nation’s spiritual health.

“We look like a rich country from the outside, but mentally we have problems,” Maki says. She speaks slowly, methodically. “Japan is all about face. We don’t know how to talk from the gut. We can’t ask for help. So many people are alone with their problems, and stuck, and their hearts aren’t touching.”

Maki and I bowed when we met, but we also shook hands. She brings it up later. “There are many people who haven’t been touched for years. We have clients who start to cry when we shake hands with them.”

It’s not that people lack friends, she says. Facebook, Instagram— scroll around and you find a country bursting with mugging, partying companionship. It just isn’t real, that’s all. “There’s a real me and a masked me. We have a word for the lonely gap in between that: kodoku.”

by Chris Colin, Afar |  Read more:
Image: Landon Nordeman

Emojimania

When it comes to emojis, the future is very, very ... Face with Tears of Joy.

If you don't know what that means then you: a) aren't a 14-year-old girl. b) love to hate those tiny pictures that people text you all the time. Or c) are nowhere near a smartphone or online chat.

Otherwise, here in 2016, it's all emojis, all the time. And Face with Tears of Joy, by the way, is a bright yellow happy face with a classic, toothy grin as tears fall.

The Face was chosen by Oxford Dictionaries as its 2015 "word" of the year, based on its popularity and reflecting the rise of emojis to help charitable causes, promote businesses and generally assist oh-so-many-more of us in further expressing ourselves on social media and in texts. (...)

WHERE DID THEY COME FROM?

While there's now a strict definition of emojis as images created through standardized computer coding that works across platforms, they have many, many popular cousins by way of "stickers," which are images without the wonky back end. Kimojis, the invention of Kim Kardashian, aren't technically emojis, for instance, at least in the eyes of purists.
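
A quick technical aside, not from the AP piece: the cross-platform “standardized computer coding” mentioned above is the Unicode Standard. Each emoji is just an agreed-upon numbered code point, and every phone or app draws its own picture for that number, which is why the same emoji looks slightly different on Apple, Google and Microsoft devices. A minimal Python sketch, purely illustrative (the code point and name shown are standard Unicode facts):

    import unicodedata

    # "Face with Tears of Joy" is the single Unicode code point U+1F602.
    face = "\U0001F602"
    print(hex(ord(face)))          # 0x1f602
    print(unicodedata.name(face))  # FACE WITH TEARS OF JOY

    # A "sticker," by contrast, is just an image file a particular app passes
    # around; there is no shared code point behind it, which is why it doesn't
    # travel across platforms the way a true emoji does.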

In tech lore, the great emoji explosion has a grandfather in Japan and his name is Shigetaka Kurita. He was inspired in the 1990s by manga and kanji when he and others on a team working to develop what is considered the world's first widespread mobile Internet platform came up with some rudimentary characters. They were working a good decade before Apple developed a set of emojis for the first iPhones.

Emojis are either loads of fun or the bane of your existence. One thing is sure: There's no worry they'll become a "language" in and of themselves. While everybody from Coca-Cola to the Kitten Bowl has come up with little pictographs to whip up interest in themselves, emojis exist mainly to nuance the words regular folk type, standing in for tone of voice, facial expressions and physical gestures - extended middle finger emoji added recently.

"Words aren't dead. Long live the emoji, long live the word," laughed Gretchen McCulloch, a Toronto linguist who, like some others in her field, is studying emojis and other aspects of Internet language.

Emojis have been compared to hieroglyphs, but McCulloch is not on board. That ancient picture-speak included symbols with literal meaning, but others stood in for actual sound.

Emoji enthusiasts have played with telling word-free stories using their little darlings alone and translating song lyrics into the pictures, "but they can't be put together like letters to make a pronounceable word," McCulloch said.

THE EMOJI OVERSEERS

Back when Kurita was creating some of the first emojis, chaos already had ensued in trying to make all the pagers and all the emerging mobile phones and the newfangled thing called email and everything else Internet-ish that was bubbling up speak to each other. And also to allow people in Japan, used to a more formal way of communicating, to make themselves understood in the emerging shorthand.

Enter the Unicode Consortium, on the coding end. It's a volunteer nonprofit industry organization working in collaboration with the International Organization for Standardization, the latter an independent non-governmental body that helps develop specifications for all sorts of things, including emojis, on a global scale.

Unicode, co-founded and headed by Mark Davis in Zurich, has a big, big mission, in which emojis have a place: making sure all the languages in the world are encoded and supported across platforms and devices.

The key word here is volunteer. Davis has a whole other job at Google, but he has dedicated himself to the task above. He also co-chairs the consortium's emoji subcommittee, a cog in a vetting process for new emojis that can take up to two years before new ones are put into the Unicode Standard for the likes of Apple, Google, Microsoft and Facebook to do with what they wish.

Where does Davis sit with the rapid rise of emojis?

"It has been a surprise. We didn't fully understand how popular they were going to be," he said.

At the moment, Unicode has released 1,624 emojis, with more options when you factor in modifiers for such things as skin tone. The emoji subcommittee fields about 100 proposals for new emojis a year. Not all make it through the vetting process.
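
Another aside, not from the article: those skin-tone options are implemented as five extra code points, the Fitzpatrick modifiers U+1F3FB through U+1F3FF, appended directly after a base emoji; a platform that understands them renders the pair as one tinted glyph. A small illustrative Python sketch, assuming a Python build with a Unicode database covering Unicode 8.0 or later:

    import unicodedata

    base = "\U0001F44D"   # THUMBS UP SIGN
    tone = "\U0001F3FD"   # EMOJI MODIFIER FITZPATRICK TYPE-4 (a medium skin tone)

    print(unicodedata.name(base))  # THUMBS UP SIGN
    print(unicodedata.name(tone))  # EMOJI MODIFIER FITZPATRICK TYPE-4

    # Concatenating the two code points yields one emoji drawn in the chosen tone;
    # older platforms that don't know the modifier show the two side by side instead.
    combined = base + tone
    print(combined)
    print(len(combined))           # 2 -- still two code points under the hood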

"We don't encode emoji for movie or fictional people, or for deities. And we're not going to give you a Donald Trump," Davis said.

Gender, he said, is among the next frontiers for emojis. A proposal for a female runner, for instance, will be voted on in May, as critics have questioned the male-female divide. The consortium is trying to come up with a way to more easily and quickly customize emoji for gender, hair color and other features, Davis said.

"Personally, I am very much looking forward to a face palm emoji," he joked.

by Leanne Italie, AP |  Read more:
Image: via:

The Rest Is Advertising

Recently, I landed the tech-journalism equivalent of a Thomas Pynchon interview: I got someone from Twitter to answer my call. Notorious for keeping its communications department locked up tight, Twitter is not only the psychic bellwether and newswire for the media industry, but also a stingy interview-granter, especially now that it’s floundering with poor profits, executive turnover, and a toxic culture. I’ve tried to get them on the record before. No one has replied.

This time, though, a senior executive from one of Twitter’s key divisions seemed happy—eager, even—to talk with me, and for as long as I wanted. You might even say he prattled. I was a little stunned: I’d been writing about tech matters for years as a freelance journalist, and this was far more access than I was used to receiving. What was different? I was calling as a reporter—but not exactly. I was writing a story for The Atlantic—but not for the news division. Instead, I was working for a moneymaking wing of The Atlantic called Re:think, and I was writing sponsored content.

In case you haven’t heard, journalism is now in perpetual crisis, and conditions are increasingly surreal. The fate of the controversialists at Gawker rests on a delayed jury trial over a Hulk Hogan sex tape. Newspapers publish directly to Facebook, and Snapchat hires journalists away from CNN. Last year, the Pulitzer Prizes doubled as the irony awards; one winner in the local reporting category, it emerged, had left his newspaper job months earlier for a better-paying gig in PR. “Is there a future in journalism and writing and the Internet?” Choire Sicha, cofounder of The Awl, wrote last January. “Haha, FUCK no, not really.” Even those who have kept their jobs in journalism, he explained, can’t say what they might be doing, or where, in a few years’ time. Disruption clouds the future even as it holds it up for worship.

But for every crisis in every industry, a potential savior emerges. And in journalism, the latest candidate is sponsored content.

Also called native advertising, sponsored content borrows the look, the name recognition, and even the staff of its host publication to push brand messages on unsuspecting viewers. Forget old-fashioned banner ads, those most reviled of early Internet artifacts. This is vertically integrated, barely disclaimed content marketing, and it’s here to solve journalism’s cash flow problem, or so we’re told. “15 Reasons Your Next Vacation Needs to Be in SW Florida,” went a recent BuzzFeed headline—just another listicle crying out for eyeballs on an overcrowded homepage, except this one had a tiny yellow sidebar to announce, in a sneaky whisper, “Promoted by the Beaches of Fort Myers & Sanibel.”

Advertorials are what we expect out of BuzzFeed, the ur-source of digital doggerel and the first media company to open its own in-house studio—a sort of mini Saatchi & Saatchi—to build “original, custom content” for brands. But now legacy publishers are following BuzzFeed’s lead, heeding the call of the digital co-marketers and starting in-house sponsored content shops of their own. CNN opened one last spring, and its keepers, with nary a trace of self-awareness, dubbed it Courageous. The New York Times has T Brand Studio (clients include Dell, Shell, and Goldman Sachs), the S. I. Newhouse empire has something called 23 Stories by Condé Nast, and The Atlantic has Re:think. As the breathless barkers who sell the stuff will tell you, sponsored content has something for everyone. Brands get their exposure, publishers get their bankroll, freelance reporters get some work on the side, and readers get advertising that goes down exceptionally easy—if they even notice they’re seeing an ad at all.

The promise is that quality promotional content will sit cheek-by-jowl with traditional journalism, aping its style and leveraging its prestige without undermining its credibility.

The problem, as I learned all too quickly when I wrote my sponsored story for The Atlantic (paid for by a prominent tech multinational), is that the line between what’s sponsored and what isn’t—between advertising and journalism—has already been rubbed away.

by Jacob Silverman, The Baffler |  Read more:
Image: Eric Hanson

Pierre Koenig’s Stahl House


Case Study House #22, aka Stahl House, might be the ultimate mid-Century dream home. The story of the home begins in May 1954 when the Stahl family, who still own the place, invested in a small, rather awkward lot high in the Hollywood Hills. In 1956, Buck Stahl built a model of the home he and wife Carlotta wanted to live in. In 1957, the Stahls showed it to Pierre Koenig (October 17, 1925 – April 4, 2004), who along with other architects of the age sought to bring modernist style and industrial efficiency to affordable suburban residences.

Could life in post-war USA be improved through architecture? Was Le Corbusier right when he said that houses were “machines for living”?

On April 8th, 1959, Stahl House was inducted into the Case Study House program by Arts & Architecture magazine, becoming Case Study House #22. (This followed Pierre Koenig’s Case Study House #21.) The magazine commissioned architects – Richard Neutra, Raphael Soriano, Craig Ellwood, Charles and Ray Eames, Eero Saarinen, Thornton Abell, A. Quincy Jones, Ralph Rapson and others – to design and build inexpensive and efficient model homes for the United States residential housing boom caused by the end of World War II. In all, 36 residences were made between 1945 and 1964. (...)

Work on #22 began in May 1959 and was completed a year later in May of 1960.

Pierre once described the process of building Stahl House as “trying to solve a problem – the client had champagne tastes and a beer budget.” Bruce Stahl, Buck and Carlotta’s son, adds: “We were a blue collar family living in a white collar house. Nobody famous ever lived here.”

by Karen Strike, Flashbak |  Read more:
Image: uncredited

KC & The Sunshine Band

RRS Boaty McBoatface


The good news about the Natural Environment Research Council’s decision to crowdsource a name for its latest polar research vessel is unprecedented public engagement in a sometimes niche area of scientific study. The bad news? Sailing due south in a vessel that sounds like it was christened by a five-year-old who has drunk three cartons of Capri-Sun.

Just a day after the NERC launched its poll to name the £200m vessel – which will first head to Antarctica in 2019 – the clear favourite was RRS Boaty McBoatface, with well over 18,000 votes. The RRS stands for royal research ship. (...)

The NERC – which was wise enough to ask that people “suggest” names, giving it future wriggle room – asked for ideas to be inspirational.

Some undoubtedly were, with its website, which kept crashing on Sunday under the weight of traffic, showing dozens of serious suggestions connected to inspiring figures such as Sir David Attenborough, or names such as Polar Dream.

But the bulk of entries were distinctly less sober. Aside from the leading contender, ideas included Its Bloody Cold Here, What Iceberg, Captain Haddock, Big Shipinnit, Science!!! and Big Metal Floaty Thingy-thing.

by Peter Walker, The Guardian |  Read more:
Image: NERC

Saturday, March 19, 2016


André Robé - 47th Street - New York - 1957
via:

Depends On Your Point of View

Last night I came upon a new exhibit in my running critique. I will show it to you, and then try to interpret what it means. It happened on a program where he said, she said and “we’ll have to leave it there” are a kind of house style: The Newshour on PBS. (Link.) Let’s set the scene:

* A big story: the poisoning of Flint, Michigan’s water supply— a major public health disaster.
* Latest news: the House Committee on Oversight and Government Reform held a hearing at which Michigan Governor Rick Snyder, a Republican, and EPA Administrator Gina McCarthy, an Obama appointee, both testified.
* Outcome: They were ritualistically denounced and told to resign by members of Congress in the opposing party. (Big surprise.)
* Cast of characters in the clip I’m about to show you: Judy Woodruff of the Newshour is host and interviewer. David Shepardson is a Reuters reporter in the Washington bureau who has been covering the Flint disaster. (Formerly of the Detroit News and a Michigan native.) Marc Edwards is a civil and environmental engineer and professor at Virginia Tech. (“He’s widely credited with helping to expose the Flint water problems. He testified before the same House committee earlier this week.”)

Now watch what happens when Woodruff asks the Reuters reporter: who bears responsibility for the water crisis in Flint? Which individual or agency is most at fault here? (The part I’ve isolated is 2:22.)

Here is what I saw. What did you see?


The Reuters journalist defaults on the question he was asked. He cannot name a single agency or person who is responsible. The first thing and the last thing he says is “depends on your point of view.” These are weasel words. In between he manages to isolate the crucial moment — when the state of Michigan failed to add “corrosion control” to water drawn from the Flint River — but he cannot say which official or which part of government is responsible for that lapse. Although he’s on the program for his knowledge of a story he’s been reporting on for months, the question of where responsibility lies seems to flummox and decenter him. He implies that he can’t answer because there actually is no answer, just the clashing points of view.

Republicans in Congress scream at Obama’s EPA person: you failed! Democrats in Congress scream at a Republican governor: you failed! Our reporter on the scene shrugs, as if to say: take your pick, hapless citizens! His actual words: “Splitting up the blame depends on your point of view.”

This is a sentiment that Judy Woodruff, who is running the show, can readily understand. He’s talking her language when he says “depends on your point of view.” That is just the sort of down-the-middle futility that PBS Newshour traffics in. Does she press him to do better? Does she say, “Our viewers want to know: how can such a thing happen in the United States? You’ve been immersed in the story, can you at least tell us where to look if we’re searching for accountability?” She does not. Instead, she sympathizes with David Shepardson. “It’s impossible to separate it from the politics.” But we’ll try!

For the try she has to turn to the academic on the panel, who then gives a little master class in how to answer the question: who is at fault here? Here are the points Marc Edwards of Virginia Tech makes:

* Governor Snyder failed to listen to the people of Flint when they complained about the water.
* Snyder trusted too much in the Michigan Department of Environmental Quality and the EPA.
* He has accepted some blame for these failures, calling the Flint water crisis his Katrina.
* EPA, by contrast, has been evading responsibility for its part in the scandal.
* EPA called the report by its own whistleblower “inconclusive” when it really wasn’t.
* The agency hesitated and doubted itself when it came to enforcing federal law. WTF?
* EPA said it had been “strong-armed” by the state officials as if they had more authority than the Federal government.

Who is responsible? That was the question on the PBS table. If we listen to the journalist on the panel we learn: “it depends on which team you’re on,” and “they’re all playing politics,” and “it’s impossible to separate truth from spin.”

Professor Marc Edwards, more confident in his ability to speak truth to power, cuts through all that crap: There are different levels of failure and layers of responsibility here, he says. Some people are further along than others in admitting fault. Yes, it’s complicated — as real life usually is — but that doesn’t mean it’s impossible to assign responsibility. Nor does responsibility lie in one person’s lap or one agency’s hands. Multiple parties are involved. But when people who have some responsibility obfuscate, that’s outrageous. And it has to be called out.

Now I ask you: who’s in the ivory tower here? The journalist or the academic?

I know what you’re thinking, PBS Newshour people. Hey, we’re the ones who booked Marc Edwards on our show and let him run with it. That’s good craft in broadcast journalism! Fair point, Newshour people. All credit to you for having him on. Good move. Full stop.

What interests me here is the losing gambit and musty feel of formulaic, down-the-middle journalism. The misplaced confidence of the correspondent positioning himself between warring parties. The spectacle of a Reuters reporter, steeped in the particulars of the case, defaulting on the basic question of who is responsible. The forfeiture of Fourth Estate duties to other, adjacent professions. The union with gridlock and hopelessness represented in those weasel words: “depends on your point of view.” The failure of nerve when Judy Woodruff lets a professional peer dodge her question— a thing they chortle about and sneer at when politicians do it. The contribution that “not our job” journalists make to unaccountable government, and to public cynicism. The bloodlessness and lack of affect in the journalist commenting on the Flint crisis, in contrast to the academic who is quietly seething.

by Jay Rosen, Press Think |  Read more:
Image: YouTube

Motion Design is the Future of UI

Wokking the Suburbs


As he stepped woozily into the first American afternoon of his life, the last thing my father wanted to do was eat Chinese food. He scanned the crowd for the friend who’d come from Providence (my father would stay with this friend for a few weeks before heading to Amherst to begin his graduate studies). That friend didn’t know how to drive, however, so he promised to buy lunch for another friend in exchange for a ride to the Boston airport. The two young men greeted my father at the gate, exchanged some backslaps, and rushed him to the car, where they stowed the sum total of his worldly possessions in the trunk and folded him into the backseat. Then they gleefully set off for Boston’s Chinatown, a portal back into the world my father (and these friends before him) had just left behind. Camaraderie and goodwill were fine enough reasons to drive hours to fetch someone from the airport; just as important was the airport’s proximity to food you couldn’t get in Providence.

He remembers nothing about the meal itself. He was still nauseous from the journey—Taipei to Tokyo to Seattle to Boston—and, after all, he’d spent every single day of the first twenty-something years of his life eating Chinese food.

“For someone who had just come from Taiwan, it was no good. For someone who came from Providence, it must have been very good!” he laughs.

When my mother came to the United States a few years after my father (Taipei-Tokyo-San Francisco), the family friends who picked her up at least had the decency to wait a day and allow her to find her legs before taking her to a restaurant in the nearest Chinatown.

“I remember the place was called Jing Long, Golden Dragon. Many years later there was a gang massacre in there,” she casually recalls. “I still remember the place. It was San Francisco’s most famous. The woman who brought me was very happy but I wasn’t hungry.

“Of course, they always think if you come from Taiwan or China you must be hungry for Chinese food.”

It was the early 1970s, and my parents had each arrived in the United States with only a vague sense of what their respective futures held, beyond a few years of graduate studies. They certainly didn’t know they would be repeating these treks in the coming decades, subjecting weary passengers (namely, me) to their own long drives in search of Chinese food. I often daydream about this period of their lives and imagine them grappling with some sense of terminal dislocation, starving for familiar aromas, and regretting the warnings of their fellow new Americans that these were the last good Chinese spots for the next hundred or so miles. They would eventually meet and marry in Champaign-Urbana, Illinois (where they acquired a taste for pizza), and then live for a spell in Texas (where they were told that the local steak house wasn’t for “their kind”), before settling in suburban California. Maybe this was what it meant to live in America. You could move around. You were afforded opportunities unavailable back home. You were free to go by “Eric” at work and name your children after US presidents. You could refashion yourself a churchgoer, a lover of rum-raisin ice cream, an aficionado of classical music or Bob Dylan, a fan of the Dallas Cowboys because everyone else in the neighborhood seemed to be one. But for all the opportunities, those first days in America had prepared them for one reality: sometimes you had to drive great distances in order to eat well. (...)

Suburbs are seen as founts of conformity, but they are rarely places beholden to tradition. Nobody goes to the suburbs on a vision quest—most are drawn instead by the promise of ready-made status, a stability in life modeled after the stability of neat, predictable blocks and gated communities. And yet, a suburb might also be seen as a slate that can be perpetually wiped clean to accommodate new aspirations.

There remain vestiges of what stood before, and these histories capture the cyclical aspirations that define the suburb: Cherry Tree Lane, where an actual orchard was once the best possible use of free acreage; the distinctive, peaked roof of a former Sizzler turned dim sum spot; the Hallmark retailer, all windows and glass ledges, that is now a noodle shop; and the kitschy railroad-car diner across the street that’s now another noodle shop. But Cupertino was still in transition throughout the 1980s and early 1990s. Monterey Park, hundreds of miles to our south, was the finished article.

All suburban Chinatowns owe something to Frederic Hsieh, a young realtor who regarded Monterey Park and foresaw the future. He began buying properties all over this otherwise generic community in the mid-1970s and blitzed newspapers throughout Taiwan and Hong Kong with promises of a “Chinese Beverly Hills” located a short drive from Los Angeles’s Chinatown. While there had been a steady stream of Chinese immigrants over the previous decade, Hsieh guessed that the uncertain political situation in Asia combined with greater business opportunities in the United States would bring more of them to California. Instead of the cramped, urban Chinatowns in San Francisco or Flushing, Hsieh wanted to offer these newcomers a version of the American dream: wide streets, multicar garages, good schools, minimal culture shock, and a short drive to Chinatown. In 1977, he invited twenty of the city’s most prominent civic and business leaders to a meeting over lunch (Chinese food, naturally) and explained that he was building a “modern-day mecca” for the droves of Chinese immigrants on their way. This didn’t go over so well with some of Monterey Park’s predominantly white establishment, who mistook his bluster for arrogance. As a member of the city’s Planning Commission later told the Los Angeles Times, “Everyone in the room thought the guy was blowing smoke. Then when I got home I thought, what gall. What ineffable gall. He was going to come into my living room and change my furniture?”

Gall was contagious. The following year, Wu Jin Shen, a former stockbroker from Taiwan, opened Diho Market, Monterey Park’s first Asian grocery. Wu would eventually oversee a chain of stores with four hundred employees and $30 million in sales. Soon after, a Laura Scudder potato-chip factory that had been remade into a Safeway was remade into an Asian supermarket. Another grocery store was refitted with a Pagoda-style roof.

Chinese restaurateurs were the shock troops of Hsieh’s would-be conquest. “The first thing Monterey Park residents noticed were the Chinese restaurants that popped up,” a different but no less alarmist piece in the Times recalled. “Then came the three Chinese shopping centers, the Chinese banks, and the Chinese theater showing first-run movies from Hong Kong—with English subtitles.”

In Monterey Park, such audacity (if you wanted to call it that) threatened the community’s stability. Residents offended by, say, the razing of split-level ranch-style homes from the historical 1970s to accommodate apartment complexes drew on their worst instincts to try and push through “Official English” legislation in the mid-1980s. “Will the Last American to Leave Monterey Park Please Bring the Flag?” bumper stickers were distributed.

But this hyperlocal kind of nativism couldn’t turn back the demographic tide. In 1990, Monterey Park became the first city in the continental United States with a majority-Asian population. Yet Monterey Park’s growing citizenry didn’t embody a single sensibility. There were affluent professionals from Taiwan and Hong Kong as well as longtime residents of Los Angeles’s Chinatown looking to move to the suburbs. As Tim Fong, a sociologist who has studied Monterey Park, observed in the Chicago Tribune, “The Chinese jumped a step. They didn’t play the (slow) assimilation game.” This isn’t to say these new immigrants rejected assimilation. They were just becoming something entirely new.

Monterey Park became the first suburb that Chinese people would drive for hours to visit and eat in, for the same reasons earlier generations of immigrants had sought out the nearest urban Chinatown. And the changing population and the wealth they brought with them created new opportunities for all sorts of businesspeople, especially aspiring restaurateurs. The typical Chinese American restaurant made saucy, ostentatiously deep-fried concessions to mainstream appetites, leading to the ever-present rumor that most establishments had “secret menus” meant for more discerning eaters. It might be more accurate to say that most chefs at Chinese restaurants are more versatile than they initially let on—either that or families like mine possess Jedi-level powers of off-the-menu persuasion. But in a place like Monterey Park, the pressure to appeal to non-Chinese appetites disappeared. The concept of “mainstream” no longer held; neck bones and chicken feet and pork bellies and various gelatinous things could pay the bills and then some.

by Hua Hsu, Lucky Peach |  Read more:
Image: Yina Kim

The Secrets of the Wave Pilots

At 0400, three miles above the Pacific seafloor, the searchlight of a power boat swept through a warm June night last year, looking for a second boat, a sailing canoe. The captain of the canoe, Alson Kelen, potentially the world’s last-ever apprentice in the ancient art of wave-piloting, was trying to reach Aur, an atoll in the Marshall Islands, without the aid of a GPS device or any other way-finding instrument. If successful, he would prove that one of the most sophisticated navigational techniques ever developed still existed and, he hoped, inspire efforts to save it from extinction. Monitoring his progress from the power boat were an unlikely trio of Western scientists — an anthropologist, a physicist and an oceanographer — who were hoping his journey might help them explain how wave pilots, in defiance of the dizzying complexities of fluid dynamics, detect direction and proximity to land. More broadly, they wondered if watching him sail, in the context of growing concerns about the neurological effects of navigation-by-smartphone, would yield hints about how our orienteering skills influence our sense of place, our sense of home, even our sense of self.

When the boats set out in the afternoon from Majuro, the capital of the Marshall Islands, Kelen’s plan was to sail through the night and approach Aur at daybreak, to avoid crashing into its reef in the dark. But around sundown, the wind picked up and the waves grew higher and rounder, sorely testing both the scientists’ powers of observation and the structural integrity of the canoe. Through the salt-streaked windshield of the power boat, the anthropologist, Joseph Genz, took mental field notes — the spotlighted whitecaps, the position of Polaris, his grip on the cabin handrail — while he waited for Kelen to radio in his location or, rather, what he thought his location was.

The Marshalls provide a crucible for navigation: 70 square miles of land, total, comprising five islands and 29 atolls, rings of coral islets that grew up around the rims of underwater volcanoes millions of years ago and now encircle gentle lagoons. These green dots and doughnuts make up two parallel north-south chains, separated from their nearest neighbors by a hundred miles on average. Swells generated by distant storms near Alaska, Antarctica, California and Indonesia travel thousands of miles to these low-lying spits of sand. When they hit, part of their energy is reflected back out to sea in arcs, like sound waves emanating from a speaker; another part curls around the atoll or island and creates a confused chop in its lee. Wave-piloting is the art of reading — by feel and by sight — these and other patterns. Detecting the minute differences in what, to an untutored eye, looks no more meaningful than a washing-machine cycle allows a ri-meto, a person of the sea in Marshallese, to determine where the nearest solid ground is — and how far off it lies — long before it is visible.

In the 16th century, Ferdinand Magellan, searching for a new route to the nutmeg and cloves of the Spice Islands, sailed through the Pacific Ocean and named it ‘‘the peaceful sea’’ before he was stabbed to death in the Philippines. Only 18 of his 270 men survived the trip. When subsequent explorers, despite similar travails, managed to make landfall on the countless islands sprinkled across this expanse, they were surprised to find inhabitants with nary a galleon, compass or chart. God had created them there, the explorers hypothesized, or perhaps the islands were the remains of a sunken continent. As late as the 1960s, Western scholars still insisted that indigenous methods of navigating by stars, sun, wind and waves were not nearly accurate enough, nor indigenous boats seaworthy enough, to have reached these tiny habitats on purpose.

Archaeological and DNA evidence (and replica voyages) have since proved that the Pacific islands were settled intentionally — by descendants of the first humans to venture out of sight of land, beginning some 60,000 years ago, from Southeast Asia to the Solomon Islands. They reached the Marshall Islands about 2,000 years ago. The geography of the archipelago that made wave-piloting possible also made it indispensable as the sole means of collecting food, trading goods, waging war and locating unrelated sexual partners. Chiefs threatened to kill anyone who revealed navigational knowledge without permission. In order to become a ri-meto, you had to be trained by a ri-meto and then pass a voyaging test, devised by your chief, on the first try. As colonizers from Europe introduced easier ways to get around, the training of ri-metos declined and became restricted primarily to an outlying atoll called Rongelap, where a shallow circular reef, set between ocean and lagoon, became the site of a small wave-piloting school.

In 1954, an American hydrogen-bomb test less than a hundred miles away rendered Rongelap uninhabitable. Over the next decades, no new ri-metos were recognized; when the last well-known one died in 2003, he left a 55-year-old cargo-ship captain named Korent Joel, who had trained at Rongelap as a boy, the effective custodian of their people’s navigational secrets. Because of the radioactive fallout, Joel had not taken his voyaging test and thus was not a true ri-meto. But fearing that the knowledge might die with him, he asked for and received historic dispensation from his chief to train his younger cousin, Alson Kelen, as a wave pilot.

Now, in the lurching cabin of the power boat, Genz worried about whether Kelen knew what he was doing. Because Kelen was not a ri-meto, social mores forced him to insist that he was not navigating but kajjidede, or guessing. The sea was so rough tonight, Genz thought, that even for Joel, picking out a route would be like trying to hear a whisper in a gale. A voyage with this level of navigational difficulty had never been undertaken by anyone who was not a ri-meto or taking his test to become one. Genz steeled himself for the possibility that he might have to intervene for safety’s sake, even if this was the best chance that he and his colleagues might ever get to unravel the scientific mysteries of wave-piloting — and Kelen’s best chance to rally support for preserving it. Organizing this trip had cost $72,000 in research grants, a fortune in the Marshalls.

The radio crackled. ‘‘Jebro, Jebro, this is Jitdam,’’ Kelen said. ‘‘Do you copy? Over.’’

Genz swallowed. The cabin’s confines, together with the boat’s diesel odors, did nothing to allay his motion sickness. ‘‘Copy that,’’ he said. ‘‘Do you know where you are?’’

Though mankind has managed to navigate itself across the globe and into outer space, it has done so in defiance of our innate way-finding capacities (not to mention survival instincts), which are still those of forest-dwelling homebodies. Other species use far more sophisticated cognitive methods to orient themselves. Dung beetles follow the Milky Way; the Cataglyphis desert ant dead-reckons by counting its paces; monarch butterflies, on their thousand-mile, multigenerational flight from Mexico to the Rocky Mountains, calculate due north using the position of the sun, which requires accounting for the time of day, the day of the year and latitude; honeybees, newts, spiny lobsters, sea turtles and many others read magnetic fields. Last year, the fact of a ‘‘magnetic sense’’ was confirmed when Russian scientists put reed warblers in a cage that simulated different magnetic locations and found that the warblers always tried to fly ‘‘home’’ relative to whatever the programmed coordinates were. Precisely how the warblers detected these coordinates remains unclear. As does, for another example, the uncanny capacity of godwits to hatch from their eggs in Alaska and, alone, without ever stopping, take off for French Polynesia. Clearly they and other long-distance migrants inherit a mental map and the ability to constantly recalibrate it. What it looks like in their mind’s eye, however, and how it is maintained day and night, across thousands of miles, is still a mystery. (...)

Genz met Alson Kelen and Korent Joel in Majuro in 2005, when Genz was 28. A soft-spoken, freckled Wisconsinite and former Peace Corps volunteer who grew up sailing with his father, Genz was then studying for a doctorate in anthropology at the University of Hawaii. His adviser there, Ben Finney, was an anthropologist who helped lead the voyage of Hokulea, a replica Polynesian sailing canoe, from Hawaii to Tahiti and back in 1976; the success of the trip, which involved no modern instrumentation and was meant to prove the efficacy of indigenous ships and navigational methods, stirred a resurgence of native Hawaiian language, music, hula and crafts. Joel and Kelen dreamed of a similar revival for Marshallese sailing — the only way, they figured, for wave-piloting to endure — and contacted Finney for guidance. But Finney was nearing retirement, so he suggested that Genz go in his stead. With their chief’s blessing, Joel and Kelen offered Genz rare access, with one provision: He would not learn wave-piloting himself; he would simply document Kelen’s training.

Joel immediately asked Genz to bring scientists to the Marshalls who could help Joel understand the mechanics of the waves he knew only by feel — especially one called di lep, or backbone, the foundation of wave-piloting, which (in ri-meto lore) ran between atolls like a road. Joel’s grandfather had taught him to feel the di lep at the Rongelap reef: He would lie on his back in a canoe, blindfolded, while the old man dragged him around the coral, letting him experience how it changed the movement of the waves.

But when Joel took Genz out in the Pacific on borrowed yachts and told him they were encountering the di lep, he couldn’t feel it. Kelen said he couldn’t, either. When oceanographers from the University of Hawaii came to look for it, their equipment failed to detect it. The idea of a wave-road between islands, they told Genz, made no sense.

Privately, Genz began to fear that the di lep was imaginary, that wave-piloting was already extinct. On one research trip in 2006, when Korent Joel went below deck to take a nap, Genz changed the yacht’s course. When Joel awoke, Genz kept Joel away from the GPS device, and to the relief of them both, Joel directed the boat toward land. Later, he also passed his ri-meto test, judged by his chief, with Genz and Kelen crewing.

Worlds away, Huth, a worrier by nature, had become convinced that preserving mankind’s ability to way-find without technology was not just an abstract mental exercise but also a matter of life and death. In 2003, while kayaking alone in Nantucket Sound, fog descended, and Huth — spring-loaded and boyish, with a near-photographic memory — found his way home using local landmarks, the wind and the direction of the swells. Later, he learned that two young undergraduates, out paddling in the same fog, had become disoriented and drowned. This prompted him to begin teaching a class on primitive navigation techniques. When Huth met Genz at an academic conference in 2012 and described the methodology of his search for the Higgs boson and dark energy — subtracting dominant wave signals from a field, until a much subtler signal appears underneath — Genz told him about the di lep, and it captured Huth’s imagination. If it was real, and if it really ran back and forth between islands, its behavior was unknown to physics and would require a supercomputer to model. That a person might be able to sense it bodily amid the cacophony generated by other ocean phenomena was astonishing.

Huth began creating possible di lep simulations in his free time and recruited van Vledder’s help. Initially, the most puzzling detail of Genz’s translation of Joel’s description was his claim that the di lep connected each atoll and island to all 33 others. That would yield a trillion trillion paths, far too many for even the most adept wave pilot to memorize. Most of what we know about ocean waves and currents — including what will happen to coastlines as climate change leads to higher sea levels (of special concern to the low-lying Netherlands and Marshall Islands) — comes from models that use global wind and bathymetry data to simulate what wave patterns probably look like at a given place and time. Our understanding of wave mechanics, on which those models are based, is wildly incomplete. To improve them, experts must constantly check their assumptions with measurements and observations. Perhaps, Huth and van Vledder thought, there were di leps in every ocean, invisible roads that no one was seeing because they didn’t know to look.

by Kim Tingley, NY Times |  Read more:
Image: Mark Peterson/Redux

Under the Crushing Weight of the Tuscan Sun

I have sat on Tuscan-brown sofas surrounded by Tuscan-yellow walls, lounged on Tuscan patios made with Tuscan pavers, surrounded by Tuscan landscaping. I have stood barefoot on Tuscan bathroom tiles, washing my hands under Tuscan faucets after having used Tuscan toilets. I have eaten, sometimes on Tuscan dinnerware, a Tuscan Chicken on Ciabatta from Wendy’s, a Tuscan Chicken Melt from Subway, the $6.99 Tuscan Duo at Olive Garden, and Tuscan Hummus from California Pizza Kitchen. Recently, I watched my friend fill his dog’s bowl with Beneful Tuscan Style Medley dog food. This barely merited a raised eyebrow; I’d already been guilty of feeding my cat Fancy Feast’s White Meat Chicken Tuscany. Why deprive our pets of the pleasures of Tuscan living?

In “Tuscan Leather,” from 2013, Drake raps, “Just give it time, we’ll see who’s still around a decade from now.” Whoever among us is still here, it seems certain that we will still be living with the insidious and inescapable word “Tuscan,” used as marketing adjective, cultural signifier, life-style choice. And while we may never escape our Tuscan lust, we at least know who’s to blame: Frances Mayes, the author of the memoir “Under the Tuscan Sun,” which recounts her experience restoring an abandoned villa called Bramasole in the Tuscan countryside. The book, published in 1996, spent more than two and a half years on the Times best-seller list and, in 2003, inspired a hot mess of a film adaptation starring Diane Lane. In the intervening years, Mayes has continued to put out Tuscan-themed books at a remarkable rate—“Bella Tuscany,” “Bringing Tuscany Home,” “Every Day in Tuscany,” “The Tuscan Sun Cookbook”—as well as her own line of Tuscan wines, olive oils, and even furniture. In so doing, she has managed to turn a region of Italy into a shorthand for a certain kind of bourgeois luxury and good taste. A savvy M.B.A. student should do a case study.

I feel sheepish admitting this, but I have a longtime love-hate relationship with “Under the Tuscan Sun.” Since first reading the book, in the nineties, when I was in my twenties, its success has haunted me, teased me, and tortured me as I’ve forged a career as a food and travel writer who occasionally does stories about Italy. I could understand the appeal of Mayes’s memoir to, for instance, my mother, who loves nothing more than to plot the construction of a new dream house. “I come from a long line of women who open their handbags and take out swatches of upholstery,” Mayes writes, “colored squares of bathroom tile, seven shades of paint samples, and strips of flowered wallpaper.” She may as well be speaking directly to my mom and many of her friends. But I was more puzzled by the people my own age who suddenly turned Tuscan crazy—drizzling extra-virgin olive oil on everything, mispronouncing “bruschetta,” pretending to love white beans. In 2002, I was asked to officiate a wedding of family friends in Tuscany, where a few dozen American guests stayed in a fourteenth-century villa that had once been a convent. The villa’s owners were fussy yuppies from Milan who had a long, scolding list of house rules—yet, when we inquired why the electricity went out every day from 2 P.M. to 8 P.M., they shrugged and told us we were uptight Americans. This irritating mix of fussy, casual, and condescending reminded me of the self-satisfied tone of “Under the Tuscan Sun.” I began to despise the villa owners so much that when the brother-in-law of the bride and groom got drunk on Campari and vomited on a fourteenth-century fresco, causing more than a thousand euros in damage, I had a good, long private laugh.

Much of my hangup, let’s be clear, had to do with my own jealousy. If only I could afford a lovely villa, I certainly wouldn’t have been so smug! I would think. I would have lived more authentically! But beyond Italy and villas and personal gripes, Mayes’s book cast a long shadow over my generation of food and travel writers. As a young journalist, I quickly realized that editors were not going to give me cushy travel assignments to Italy, and so I began veering slightly off the beaten path, going to Iceland, Nicaragua, Portugal, and other countries that aren’t Italy, in order to sell articles. But the spectre of Mayes found me anyway. Once, in the early two-thousands, when I was trying to sell a book about Iceland, a publisher told me, “You know what you should do? You should buy a house in Iceland. And then fix it up, and live there, and write something like ‘Under the Icelandic Sun.’ ” I never sold a book on Iceland, nor did I sell my other pitches from that period, which were essentially “Under the Portuguese Sun” and “Under the Nicaraguan Sun.” By the late aughts, the mere mention of Mayes’s memoir made me angry. At one point I lashed out against the book in print, calling it “treacly” in an essay that was published days before I encountered Frances Mayes at a wine writers’ conference. I was assigned to sit across from her at the opening reception. She shook my hand and said, “I read your piece,” then ignored me for the rest of the dinner.

by Jason Wilson, New Yorker |  Read more:
Image: Touchstone/Everett

Friday, March 18, 2016

The Ballmer Peak

Debriefing Mike Murphy

On a pleasant Super Tuesday afternoon — one of 10 or 11 Super Tuesdays we seem to be having this March — I am standing in the bloated carcass of that much-maligned beast known as The Establishment. In the unmarked suite of a generic mid-Wilshire office building (The Establishment can't be too careful, with all these populists sharpening pitchforks), I have come to Right to Rise, Jeb Bush's $118 million super-PAC, to watch Mike Murphy and his crew pack it in.

If you've been reading your Conventional Wisdom Herald, you know that Murphy, one of the most storied and furiously quick-witted political consultants of the last three decades, has lately been cast as the Titanic skipper who steered Jeb's nine-figure colossus smack into an iceberg. That donor loot helped buy Jeb all of four delegates before he dropped from the race, returning to a quiet life of low-energy contemplation. The Los Angeles Times called Right to Rise "one of the most expensive failures in American political history," which is among the more charitable assessments. (If you ever find yourself in Mike Murphy's position, never, ever look at Twitter.)

In his early career, profilers taking note of his long hair, leather jackets, and loud Hawaiian shirts made Murphy sound like a cross between the wild man of Borneo, Jimmy Buffett, and an unmade futon. These days, his hair is short, and there's a little less of it to account for. He looks more like a shambling film professor, in smart-guy faculty glasses, Lacoste half-zip, and khakis — his loud rainbow-striped socks being the only sartorial tell that he might still, as a Republican elder once told a reporter, be "in need of adult supervision." (...)

Murphy first cracked the political-consultant game back in 1982, cutting political ads from his dorm room and later dropping out of Georgetown's School of Foreign Service, figuring he dodged a career "stamping visas in Istanbul." Since then he's sold one political consultancy and his share of another, and is partner in a third (Revolution), for which he mostly does corporate work. He generally prefers this to campaigns these days, since even though there's accountability to corporate boards, "you don't have to face 22 people who have no experience, telling you how to do your job from their safe Twitter perch in journalism."

Murphy's clients have won around two dozen Senate and gubernatorial races (everyone from John Engler to Mitt Romney to Christie Todd Whitman to Arnold Schwarzenegger). If you notice a theme, it's that he often helps Republicans win in Democratic states. Likewise, he's played a major role in assisting three losing presidential candidates (McCain, Lamar! Alexander, and Jeb!). If you again notice a theme, it's that his presidential candidates sometimes seem more excited about their first names than the electorate does.

Like all hired guns in his trade, he's taken his share of mercenary money just for the check. But Murphy says when it comes to presidentials, he thinks it matters more and is a sucker for long shots. "I have friends I believe in who want to run. I'm a romantic, so I keep falling for that pitch." Jeb wasn't exactly a long shot, I remind him. Like hell he wasn't, says Murphy. It's a hard slog, not being a Grievance Candidate this year. "He was the guy who was handing out policy papers when Trump was handing out broken bottles."

Since a candidate is not permitted by law to discuss campaign specifics with his super-PAC once he declares, a law Murphy vows was strictly observed ("I'm too pretty to go to jail"), I ask him what he would've told Jeb during the campaign had he been allowed to. Over the years, Murphy has forged a reputation of telling his candidates the truth, no matter how bitter the medicine. (He once had to tell a congressional client that his toupee was unconvincing.) Though Murphy's tongue is usually on a hair-trigger, he stops and ponders this question for a beat. He then says he would've told Jeb, "What the f — were we thinking?"

Even pre-campaign, however, when they were allowed to coordinate as Right to Rise was amassing its unprecedented war chest, well before Trump's ascendancy, both knew that despite the media billing Bush the prohibitive favorite — a position they both detested — they were facing long odds. (The assumption was Ted Cruz would be occupying the anger-candidate slot that Trump has instead so ably filled.)

Murphy says Bush regarded this election as a necessary tussle between the politics of optimism and grievance. At a preseason dinner, Murphy gave Bush his best guess of their chances of winning — under 50 percent. "He grinned," Murphy says, "and named an even lower number. I remember leaving the dinner with a mix of great pride in Jeb's principled courage and with a sense of apprehension about the big headwinds we would face." And though he'd also have told his friend, if he'd been allowed to speak to him, that he was proud of Jeb "for fighting his corner," ultimately, Murphy admits, "there is no campaign trick or spending level or candidate whisperer that can prevent a party from committing political suicide if it wants to."

by Matt Labash, New Republic |  Read more:
Image: Gary Locke

A History of the Amiga, Part 1: Genesis


[ed. My first computer was an Amiga 1000. As the joke goes, it was so far ahead of its time not even Commodore knew how to market it.]

The Amiga computer was a dream given form: an inexpensive, fast, flexible multimedia computer that could do virtually anything. It handled graphics, sound, and video as easily as other computers of its time manipulated plain text. It was easily ten years ahead of its time. It was everything its designers imagined it could be, except for one crucial problem: the world was essentially unaware of its existence.

With personal computers now playing such a prominent role in modern society, it's surprising to discover that a machine with most of the features of modern PCs actually first came to light back in 1985. Almost without exception, the people who bought and used Amigas became diehard fans. Many of these people would later look back fondly on their Amiga days and lament the loss of the platform. Some would even state categorically that despite all the speed and power of modern PCs, the new machines have yet to capture the fun and the spirit of their Amiga predecessors. A few still use their Amigas, long after the equivalent mainstream personal computers of the same vintage have been relegated to the recycling bin. Amiga users, far more than any other group, were and are extremely passionate about their platform.

So if the Amiga was so great, why did so few people hear about it? The world has plenty of books about the IBM PC and its numerous clones, and even a large library about Apple Computer and the Macintosh platform. There are also many books and documentaries about the early days of the personal computing industry. A few well-known examples are the excellent book Accidental Empires (which became a PBS documentary called Triumph of the Nerds) and the seminal work Fire in the Valley (which became a TV movie on TNT entitled Pirates of Silicon Valley).

These works tell an exciting tale about the early days of personal computing, and show us characters such as Bill Gates and Steve Jobs battling each other while they were still struggling to establish their new industry and be taken seriously by the rest of the world. They do a great job of telling the story of Microsoft, IBM, and Apple, as well as of other companies that did not survive as those three did. But they mention Commodore and the Amiga rarely and in passing, if at all. Why?

When I first went looking for the corresponding story of the Amiga computer, I came up empty-handed. An exhaustive search for Amiga books came up with only a handful of old technical manuals, software how-to guides, and programming references. I couldn't believe it. Was the story so uninteresting? Was the Amiga really just a footnote in computing history, contributing nothing new and different from the other platforms?

As I began researching, I discovered the answer, and it surprised me even more than the existence of the computer itself. The story of Commodore and the Amiga was, by far, even more interesting than that of Apple or Microsoft. It is a tale of vision, of technical brilliance, dedication, and camaraderie. It is also a tale of deceit, of treachery, and of betrayal. It is a tale that has largely remained untold.

This series of articles attempts to explain what the Amiga was, what it meant to its designers and users, and why, despite its relative obscurity and early demise, it mattered so much to the computer industry. It follows some of the people whose lives were changed by their contact with the Amiga and shows what they are doing today. Finally, it looks at the small but dedicated group of people who have done what many thought was impossible and developed a new Amiga computer and operating system, ten years after the bankruptcy of Commodore. Long after most people had given up the Amiga for dead, these people have given their time, expertise and money in pursuit of this goal.

To many people, these efforts seem futile, even foolish. But to those who understand, who were there and lived through the Amiga at the height of its powers, they do not seem foolish at all.

But the story is about something else as well. More than a tale about a computer maker, this is the story about the age-old battle between mediocrity and excellence, the struggle between merely existing and trying to go beyond expectations. At many points in the story, the struggle is manifested by two sides: the hard-working, idealistic engineers driven to the bursting point and beyond to create something new and wonderful, and the incompetent and often avaricious managers and executives who end up destroying that dream. But the story goes beyond that. At its core, it is about people, not just the designers and programmers, but the users and enthusiasts, everyone whose lives were touched by the Amiga. And it is about me, because I count myself among those people, despite being over a decade too late to the party.

All these people have one thing in common. They understand the power of the dream.

by Jeremy Reimer, Ars Technica | Read more:
Image: Commodore

Thursday, March 17, 2016

Buddy Guy

The Mattering Instinct

We can’t pursue our lives without thinking that our lives matter—though one has to be careful here to distinguish the relevant sense of “matter." Simply to take actions on the basis of desires is to act as if your life matters. It’s inconceivable to pursue a human life without these kinds of presumptions—that your own life matters to some extent. Clinical depression is when you are convinced that you don’t and will never matter. That’s a pathological attitude, and it highlights, by its pathology, the way in which the mattering instinct normally functions. To be a fully functioning, non-depressed person is to live and to act, to take it for granted that you can act on your own behalf, pursue your goals and projects. And that we have a right to be treated in accord with our own commitment to our lives mattering. We quite naturally flare up into outrage and indignation when others act in violation of the presumption grounding the pursuance of our lives. So this is what I mean by the mattering instinct—that commitment to one’s own life that is inseparable from pursuing a coherent human life.

But I want to distinguish more precisely the relevant sense of “mattering." The commitment to your own mattering is, first of all, not to presume that you cosmically matter—that you matter to the universe. My very firm opinion is that we don’t matter to the universe. The universe is lacking in all attitudes, including any attitude toward us. Of course, the religious point of view is that we do cosmically matter. The universe, as represented by God, takes an attitude toward us. That is not what I’m saying is presumed in the mattering instinct. To presume that one matters isn’t to presume that you matter to the universe, nor is it to presume that you matter more than others. There have been philosophers who asserted that some—for example, people of genius—absolutely matter more than others. Nietzsche asserted this. He said, for example, that all the chaos and misery of the French Revolution was justified because it brought forth the genius of Napoleon. The only justification for a culture, according to Nietzsche, is that it fosters a person who bears all the startling originality of a great work of art. All the non-originals—which are, of course, the great bulk of us—don’t matter. Nietzsche often refers to them as “the botched and the bungled.” According to Nietzsche there is an inequitable distribution of mattering. But I neither mean to be asserting anything religious nor anything Nietzsche-like in talking about our mattering instinct. I reject the one as firmly as the other. In fact, I would argue that the core of the moral point of view is that there is an equitable distribution of mattering among humans. To the extent that any of us matters—and just try living your life without presuming that you do—we all equally matter. (...)

When you figure out what matters to you and what makes you feel like you’re living a meaningful life, you universalize this. Say I’m a scientist and all my feelings about my own mattering are crystalized around my life as a scientist. It’s quite natural to slide from that into thinking that the life of science is the life that matters. Why doesn’t everybody get their sense of meaning from science? That false universalizing takes place quite naturally, imperceptibly, being unconsciously affected by the forces of the mattering map. In different people the need to justify their own sense of mattering slides into the religious point of view and they end up concluding that, without a God to justify human mattering, life is meaningless: Why doesn’t everybody see that the life that matters is the life of religion? That’s false reasoning about mattering as well. These are the things I’m thinking about: What’s justified by the mattering instinct, which itself cannot and need not be justified, and what isn’t justified by it.

Yes, I want to explain the mattering instinct in terms of evolutionary psychology because I think everything about us, everything about human nature, demands an evolutionary explanation. And I do think that the outlines of such an explanation are quite apparent. That I matter, that my life demands the ceaseless attention I give it, is exactly what those genes would have any organism believing, if that organism was evolved enough for belief. The will to survive evolves, in a higher creature like us, into the will to matter. (...)

Science is science and philosophy is philosophy, and it takes a philosopher to do the demarcation. How does science differ from philosophy? That’s not a scientific question. In fact, what science is is not itself a scientific question; what science is is the basic question in the philosophy of science, or at least the most general one.

Here’s what I think science is: Science is this ingenious motley collection of techniques and cognitive abilities that we use in order to try to figure out the questions of what is: What exists? What kind of universe are we living in? How is our universe ontologically furnished? People talk about the scientific method. There’s no method. That makes it sound like it’s a recipe: one, two, three, do this and you’re doing science. Instead, science is a grab bag of different techniques and cognitive abilities: observation, collecting of data, experimental testing, a priori mathematics, theorizing, model simulations; different scientific activities call for different talents, different cognitive abilities.

The abilities and techniques that a geologist who’s collecting samples of soil and rocks to figure out thermal resistance is using, compared to a cognitive scientist who’s figuring out a computer simulation of long-term memory, compared to Albert Einstein performing a thought experiment—what it’s like to ride a lightwave—compared to a string theorist working out the mathematical implications of 11 dimensions of M-theory, compared to a computational biologist sifting through big data in order to spot genomic phenotypes, are all so very different. These are very different cognitive abilities and talents, and they’re all brought together in order to figure out what kind of universe we’re living in, what its constituents are and what the laws of nature governing the constituents are.

Here’s the wonderful trick about science: Given all of these motley attributes, talents, techniques, activities, in order for it to be science, you have to bring reality itself into the picture as a collaborator. Science is a way to prod reality to answer us back when we’re getting it wrong. It’s an amazing thing that we’ve figured out how to do it and it’s a good thing too because our intuitions are askew. Why shouldn’t they be? We’re just evolved apes, as we know through science. Our views about space and time, causality, individuation are all off. If we hadn’t developed an enterprise whose whole point is to prod reality to answer us back when we’re getting it wrong, we’d never know how out of joint our basic intuitions are.

Science has been able to correct this because no matter how theoretical it is, you have to be able to get predictions out of it. You have to be able to get reality to say, “So you think simultaneity is absolute, do you? No matter which frame of reference you’re measuring it in? Well, we’re just going to see about that.” And you perform the tests and, sure enough, our intuitions are wrong. That’s what science is. If philosophers think that they can compete with that, they're off their rockers.

That’s the mistake that a lot of scientists make. I call them philosophy jeerers—the ones who just dismiss philosophy as having nothing to add, because they think that philosophers are trying to compete with this amazing grab bag that we’ve worked out, the one that gets reality itself to be a collaborator. But there’s more to be done, to be figured out, than just what kind of world we live in, which is the job description of science. In fact, everything I’ve just been saying, in defending science as our best means of figuring out the nature of our universe, hasn’t been science at all but rather philosophy, a kind of rewording of what Karl Popper had said.

Karl Popper, a philosopher, coined the term “falsifiability” to try to highlight this all-important ability of science to prod reality into being our collaborator. Popper is the one philosopher that scientists will cite. They like him. He has a very heroic view of scientists: they’re just out to falsify their theories. “A theory that we accept,” he says, “just hasn’t been falsified yet.” On this view, scientists are never egotistically attached to their theories. It’s a very idealized view of science and scientists. No wonder scientists eat Popper up.

One of the things that Popper had said, and this relates very much to this whole idea of beauty in our scientific theories, is that we have to be able to test our theories in order for them to be scientific. But in our whole way of framing our theories, in the questions that we want to solve, and in the data that we’re interested in looking at—particularly in theory formation—there are certain metaphysical presumptions that we bring with us in order to do science at all. They can’t be validated by science, but they’re implicit in the very carrying on of science. That there are metaphysical presumptions that go into theory formation is an aspect of Popper’s description of science that most scientists forget he ever said.

One of these is that nature is law-like. If we find some anomaly, some contradiction to an existing law, we don’t say, “Oh, well, maybe nature just isn’t law-like. Maybe this was a miracle.” No. We say that we got the laws wrong and go back to the drawing board. Newtonian physics gets corrected, shown to be only a limiting case under the more general relativistic physics. We’re always presuming that nature is law-like in order to do science at all. We also bring with us our intuitions about beauty and, all things being equal, if we have two theories that are adequate to all the empirical evidence we have, we go with the one that’s more elegant, more beautiful. Usually that means more mathematically beautiful. That can be a very strong metaphysical ingredient in the formation of our theories.

It was particularly dramatic in Einstein that he had these very strong views of the beauty and harmony of the laws of nature, and that was utilized in general relativity. General relativity was published in 1915. It had to wait until 1919, when Eddington went to Africa and took pictures of the solar eclipse, for some empirical validation to be established. Sure enough, light waves were bent because of the mass of the sun; gravity distorted the geometry of space-time.
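
[ed. For scale, here is the number Eddington’s plates were checking: general relativity predicts that starlight grazing the edge of the sun is deflected by an angle of 4GM/(c^2 R), about 1.75 arcseconds, twice the Newtonian value. A quick back-of-the-envelope check, using standard rounded constants:]

import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30    # mass of the sun, kg
c = 2.998e8     # speed of light, m/s
R = 6.957e8     # radius of the sun, m (light just grazing the limb)

deflection_rad = 4 * G * M / (c**2 * R)   # general-relativistic deflection angle
print(f"{math.degrees(deflection_rad) * 3600:.2f} arcseconds")   # ~1.75, the value the 1919 eclipse measurements confirmed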

This was the first empirical verification of general relativity; there was nothing before then. Einstein had jokingly said to somebody that if the empirical evidence had not validated his theory, he would’ve felt sorry for the dear Lord. He said to Hans Reichenbach, a philosopher of science and a physicist, that he knew before the empirical validation arrived in 1919 that the theory had to be true because it was too beautiful and elegant not to be true. That’s a very strong intuition, a metaphysical intuition that informed his formulation of the theory, which is exactly the kind of thing that Popper was talking about.

The laws of nature are elegant, which usually means mathematically elegant. We’re moved by this. You can’t learn the relativity theory and not be moved by the beauty of it.

Look, there are people who say string theory is not science until you can somehow get reality to answer us back: that it’s not science, it’s metaphysics. This is an argument people make.

The notion of the multiverse is similar: it certainly seems hard to get any empirical evidence for parallel universes, and yet it’s a very elegant way of answering a lot of questions, like the so-called fine-tuning of the physical constants. These are places in which science might be slipping over into philosophy. What we have to do is just keep working away at it, and perhaps we’ll be able to figure out an ingenious way for reality to answer us back.

by Rebecca Newberger Goldstein, Edge | Read more:
Image: uncredited