Saturday, March 19, 2016

Depends On Your Point of View

Last night I came upon a new exhibit in my running critique. I will show it to you, and then try to interpret what it means. It happened on a program where “he said, she said” and “we’ll have to leave it there” are a kind of house style: The Newshour on PBS. (Link.) Let’s set the scene:

* A big story: the poisoning of Flint, Michigan’s water supply— a major public health disaster.
* Latest news: the House Committee on Oversight and Government Reform held a hearing at which Michigan Governor Rick Snyder, a Republican, and EPA Administrator Gina McCarthy, an Obama appointee, both testified.
* Outcome: They were ritualistically denounced and told to resign by members of Congress in the opposing party. (Big surprise.)
* Cast of characters in the clip I’m about to show you: Judy Woodruff of the Newshour is host and interviewer. David Shepardson is a Reuters reporter in the Washington bureau who has been covering the Flint disaster. (Formerly of the Detroit News and a Michigan native.) Marc Edwards is a civil and environmental engineer and professor at Virginia Tech. (“He’s widely credited with helping to expose the Flint water problems. He testified before the same House committee earlier this week.”)

Now watch what happens when Woodruff asks the Reuters reporter: who bears responsibility for the water crisis in Flint? Which individual or agency is most at fault here? (The part I’ve isolated is 2:22.)

Here is what I saw. What did you see?


The Reuters journalist defaults on the question he was asked. He cannot name a single agency or person who is responsible. The first thing and the last thing he says is “depends on your point of view.” These are weasel words. In between he manages to isolate the crucial moment — when the state of Michigan failed to add “corrosion control” to water drawn from the Flint River — but he cannot say which official or which part of government is responsible for that lapse. Although he’s on the program for his knowledge of a story he’s been reporting on for months, the question of where responsibility lies seems to flummox and decenter him. He implies that he can’t answer because there actually is no answer, just the clashing points of view.

Republicans in Congress scream at Obama’s EPA person: you failed! Democrats in Congress scream at a Republican governor: you failed! Our reporter on the scene shrugs, as if to say: take your pick, hapless citizens! His actual words: “Splitting up the blame depends on your point of view.”

This is a sentiment that Judy Woodruff, who is running the show, can readily understand. He’s talking her language when he says “depends on your point of view.” That is just the sort of down-the-middle futility that PBS Newshour traffics in. Does she press him to do better? Does she say, “Our viewers want to know: how can such a thing happen in the United States? You’ve been immersed in the story, can you at least tell us where to look if we’re searching for accountability?” She does not. Instead, she sympathizes with David Shepardson. “It’s impossible to separate it from the politics.” But we’ll try!

For the try she has to turn to the academic on the panel, who then gives a little master class in how to answer the question: who is at fault here? Here are the points Marc Edwards of Virginia Tech makes:

* Governor Snyder failed to listen to the people of Flint when they complained about the water.
* Snyder trusted too much in the Michigan Department of Environmental Quality and the EPA.
* He has accepted some blame for these failures, calling the Flint water crisis his Katrina.
* EPA, by contrast, has been evading responsibility for its part in the scandal.
* EPA called the report by its own whistleblower “inconclusive” when it really wasn’t.
* The agency hesitated and doubted itself when it came to enforcing federal law. WTF?
* EPA said it had been “strong-armed” by the state officials as if they had more authority than the Federal government.

Who is responsible? That was the question on the PBS table. If we listen to the journalist on the panel we learn: “it depends on which team you’re on,” and “they’re all playing politics,” and “it’s impossible to separate truth from spin.”

Professor Marc Edwards, more confident in his ability to speak truth to power, cuts through all that crap: There are different levels of failure and layers of responsibility here, he says. Some people are further along than others in admitting fault. Yes, it’s complicated — as real life usually is — but that doesn’t mean it’s impossible to assign responsibility. Nor does responsibility lie in one person’s lap or one agency’s hands. Multiple parties are involved. But when people who have some responsibility obfuscate, that’s outrageous. And it has to be called out.

Now I ask you: who’s in the ivory tower here? The journalist or the academic?

I know what you’re thinking, PBS Newshour people. Hey, we’re the ones who booked Marc Edwards on our show and let him run with it. That’s good craft in broadcast journalism! Fair point, Newshour people. All credit to you for having him on. Good move. Full stop.

What interests me here is the losing gambit and musty feel of formulaic, down-the-middle journalism. The misplaced confidence of the correspondent positioning himself between warring parties. The spectacle of a Reuters reporter, steeped in the particulars of the case, defaulting on the basic question of who is responsible. The forfeiture of Fourth Estate duties to other, adjacent professions. The union with gridlock and hopelessness represented in those weasel words: “depends on your point of view.” The failure of nerve when Judy Woodruff lets a professional peer dodge her question— a thing they chortle about and sneer at when politicians do it. The contribution that “not our job” journalists make to unaccountable government, and to public cynicism. The bloodlessness and lack of affect in the journalist commenting on the Flint crisis, in contrast to the academic who is quietly seething.

by Jay Rosen, Press Think |  Read more:
Image: YouTube

Motion Design is the Future of UI

Wokking the Suburbs


As he stepped woozily into the first American afternoon of his life, the last thing my father wanted to do was eat Chinese food. He scanned the crowd for the friend who’d come from Providence (my father would stay with this friend for a few weeks before heading to Amherst to begin his graduate studies). That friend didn’t know how to drive, however, so he promised to buy lunch for another friend in exchange for a ride to the Boston airport. The two young men greeted my father at the gate, exchanged some backslaps, and rushed him to the car, where they stowed the sum total of his worldly possessions in the trunk and folded him into the backseat. Then they gleefully set off for Boston’s Chinatown, a portal back into the world my father (and these friends before him) had just left behind. Camaraderie and goodwill were fine enough reasons to drive hours to fetch someone from the airport; just as important was the airport’s proximity to food you couldn’t get in Providence.

He remembers nothing about the meal itself. He was still nauseous from the journey—Taipei to Tokyo to Seattle to Boston—and, after all, he’d spent every single day of the first twenty-something years of his life eating Chinese food.

“For someone who had just come from Taiwan, it was no good. For someone who came from Providence, it must have been very good!” he laughs.

When my mother came to the United States a few years after my father (Taipei-Tokyo-San Francisco), the family friends who picked her up at least had the decency to wait a day and allow her to find her legs before taking her to a restaurant in the nearest Chinatown.

“I remember the place was called Jing Long, Golden Dragon. Many years later there was a gang massacre in there,” she casually recalls. “I still remember the place. It was San Francisco’s most famous. The woman who brought me was very happy but I wasn’t hungry. Of course, they always think if you come from Taiwan or China you must be hungry for Chinese food.”

It was the early 1970s, and my parents had each arrived in the United States with only a vague sense of what their respective futures held, beyond a few years of graduate studies. They certainly didn’t know they would be repeating these treks in the coming decades, subjecting weary passengers (namely, me) to their own long drives in search of Chinese food. I often daydream about this period of their lives and imagine them grappling with some sense of terminal dislocation, starving for familiar aromas, and regretting the warnings of their fellow new Americans that these were the last good Chinese spots for the next hundred or so miles. They would eventually meet and marry in Champaign-Urbana, Illinois (where they acquired a taste for pizza), and then live for a spell in Texas (where they were told that the local steak house wasn’t for “their kind”), before settling in suburban California. Maybe this was what it meant to live in America. You could move around. You were afforded opportunities unavailable back home. You were free to go by “Eric” at work and name your children after US presidents. You could refashion yourself a churchgoer, a lover of rum-raisin ice cream, an aficionado of classical music or Bob Dylan, a fan of the Dallas Cowboys because everyone else in the neighborhood seemed to be one. But for all the opportunities, those first days in America had prepared them for one reality: sometimes you had to drive great distances in order to eat well. (...)

Suburbs are seen as founts of conformity, but they are rarely places beholden to tradition. Nobody goes to the suburbs on a vision quest—most are drawn instead by the promise of ready-made status, a stability in life modeled after the stability of neat, predictable blocks and gated communities. And yet, a suburb might also be seen as a slate that can be perpetually wiped clean to accommodate new aspirations.

There remain vestiges of what stood before, and these histories capture the cyclical aspirations that define the suburb: Cherry Tree Lane, where an actual orchard was once the best possible use of free acreage; the distinctive, peaked roof of a former Sizzler turned dim sum spot; the Hallmark retailer, all windows and glass ledges, that is now a noodle shop; and the kitschy railroad-car diner across the street that’s now another noodle shop. But Cupertino was still in transition throughout the 1980s and early 1990s. Monterey Park, hundreds of miles to our south, was the finished article.

All suburban Chinatowns owe something to Frederic Hsieh, a young realtor who regarded Monterey Park and foresaw the future. He began buying properties all over this otherwise generic community in the mid-1970s and blitzed newspapers throughout Taiwan and Hong Kong with promises of a “Chinese Beverly Hills” located a short drive from Los Angeles’s Chinatown. While there had been a steady stream of Chinese immigrants over the previous decade, Hsieh guessed that the uncertain political situation in Asia combined with greater business opportunities in the United States would bring more of them to California. Instead of the cramped, urban Chinatowns in San Francisco or Flushing, Hsieh wanted to offer these newcomers a version of the American dream: wide streets, multicar garages, good schools, minimal culture shock, and a short drive to Chinatown. In 1977, he invited twenty of the city’s most prominent civic and business leaders to a meeting over lunch (Chinese food, naturally) and explained that he was building a “modern-day mecca” for the droves of Chinese immigrants on their way. This didn’t go over so well with some of Monterey Park’s predominantly white establishment, who mistook his bluster for arrogance. As a member of the city’s Planning Commission later told the Los Angeles Times, “Everyone in the room thought the guy was blowing smoke. Then when I got home I thought, what gall. What ineffable gall. He was going to come into my living room and change my furniture?”

Gall was contagious. The following year, Wu Jin Shen, a former stockbroker from Taiwan, opened Diho Market, Monterey Park’s first Asian grocery. Wu would eventually oversee a chain of stores with four hundred employees and $30 million in sales. Soon after, a Laura Scudder potato-chip factory that had been remade into a Safeway was remade into an Asian supermarket. Another grocery store was refitted with a Pagoda-style roof.

Chinese restaurateurs were the shock troops of Hsieh’s would-be conquest. “The first thing Monterey Park residents noticed were the Chinese restaurants that popped up,” recalled a citizen quoted in a different but no less alarmist piece in the Times. “Then came the three Chinese shopping centers, the Chinese banks, and the Chinese theater showing first-run movies from Hong Kong—with English subtitles.”

In Monterey Park, such audacity (if you wanted to call it that) threatened the community’s stability. Residents offended by, say, the razing of split-level ranch-style homes from the historical 1970s to accommodate apartment complexes drew on their worst instincts to try and push through “Official English” legislation in the mid-1980s. “Will the Last American to Leave Monterey Park Please Bring the Flag?” bumper stickers were distributed.

But this hyperlocal kind of nativism couldn’t turn back the demographic tide. In 1990, Monterey Park became the first city in the continental United States with a majority-Asian population. Yet Monterey Park’s growing citizenry didn’t embody a single sensibility. There were affluent professionals from Taiwan and Hong Kong as well as longtime residents of Los Angeles’s Chinatown looking to move to the suburbs. As Tim Fong, a sociologist who has studied Monterey Park, observed in the Chicago Tribune, “The Chinese jumped a step. They didn’t play the (slow) assimilation game.” This isn’t to say these new immigrants rejected assimilation. They were just becoming something entirely new.

Monterey Park became the first suburb that Chinese people would drive for hours to visit and eat in, for the same reasons earlier generations of immigrants had sought out the nearest urban Chinatown. And the changing population and the wealth they brought with them created new opportunities for all sorts of businesspeople, especially aspiring restaurateurs. The typical Chinese American restaurant made saucy, ostentatiously deep-fried concessions to mainstream appetites, leading to the ever-present rumor that most establishments had “secret menus” meant for more discerning eaters. It might be more accurate to say that most chefs at Chinese restaurants are more versatile than they initially let on—either that or families like mine possess Jedi-level powers of off-the-menu persuasion. But in a place like Monterey Park, the pressure to appeal to non-Chinese appetites disappeared. The concept of “mainstream” no longer held; neck bones and chicken feet and pork bellies and various gelatinous things could pay the bills and then some.

by Hua Hsu, Lucky Peach |  Read more:
Image: Yina Kim

The Secrets of the Wave Pilots

At 0400, three miles above the Pacific seafloor, the searchlight of a power boat swept through a warm June night last year, looking for a second boat, a sailing canoe. The captain of the canoe, Alson Kelen, potentially the world’s last-ever apprentice in the ancient art of wave-piloting, was trying to reach Aur, an atoll in the Marshall Islands, without the aid of a GPS device or any other way-finding instrument. If successful, he would prove that one of the most sophisticated navigational techniques ever developed still existed and, he hoped, inspire efforts to save it from extinction. Monitoring his progress from the power boat were an unlikely trio of Western scientists — an anthropologist, a physicist and an oceanographer — who were hoping his journey might help them explain how wave pilots, in defiance of the dizzying complexities of fluid dynamics, detect direction and proximity to land. More broadly, they wondered if watching him sail, in the context of growing concerns about the neurological effects of navigation-by-smartphone, would yield hints about how our orienteering skills influence our sense of place, our sense of home, even our sense of self.

When the boats set out in the afternoon from Majuro, the capital of the Marshall Islands, Kelen’s plan was to sail through the night and approach Aur at daybreak, to avoid crashing into its reef in the dark. But around sundown, the wind picked up and the waves grew higher and rounder, sorely testing both the scientists’ powers of observation and the structural integrity of the canoe. Through the salt-streaked windshield of the power boat, the anthropologist, Joseph Genz, took mental field notes — the spotlighted whitecaps, the position of Polaris, his grip on the cabin handrail — while he waited for Kelen to radio in his location or, rather, what he thought his location was.

The Marshalls provide a crucible for navigation: 70 square miles of land, total, comprising five islands and 29 atolls, rings of coral islets that grew up around the rims of underwater volcanoes millions of years ago and now encircle gentle lagoons. These green dots and doughnuts make up two parallel north-south chains, separated from their nearest neighbors by a hundred miles on average. Swells generated by distant storms near Alaska, Antarctica, California and Indonesia travel thousands of miles to these low-lying spits of sand. When they hit, part of their energy is reflected back out to sea in arcs, like sound waves emanating from a speaker; another part curls around the atoll or island and creates a confused chop in its lee. Wave-piloting is the art of reading — by feel and by sight — these and other patterns. Detecting the minute differences in what, to an untutored eye, looks no more meaningful than a washing-machine cycle allows a ri-meto, a person of the sea in Marshallese, to determine where the nearest solid ground is — and how far off it lies — long before it is visible.

In the 16th century, Ferdinand Magellan, searching for a new route to the nutmeg and cloves of the Spice Islands, sailed through the Pacific Ocean and named it ‘‘the peaceful sea’’ before he was stabbed to death in the Philippines. Only 18 of his 270 men survived the trip. When subsequent explorers, despite similar travails, managed to make landfall on the countless islands sprinkled across this expanse, they were surprised to find inhabitants with nary a galleon, compass or chart. God had created them there, the explorers hypothesized, or perhaps the islands were the remains of a sunken continent. As late as the 1960s, Western scholars still insisted that indigenous methods of navigating by stars, sun, wind and waves were not nearly accurate enough, nor indigenous boats seaworthy enough, to have reached these tiny habitats on purpose.

Archaeological and DNA evidence (and replica voyages) have since proved that the Pacific islands were settled intentionally — by descendants of the first humans to venture out of sight of land, beginning some 60,000 years ago, from Southeast Asia to the Solomon Islands. They reached the Marshall Islands about 2,000 years ago. The geography of the archipelago that made wave-piloting possible also made it indispensable as the sole means of collecting food, trading goods, waging war and locating unrelated sexual partners. Chiefs threatened to kill anyone who revealed navigational knowledge without permission. In order to become a ri-meto, you had to be trained by a ri-meto and then pass a voyaging test, devised by your chief, on the first try. As colonizers from Europe introduced easier ways to get around, the training of ri-metos declined and became restricted primarily to an outlying atoll called Rongelap, where a shallow circular reef, set between ocean and lagoon, became the site of a small wave-piloting school.

In 1954, an American hydrogen-bomb test less than a hundred miles away rendered Rongelap uninhabitable. Over the next decades, no new ri-metos were recognized; when the last well-known one died in 2003, he left a 55-year-old cargo-ship captain named Korent Joel, who had trained at Rongelap as a boy, the effective custodian of their people’s navigational secrets. Because of the radioactive fallout, Joel had not taken his voyaging test and thus was not a true ri-meto. But fearing that the knowledge might die with him, he asked for and received historic dispensation from his chief to train his younger cousin, Alson Kelen, as a wave pilot.

Now, in the lurching cabin of the power boat, Genz worried about whether Kelen knew what he was doing. Because Kelen was not a ri-meto, social mores forced him to insist that he was not navigating but kajjidede, or guessing. The sea was so rough tonight, Genz thought, that even for Joel, picking out a route would be like trying to hear a whisper in a gale. A voyage with this level of navigational difficulty had never been undertaken by anyone who was not a ri-meto or taking his test to become one. Genz steeled himself for the possibility that he might have to intervene for safety’s sake, even if this was the best chance that he and his colleagues might ever get to unravel the scientific mysteries of wave-piloting — and Kelen’s best chance to rally support for preserving it. Organizing this trip had cost $72,000 in research grants, a fortune in the Marshalls.

The radio crackled. ‘‘Jebro, Jebro, this is Jitdam,’’ Kelen said. ‘‘Do you copy? Over.’’

Genz swallowed. The cabin’s confines, together with the boat’s diesel odors, did nothing to allay his motion sickness. ‘‘Copy that,’’ he said. ‘‘Do you know where you are?’’

Though mankind has managed to navigate itself across the globe and into outer space, it has done so in defiance of our innate way-finding capacities (not to mention survival instincts), which are still those of forest-dwelling homebodies. Other species use far more sophisticated cognitive methods to orient themselves. Dung beetles follow the Milky Way; the Cataglyphis desert ant dead-reckons by counting its paces; monarch butterflies, on their thousand-mile, multigenerational flight from Mexico to the Rocky Mountains, calculate due north using the position of the sun, which requires accounting for the time of day, the day of the year and latitude; honeybees, newts, spiny lobsters, sea turtles and many others read magnetic fields. Last year, the fact of a ‘‘magnetic sense’’ was confirmed when Russian scientists put reed warblers in a cage that simulated different magnetic locations and found that the warblers always tried to fly ‘‘home’’ relative to whatever the programmed coordinates were. Precisely how the warblers detected these coordinates remains unclear. As does, for another example, the uncanny capacity of godwits to hatch from their eggs in Alaska and, alone, without ever stopping, take off for French Polynesia. Clearly they and other long-distance migrants inherit a mental map and the ability to constantly recalibrate it. What it looks like in their mind’s eye, however, and how it is maintained day and night, across thousands of miles, is still a mystery. (...)

Genz met Alson Kelen and Korent Joel in Majuro in 2005, when Genz was 28. A soft-spoken, freckled Wisconsinite and former Peace Corps volunteer who grew up sailing with his father, Genz was then studying for a doctorate in anthropology at the University of Hawaii. His adviser there, Ben Finney, was an anthropologist who helped lead the voyage of Hokulea, a replica Polynesian sailing canoe, from Hawaii to Tahiti and back in 1976; the success of the trip, which involved no modern instrumentation and was meant to prove the efficacy of indigenous ships and navigational methods, stirred a resurgence of native Hawaiian language, music, hula and crafts. Joel and Kelen dreamed of a similar revival for Marshallese sailing — the only way, they figured, for wave-piloting to endure — and contacted Finney for guidance. But Finney was nearing retirement, so he suggested that Genz go in his stead. With their chief’s blessing, Joel and Kelen offered Genz rare access, with one provision: He would not learn wave-piloting himself; he would simply document Kelen’s training.

Joel immediately asked Genz to bring scientists to the Marshalls who could help Joel understand the mechanics of the waves he knew only by feel — especially one called di lep, or backbone, the foundation of wave-piloting, which (in ri-meto lore) ran between atolls like a road. Joel’s grandfather had taught him to feel the di lep at the Rongelap reef: He would lie on his back in a canoe, blindfolded, while the old man dragged him around the coral, letting him experience how it changed the movement of the waves.

But when Joel took Genz out in the Pacific on borrowed yachts and told him they were encountering the di lep, he couldn’t feel it. Kelen said he couldn’t, either. When oceanographers from the University of Hawaii came to look for it, their equipment failed to detect it. The idea of a wave-road between islands, they told Genz, made no sense.

Privately, Genz began to fear that the di lep was imaginary, that wave-piloting was already extinct. On one research trip in 2006, when Korent Joel went below deck to take a nap, Genz changed the yacht’s course. When Joel awoke, Genz kept Joel away from the GPS device, and to the relief of them both, Joel directed the boat toward land. Later, he also passed his ri-meto test, judged by his chief, with Genz and Kelen crewing.

Worlds away, the physicist John Huth, a worrier by nature, had become convinced that preserving mankind’s ability to way-find without technology was not just an abstract mental exercise but also a matter of life and death. In 2003, while Huth was kayaking alone in Nantucket Sound, fog descended, and he — spring-loaded and boyish, with a near-photographic memory — found his way home using local landmarks, the wind and the direction of the swells. Later, he learned that two young undergraduates, out paddling in the same fog, had become disoriented and drowned. This prompted him to begin teaching a class on primitive navigation techniques. When Huth met Genz at an academic conference in 2012 and described the methodology of his search for the Higgs boson and dark energy — subtracting dominant wave signals from a field, until a much subtler signal appears underneath — Genz told him about the di lep, and it captured Huth’s imagination. If it was real, and if it really ran back and forth between islands, its behavior was unknown to physics and would require a supercomputer to model. That a person might be able to sense it bodily amid the cacophony generated by other ocean phenomena was astonishing.

Huth began creating possible di lep simulations in his free time and recruited the help of van Vledder, the oceanographer. Initially, the most puzzling detail of Genz’s translation of Joel’s description was his claim that the di lep connected each atoll and island to all 33 others. That would yield far too many paths for even the most adept wave pilot to memorize. Most of what we know about ocean waves and currents — including what will happen to coastlines as climate change leads to higher sea levels (of special concern to the low-lying Netherlands and Marshall Islands) — comes from models that use global wind and bathymetry data to simulate what wave patterns probably look like at a given place and time. Our understanding of wave mechanics, on which those models are based, is wildly incomplete. To improve them, experts must constantly check their assumptions with measurements and observations. Perhaps, Huth and van Vledder thought, there were di leps in every ocean, invisible roads that no one was seeing because they didn’t know to look.

by Kim Tingley, NY Times |  Read more:
Image: Mark Peterson/Redux

Under the Crushing Weight of the Tuscan Sun

I have sat on Tuscan-brown sofas surrounded by Tuscan-yellow walls, lounged on Tuscan patios made with Tuscan pavers, surrounded by Tuscan landscaping. I have stood barefoot on Tuscan bathroom tiles, washing my hands under Tuscan faucets after having used Tuscan toilets. I have eaten, sometimes on Tuscan dinnerware, a Tuscan Chicken on Ciabatta from Wendy’s, a Tuscan Chicken Melt from Subway, the $6.99 Tuscan Duo at Olive Garden, and Tuscan Hummus from California Pizza Kitchen. Recently, I watched my friend fill his dog’s bowl with Beneful Tuscan Style Medley dog food. This barely merited a raised eyebrow; I’d already been guilty of feeding my cat Fancy Feast’s White Meat Chicken Tuscany. Why deprive our pets of the pleasures of Tuscan living?

In “Tuscan Leather,” from 2013, Drake raps, “Just give it time, we’ll see who’s still around a decade from now.” Whoever among us is still here, it seems certain that we will still be living with the insidious and inescapable word “Tuscan,” used as marketing adjective, cultural signifier, life-style choice. And while we may never escape our Tuscan lust, we at least know who’s to blame: Frances Mayes, the author of the memoir “Under the Tuscan Sun,” which recounts her experience restoring an abandoned villa called Bramasole in the Tuscan countryside. The book, published in 1996, spent more than two and a half years on the Times best-seller list and, in 2003, inspired a hot mess of a film adaptation starring Diane Lane. In the intervening years, Mayes has continued to put out Tuscan-themed books at a remarkable rate—“Bella Tuscany,” “Bringing Tuscany Home,” “Every Day in Tuscany,” “The Tuscan Sun Cookbook”—as well as her own line of Tuscan wines, olive oils, and even furniture. In so doing, she has managed to turn a region of Italy into a shorthand for a certain kind of bourgeois luxury and good taste. A savvy M.B.A. student should do a case study.

I feel sheepish admitting this, but I have a longtime love-hate relationship with “Under the Tuscan Sun.” Since first reading the book, in the nineties, when I was in my twenties, its success has haunted me, teased me, and tortured me as I’ve forged a career as a food and travel writer who occasionally does stories about Italy. I could understand the appeal of Mayes’s memoir to, for instance, my mother, who loves nothing more than to plot the construction of a new dream house. “I come from a long line of women who open their handbags and take out swatches of upholstery,” Mayes writes, “colored squares of bathroom tile, seven shades of paint samples, and strips of flowered wallpaper.” She may as well be speaking directly to my mom and many of her friends. But I was more puzzled by the people my own age who suddenly turned Tuscan crazy—drizzling extra-virgin olive oil on everything, mispronouncing “bruschetta,” pretending to love white beans. In 2002, I was asked to officiate a wedding of family friends in Tuscany, where a few dozen American guests stayed in a fourteenth-century villa that had once been a convent. The villa’s owners were fussy yuppies from Milan who had a long, scolding list of house rules—yet, when we inquired why the electricity went out every day from 2 P.M. to 8 P.M., they shrugged and told us we were uptight Americans. This irritating mix of fussy, casual, and condescending reminded me of the self-satisfied tone of “Under the Tuscan Sun.” I began to despise the villa owners so much that when the brother-in-law of the bride and groom got drunk on Campari and vomited on a fourteenth-century fresco, causing more than a thousand euros in damage, I had a good, long private laugh.

Much of my hangup, let’s be clear, had to do with my own jealousy. If only I could afford a lovely villa, I certainly wouldn’t have been so smug! I would think. I would have lived more authentically! But beyond Italy and villas and personal gripes, Mayes’s book cast a long shadow over my generation of food and travel writers. As a young journalist, I quickly realized that editors were not going to give me cushy travel assignments to Italy, and so I began veering slightly off the beaten path, going to Iceland, Nicaragua, Portugal, and other countries that aren’t Italy, in order to sell articles. But the spectre of Mayes found me anyway. Once, in the early two-thousands, when I was trying to sell a book about Iceland, a publisher told me, “You know what you should do? You should buy a house in Iceland. And then fix it up, and live there, and write something like ‘Under the Icelandic Sun.’ ” I never sold a book on Iceland, nor did I sell my other pitches from that period, which were essentially “Under the Portuguese Sun” and “Under the Nicaraguan Sun.” By the late aughts, the mere mention of Mayes’s memoir made me angry. At one point I lashed out against the book in print, calling it “treacly” in an essay that was published days before I encountered Frances Mayes at a wine writers’ conference. I was assigned to sit across from her at the opening reception. She shook my hand and said, “I read your piece,” then ignored me for the rest of the dinner.

by Jason Wilson, New Yorker |  Read more:
Image: Touchstone/Everett

Friday, March 18, 2016

The Ballmer Peak

Debriefing Mike Murphy

On a pleasant Super Tuesday afternoon — one of 10 or 11 Super Tuesdays we seem to be having this March — I am standing in the bloated carcass of that much-maligned beast known as The Establishment. In the unmarked suite of a generic mid-Wilshire office building (The Establishment can't be too careful, with all these populists sharpening pitchforks), I have come to Right to Rise, Jeb Bush's $118 million super-PAC, to watch Mike Murphy and his crew pack it in.

If you've been reading your Conventional Wisdom Herald, you know that Murphy, one of the most storied and furiously quick-witted political consultants of the last three decades, has lately been cast as the Titanic skipper who steered Jeb's nine-figure colossus smack into an iceberg. That donor loot helped buy Jeb all of four delegates before he dropped from the race, returning to a quiet life of low-energy contemplation. The Los Angeles Times called Right to Rise "one of the most expensive failures in American political history," which is among the more charitable assessments. (If you ever find yourself in Mike Murphy's position, never, ever look at Twitter.)

In his early career, profilers taking note of his long hair, leather jackets, and loud Hawaiian shirts made Murphy sound like a cross between the wild man of Borneo, Jimmy Buffett, and an unmade futon. These days, his hair is short, and there's a little less of it to account for. He looks more like a shambling film professor, in smart-guy faculty glasses, Lacoste half-zip, and khakis — his loud rainbow-striped socks being the only sartorial tell that he might still, as a Republican elder once told a reporter, be "in need of adult supervision." (...)

Murphy first cracked the political-consultant game back in 1982, cutting political ads from his dorm room and later dropping out of Georgetown's School of Foreign Service, figuring he dodged a career "stamping visas in Istanbul." Since then he's sold one political consultancy and his share of another, and is partner in a third (Revolution), for which he mostly does corporate work. He generally prefers this to campaigns these days, since even though there's accountability to corporate boards, "you don't have to face 22 people who have no experience, telling you how to do your job from their safe Twitter perch in journalism."

Murphy's clients have won around two dozen Senate and gubernatorial races (everyone from John Engler to Mitt Romney to Christine Todd Whitman to Arnold Schwarzenegger). If you notice a theme, it's that he often helps Republicans win in Democratic states. Likewise, he's played a major role in assisting three losing presidential candidates (McCain, Lamar! Alexander, and Jeb!). If you again notice a theme, it's that his presidential candidates sometimes seem more excited about their first names than the electorate does.

Like all hired guns in his trade, he's taken his share of mercenary money just for the check. But Murphy says when it comes to presidentials, he thinks it matters more and is a sucker for long shots. "I have friends I believe in who want to run. I'm a romantic, so I keep falling for that pitch." Jeb wasn't exactly a long shot, I remind him. Like hell he wasn't, says Murphy. It's a hard slog, not being a Grievance Candidate this year. "He was the guy who was handing out policy papers when Trump was handing out broken bottles."

Since a candidate is not permitted by law to discuss campaign specifics with his super-PAC once he declares, a law Murphy vows was strictly observed ("I'm too pretty to go to jail"), I ask him what he would've told Jeb during the campaign had he been allowed to. Over the years, Murphy has forged a reputation of telling his candidates the truth, no matter how bitter the medicine. (He once had to tell a congressional client that his toupee was unconvincing.) Though Murphy's tongue is usually on a hair-trigger, he stops and ponders this question for a beat. He then says he would've told Jeb, "What the f — were we thinking?"

Even pre-campaign, however, when they were allowed to coordinate as Right to Rise was amassing its unprecedented war chest, well before Trump's ascendancy, both knew that despite the media billing Bush the prohibitive favorite — a position they both detested — they were facing long odds. (The assumption was Ted Cruz would be occupying the anger-candidate slot that Trump has instead so ably filled.)

Murphy says Bush regarded this election as a necessary tussle between the politics of optimism and grievance. At a preseason dinner, Murphy gave Bush his best guess of their chances of winning — under 50 percent. "He grinned," Murphy says, "and named an even lower number. I remember leaving the dinner with a mix of great pride in Jeb's principled courage and with a sense of apprehension about the big headwinds we would face." And though he'd also have told his friend, if he'd been allowed to speak to him, that he was proud of Jeb "for fighting his corner," ultimately, Murphy admits, "there is no campaign trick or spending level or candidate whisperer that can prevent a party from committing political suicide if it wants to."

by Matt Labash, New Republic |  Read more:
Image: Gary Locke

A History of the Amiga - Part 1: Genesis


[ed. My first computer was an Amiga 1000. As the joke goes, it was so far ahead of its time not even Commodore knew how to market it.]

The Amiga computer was a dream given form: an inexpensive, fast, flexible multimedia computer that could do virtually anything. It handled graphics, sound, and video as easily as other computers of its time manipulated plain text. It was easily ten years ahead of its time. It was everything its designers imagined it could be, except for one crucial problem: the world was essentially unaware of its existence.

With personal computers now playing such a prominent role in modern society, it's surprising to discover that a machine with most of the features of modern PCs actually first came to light back in 1985. Almost without exception, the people who bought and used Amigas became diehard fans. Many of these people would later look back fondly on their Amiga days and lament the loss of the platform. Some would even state categorically that despite all the speed and power of modern PCs, the new machines have yet to capture the fun and the spirit of their Amiga predecessors. A few still use their Amigas, long after the equivalent mainstream personal computers of the same vintage have been relegated to the recycling bin. Amiga users, far more than any other group, were and are extremely passionate about their platform.

So if the Amiga was so great, why did so few people hear about it? The world has plenty of books about the IBM PC and its numerous clones, and even a large library about Apple Computer and the Macintosh platform. There are also many books and documentaries about the early days of the personal computing industry. A few well-known examples are the excellent book Accidental Empires (which became a PBS documentary called Triumph of the Nerds) and the seminal work Fire in the Valley (which became a TV movie on TNT entitled Pirates of Silicon Valley).

These works tell an exciting tale about the early days of personal computing, and show us characters such as Bill Gates and Steve Jobs battling each other while they were still struggling to establish their new industry and be taken seriously by the rest of the world. They do a great job telling the story of Microsoft, IBM, and Apple, as well as of other companies that did not survive as those three did. But they mention Commodore and the Amiga rarely and in passing, if at all. Why?

When I first went looking for the corresponding story of the Amiga computer, I came up empty-handed. An exhaustive search for Amiga books came up with only a handful of old technical manuals, software how-to guides, and programming references. I couldn't believe it. Was the story so uninteresting? Was the Amiga really just a footnote in computing history, contributing nothing new and different from the other platforms?

As I began researching, I discovered the answer, and it surprised me even more than the existence of the computer itself. The story of Commodore and the Amiga was, by far, even more interesting than that of Apple or Microsoft. It is a tale of vision, of technical brilliance, dedication, and camaraderie. It is also a tale of deceit, of treachery, and of betrayal. It is a tale that has largely remained untold.

This series of articles attempts to explain what the Amiga was, what it meant to its designers and users, and why, despite its relative obscurity and early demise, it mattered so much to the computer industry. It follows some of the people whose lives were changed by their contact with the Amiga and shows what they are doing today. Finally, it looks at the small but dedicated group of people who have done what many thought was impossible and developed a new Amiga computer and operating system, ten years after the bankruptcy of Commodore. Long after most people had given up the Amiga for dead, these people have given their time, expertise and money in pursuit of this goal.

To many people, these efforts seem futile, even foolish. But to those who understand, who were there and lived through the Amiga at the height of its powers, they do not seem foolish at all.

But the story is about something else as well. More than a tale about a computer maker, this is the story about the age-old battle between mediocrity and excellence, the struggle between merely existing and trying to go beyond expectations. At many points in the story, the struggle is manifested by two sides: the hard-working, idealistic engineers driven to the bursting point and beyond to create something new and wonderful, and the incompetent and often avaricious managers and executives who end up destroying that dream. But the story goes beyond that. At its core, it is about people, not just the designers and programmers, but the users and enthusiasts, everyone whose lives were touched by the Amiga. And it is about me, because I count myself among those people, despite being over a decade too late to the party.

All these people have one thing in common. They understand the power of the dream.

by Jeremy Reimer, Ars Technica | Read more:
Image: Commodore

Thursday, March 17, 2016

Buddy Guy

The Mattering Instinct

We can’t pursue our lives without thinking that our lives matter—though one has to be careful here to distinguish the relevant sense of “matter." Simply to take actions on the basis of desires is to act as if your life matters. It’s inconceivable to pursue a human life without these kinds of presumptions—that your own life matters to some extent. Clinical depression is when you are convinced that you don’t and will never matter. That’s a pathological attitude, and it highlights, by its pathology, the way in which the mattering instinct normally functions. To be a fully functioning, non-depressed person is to live and to act, to take it for granted that you can act on your own behalf, pursue your goals and projects. And that we have a right to be treated in accord with our own commitment to our lives mattering. We quite naturally flare up into outrage and indignation when others act in violation of the presumption grounding the pursuance of our lives. So this is what I mean by the mattering instinct—that commitment to one’s own life that is inseparable from pursuing a coherent human life.

But I want to distinguish more precisely the relevant sense of “mattering." The commitment to your own mattering is, first of all, not to presume that you cosmically matter—that you matter to the universe. My very firm opinion is that we don’t matter to the universe. The universe is lacking in all attitudes, including any attitude toward us. Of course, the religious point of view is that we do cosmically matter. The universe, as represented by God, takes an attitude toward us. That is not what I’m saying is presumed in the mattering instinct. To presume that one matters isn’t to presume that you matter to the universe, nor is it to presume that you matter more than others. There have been philosophers who asserted that some—for example, people of genius—absolutely matter more than others. Nietzsche asserted this. He said, for example, that all the chaos and misery of the French Revolution was justified because it brought forth the genius of Napoleon. The only justification for a culture, according to Nietzsche, is that it fosters a person who bears all the startling originality of a great work of art. All the non-originals—which are, of course, the great bulk of us—don’t matter. Nietzsche often refers to them as “the botched and the bungled.” According to Nietzsche there is an inequitable distribution of mattering. But I neither mean to be asserting anything religious nor anything Nietzsche-like in talking about our mattering instinct. I reject the one as firmly as the other. In fact, I would argue that the core of the moral point of view is that there is an equitable distribution of mattering among humans. To the extent that any of us matters—and just try living your life without presuming that you do—we all equally matter. (...)

When you figure out what matters to you and what makes you feel like you’re living a meaningful life, you universalize this. Say I’m a scientist and all my feelings about my own mattering are crystalized around my life as a scientist. It’s quite natural to slide from that into thinking that the life of science is the life that matters. Why doesn’t everybody get their sense of meaning from science? That false universalizing takes place quite naturally, imperceptibly, being unconsciously affected by the forces of the mattering map. In different people the need to justify their own sense of mattering slides into the religious point of view and they end up concluding that, without a God to justify human mattering, life is meaningless: Why doesn’t everybody see that the life that matters is the life of religion? That’s false reasoning about mattering as well. These are the things I’m thinking about: What’s justified by the mattering instinct, which itself cannot and need not be justified, and what isn’t justified by it.

Yes, I want to explain the mattering instinct in terms of evolutionary psychology because I think everything about us, everything about human nature, demands an evolutionary explanation. And I do think that the outlines of such an explanation are quite apparent. That I matter, that my life demands the ceaseless attention I give it, is exactly what those genes would have any organism believing, if that organism was evolved enough for belief. The will to survive evolves, in a higher creature like us, into the will to matter. (...)

Science is science and philosophy is philosophy, and it takes a philosopher to do the demarcation. How does science differ from philosophy? That’s not a scientific question. In fact, what science is is not itself a scientific question; what science is is the basic question in the philosophy of science, or at least the most general one.

Here’s what I think science is: Science is this ingenious motley collection of techniques and cognitive abilities that we use in order to try to figure out what is, the questions of what is: What exists? What kind of universe are we living in? How is our universe ontologically furnished? People talk about the scientific method. There’s no method. That makes it sound like it’s a recipe: one, two, three, do this and you’re doing science. Instead, science is a grab bag of different techniques and cognitive abilities: observation, collecting of data, experimental testing, a priori mathematics, theorizing, model simulations; different scientific activities call for different talents, different cognitive abilities.

The abilities and techniques that a geologist who’s collecting samples of soil and rocks to figure out thermal resistance is using, compared to a cognitive scientist who’s figuring out a computer simulation of long-term memory, compared to Albert Einstein performing a thought experiment—what it’s like to ride a lightwave—compared to a string theorist working out the mathematical implications of 11 dimensions of M-theory, compared to a computational biologist sifting through big data in order to spot genomic phenotypes, are all so very different. These are very different cognitive abilities and talents, and they’re all brought together in order to figure out what kind of universe we’re living in, what its constituents are and what the laws of nature governing the constituents are.

Here’s the wonderful trick about science: Given all of these motley attributes, talents, techniques, activities, in order for it to be science, you have to bring reality itself into the picture as a collaborator. Science is a way to prod reality to answer us back when we’re getting it wrong. It’s an amazing thing that we’ve figured out how to do it and it’s a good thing too because our intuitions are askew. Why shouldn’t they be? We’re just evolved apes, as we know through science. Our views about space and time, causality, individuation are all off. If we hadn’t developed an enterprise whose whole point is to prod reality to answer us back when we’re getting it wrong, we’d never know how out of joint our basic intuitions are.

Science has been able to correct this because no matter how theoretical it is, you have to be able to get your predictions. You have to be able to get reality to say, “So you think simultaneity is absolute, do you? no matter which frame of reference you’re measuring it in? Well, we’re just going to see about that.” And you perform the tests and, sure enough, our intuitions are wrong. That’s what science is. If philosophers think that they can compete with that, they're off their rockers.

That’s the mistake that a lot of scientists make. I call them philosophy jeerers—the ones who just dismiss philosophy, that have nothing to add because they think that philosophers are trying to compete with this amazing grab bag that we’ve worked out and that gets reality itself to be a collaborator. But there’s more to be done, to be figured out, than just what kind of world we live in, the job description of science. In fact, everything I’ve just been saying, in defending science as our best means of figuring out the nature of our universe, hasn’t been science at all but rather philosophy, a kind of rewording of what Karl Popper had said.

Karl Popper, a philosopher, coined the term “falsifiability,” to try to highlight the importance of this all-important ability of science to prod reality into being our collaborator. Popper is the one philosopher that scientists will cite. They like him. He has a very heroic view of scientists. They’re just out to falsify their theories. "A theory that we accept," he says, “just hasn’t been falsified yet.” It’s a very heroic view of scientists. They’re never egotistically attached to their theories. A very idealized view of science and scientists. No wonder scientists eat Popper up.

One of the things that Popper had said, and this relates very much to this whole idea of beauty in our scientific theories, is that we have to be able to test our theories in order for them to be scientific. Our whole way of framing our theories and the questions that we want to solve and the data that we’re interested in looking at—particularly in theory formation, there are certain metaphysical presumptions that we bring with us in order to do science at all—they can’t be validated by science, but they’re implicit in the very carrying on of science. That there are metaphysical presumptions that go into theory formation is an aspect of Popper’s description of science that most scientists forget that Popper ever said.

One of these is that nature is law-like. If we find some anomaly, some contradiction to an existing law, we don’t say, “Oh, well, maybe nature just isn’t law-like. Maybe this was a miracle.” No. We say that we got the laws wrong and go back to the drawing board. Newtonian physics gets corrected, shown to be only a limiting case under the more general relativistic physics. We’re always presuming that nature is law-like in order to do science at all. We also bring with us our intuitions about beauty and, all things being equal, if we have two theories that are adequate to all the empirical evidence we have, we go with the one that’s more elegant, more beautiful. Usually that means more mathematically beautiful. That can be a very strong metaphysical ingredient in the formation of our theories.

It was particularly dramatic in Einstein that he had these very strong views of the beauty and harmony of the laws of nature, and that was utilized in general relativity. General relativity was published in 1915. It had to wait until 1919, when Eddington went to Africa and took pictures of the solar eclipse, for some empirical validation to be established. Sure enough, light waves were bent because of the mass of the sun; gravity distorted the geometry of space-time.

This was the first empirical verification that came for general relativity; there was nothing before then. Einstein jokingly had said to somebody that if the empirical evidence had not validated his theory, he would’ve felt sorry for the dear Lord. He said to Hans Reichenbach, a philosopher of science and a physicist, that he knew before the empirical validation arrived in 1919 that the theory had to be true because it was too beautiful and elegant not to be true. That’s a very strong intuition, a metaphysical intuition that informed his formulation of the theory, which is exactly the kind of thing that Popper was talking about.

The laws of nature are elegant, which usually means mathematically elegant. We’re moved by this. You can’t learn the relativity theory and not be moved by the beauty of it.

Look, there are people who say the string theory is not science until you can somehow get reality to answer us back. It’s not science; it’s metaphysics—this is an argument.

The notion of the multiverse: It certainly seems that it’s hard to get any empirical evidence for parallel universes, but yet it’s a very elegant way of answering a lot of questions, like the so-called fine tuning of the physical constants. These are places in which science might be slipping over into philosophy. What we have to just keep doing is working away at it and perhaps we’ll be able to figure out an ingenious way for reality to answer us back.

by Rebecca Newberger Goldstein, Edge | Read more:
Image: uncredited

Tuesday, March 15, 2016

Why Do We Work So Hard?

When John Maynard Keynes mused in 1930 that, a century hence, society might be so rich that the hours worked by each person could be cut to ten or 15 a week, he was not hallucinating, just extrapolating. The working week was shrinking fast. Average hours worked dropped from 60 at the turn of the century to 40 by the 1950s. The combination of extra time and money gave rise to an age of mass leisure, to family holidays and meals together in front of the television. There was a vision of the good life in this era. It was one in which work was largely a means to an end – the working class had become a leisured class. Households saved money to buy a house and a car, to take holidays, to finance a retirement at ease. This was the era of the three-Martini lunch: a leisurely, expense-padded midday bout of hard drinking. This was when bankers lived by the 3-6-3 rule: borrow at 3%, lend at 6%, and head off to the golf course by 3pm.

The vision of a leisure-filled future occurred against the backdrop of the competition against communism, but it is a capitalist dream: one in which the productive application of technology rises steadily, until material needs can be met with just a few hours of work. It is a story of the triumph of innovation and markets, and one in which the details of a post-work world are left somewhat hazy. Keynes, in his essay on the future, reckoned that when the end of work arrived:
For the first time since his creation man will be faced with his real, his permanent problem – how to use his freedom from pressing economic cares, how to occupy the leisure, which science and compound interest will have won for him, to live wisely and agreeably and well.
Karl Marx had a different view: that being occupied by good work was living well. Engagement in productive, purposeful work was the means by which people could realise their full potential. He’s not credited with having got much right about the modern world, but maybe he wasn’t so wrong about our relationship with work.

In those decades after the second world war, Keynes seemed to have the better of the argument. As productivity rose across the rich world, hourly wages for typical workers kept rising and hours worked per week kept falling – to the mid-30s, by the 1970s. But then something went wrong. Less-skilled workers found themselves forced to accept ever-smaller pay rises to stay in work. The bargaining power of the typical blue-collar worker eroded as technology and globalisation handed bosses a whole toolkit of ways to squeeze labour costs. At the same time, the welfare state ceased its expansion and began to retreat, swept back by governments keen to boost growth by cutting taxes and removing labour-market restrictions. The income gains that might have gone to workers, that might have kept living standards rising even as hours fell, that might have kept society on the road to the Keynesian dream, flowed instead to those at the top of the income ladder. Willingly or unwillingly, those lower down the ladder worked fewer and fewer hours. Those at the top, meanwhile, worked longer and longer.

It was not obvious that things would turn out this way. You might have thought that whereas, before, a male professional worked 50 hours a week while his wife stayed at home with the children, a couple of married professionals might instead each opt to work 35 hours a week, sharing more of the housework, and ending up with both more money and more leisure. That didn’t happen. Rather, both are now more likely to work 60 hours a week and pay several people to care for the house and children.

Why? One possibility is that we have all got stuck on a treadmill. Technology and globalisation mean that an increasing number of good jobs are winner-take-most competitions. Banks and law firms amass extraordinary financial returns, directors and partners within those firms make colossal salaries, and the route to those coveted positions lies through years of round-the-clock work. The number of firms with global reach, and of tech start-ups that dominate a market niche, is limited. Securing a place near the top of the income spectrum in such a firm, and remaining in it, is a matter of constant struggle and competition. Meanwhile the technological forces that enable a few elite firms to become dominant also allow work, in the form of those constantly pinging emails, to follow us everywhere.

This relentless competition increases the need to earn high salaries, for as well-paid people cluster together they bid up the price of the resources for which they compete. In the brainpower-heavy cities where most of them live, getting on the property ladder requires the sort of sum that can be built up only through long hours in an important job. Then there is conspicuous consumption: the need to have a great-looking car and a home out of Interiors magazine, the competition to place children in good (that is, private) schools, the need to maintain a coterie of domestic workers – you mean you don’t have a personal shopper? And so on, and on.

The dollars and hours pile up as we aim for a good life that always stays just out of reach. In moments of exhaustion we imagine simpler lives in smaller towns with more hours free for family and hobbies and ourselves. Perhaps we just live in a nightmarish arms race: if we were all to disarm, collectively, then we could all live a calmer, happier, more equal life.

But that is not quite how it is. The problem is not that overworked professionals are all miserable. The problem is that they are not.

Drinking coffee one morning with a friend from my home town, we discuss our fathers’ working habits. Both are just past retirement age. Both worked in an era in which a good job was not all-consuming. When my father began his professional career, the post-war concept of the good life was still going strong. He was a dedicated, even passionate worker. Yet he never supposed that work should be the centre of his life.

Work was a means to an end; it was something you did to earn the money to pay for the important things in life. This was the advice I was given as a university student, struggling to figure out what career to pursue in order to have the best chance at an important, meaningful job. I think my parents were rather baffled by my determination to find satisfaction in my professional life. Life was what happened outside work. Life, in our house, was a week’s holiday at the beach or Pop standing on the sidelines at our baseball games. It was my parents at church, in the pew or volunteering in some way or another. It was having kids who gave you grandkids. Work merely provided more people to whom to show pictures of the grandkids.

This generation of workers, on the early side of the baby boom, is marching off to retirement now. There are things to do in those sunset years. But the hours will surely stretch out and become hard to fill. As I sit with my friend it dawns on us that retirement sounds awful. Why would we stop working?

Here is the alternative to the treadmill thesis. As professional life has evolved over the past generation, it has become much more pleasant. Software and information technology have eliminated much of the drudgery of the workplace. The duller sorts of labour have gone, performed by people in offshore service-centres or by machines. Offices in the rich world’s capitals are packed not with drones filing paperwork or adding up numbers but with clever people working collaboratively.

The pleasure lies partly in flow, in the process of losing oneself in a puzzle with a solution on which other people depend. The sense of purposeful immersion and exertion is the more appealing given the hands-on nature of the work: top professionals are the master craftsmen of the age, shaping high-quality, bespoke products from beginning to end. We design, fashion, smooth and improve, filing the rough edges and polishing the words, the numbers, the code or whatever is our chosen material. At the end of the day we can sit back and admire our work – the completed article, the sealed deal, the functioning app – in the way that artisans once did, and those earning a middling wage in the sprawling service-sector no longer do.

The fact that our jobs now follow us around is not necessarily a bad thing, either. Workers in cognitively demanding fields, thinking their way through tricky challenges, have always done so at odd hours. Academics in the midst of important research, or admen cooking up a new creative campaign, have always turned over the big questions in their heads while showering in the morning or gardening on a weekend afternoon. If more people find their brains constantly and profitably engaged, so much the better.

Smartphones do not just enable work to follow us around; they also make life easier. Tasks that might otherwise require you to stay late in the office can be taken home. Parents can enjoy dinner and bedtime with the children before turning back to the job at hand. Technology is also lowering the cost of the support staff that make long hours possible. No need to employ a full-time personal assistant to run the errands these days: there are apps to take care of the shopping, the laundry and the dinner, walk the dog, fix the car and mend the hole in the roof. All of these allow us to focus ever more of our time and energy on doing what our jobs require of us.

There are downsides to this life. It does not allow us much time with newborn children or family members who are ill; or to develop hobbies, side-interests or the pleasures of particular, leisurely rituals – or anything, indeed, that is not intimately connected with professional success. But the inadmissible truth is that the eclipsing of life’s other complications is part of the reward.

It is a cognitive and emotional relief to immerse oneself in something all-consuming while other difficulties float by. The complexities of intellectual puzzles are nothing to those of emotional ones. Work is a wonderful refuge.

by Ryan Avent, The Economist | Read more:
Image: Izhar Cohen

The Last Island of the Savages

The lumps of white coral shone round the dark mound like a chaplet of bleached skulls, and everything around was so quiet that when I stood still all sound and all movement in the world seemed to come to an end. It was a great peace, as if the earth had been one grave, and for a time I stood there thinking mostly of the living who, buried in remote places out of the knowledge of mankind, still are fated to share in its tragic or grotesque miseries. In its noble struggles too—who knows? The human heart is vast enough to contain all the world. It is valiant enough to bear the burden, but where is the courage that would cast it off?

                                                                            —Joseph Conrad, Lord Jim

Shortly before midnight on August 2, 1981, a Panamanian-registered freighter called the Primrose, which was traveling in heavy seas between Bangladesh and Australia with a cargo of poultry feed, ran aground on a coral reef in the Bay of Bengal. As dawn broke the next morning, the captain was probably relieved to see dry land just a few hundred yards from the Primrose’s resting place: a low-lying island, several miles across, with a narrow beach of clean white sand giving way to dense jungle. If he consulted his charts, he realized that this was North Sentinel Island, a western outlier in the Andaman archipelago, which belongs to India and stretches in a ragged line between Burma and Sumatra. But the sea was too rough to lower the lifeboats, and so—since the ship seemed to be in no danger of sinking—the captain decided to keep his crew on board and wait for help to arrive.

A few days later, a young sailor on lookout duty in the Primrose’s watchtower spotted several people coming down from the forest toward the beach and peering out at the stranded vessel. They must be a rescue party sent by the shipping company, he thought. Then he took a closer look at them. They were small men, well-built, frizzy-haired, and black. They were naked except for narrow belts that circled their waists. And they were holding spears, bows, and arrows, which they had begun waving in a manner that seemed not altogether friendly.

Not long after this, a wireless operator at the Regent Shipping Company’s offices in Hong Kong received an urgent distress call from the Primrose’s captain, asking for an immediate airdrop of firearms so that his crew could defend itself. “Wild men, estimate more than 50, carrying various homemade weapons are making two or three wooden boats,” the message read. “Worrying they will board us at sunset. All crew members’ lives not guaranteed.”

If the Primrose’s predicament seemed a thing less of the twentieth century than of the eighteenth—an episode, perhaps, from Captain Cook’s voyages in the Pacific—it is because the island where the ship lay grounded had somehow managed to slip through the net of history. Although its existence had been known for centuries, its inhabitants had had virtually no contact with the rest of humanity. Anthropologists referred to them as “Sentinelese,” but no one knew what they called themselves—indeed, no one even knew what language they spoke. And in any case, no one within living memory had gotten close enough to ask. Whether the natives’ prelapsarian state was one of savagery or innocence, no one knew either.

The same monsoon-whipped waves that had driven the Primrose onto the reef kept the tribesmen’s canoes at bay, and high winds blew their arrows off the mark. The crew kept up a twenty-four-hour guard with makeshift weapons—a flare gun, axes, some lengths of pipe—as news of the emergency slowly filtered to the outside world. (An Indian government spokesman denied reports in the Hong Kong press that the Sentinelese were “cannibals.” A Hong Kong government spokesman suggested that perhaps the Primrose’s radio officer had “gone bananas.”) After nearly a week, the Indian Navy dispatched a tugboat and a helicopter to rescue the besieged sailors.

The natives of North Sentinel must have watched the whirring aircraft as it hovered three times above the great steel hulk, lowering a rope ladder to pluck the men safely back into modernity. Then the strange machines departed, the sea calmed, and the island remained, lush and impenetrable, still waiting for its Cook or its Columbus.

Epochs of history rarely come to a sudden end, seldom announce their passing with anything so dramatic as the death of a king or the dismantling of a wall. More often, they withdraw slowly and imperceptibly (or at least unperceived), like the ebbing tide on a deserted beach.

That is how the Age of Discovery ended. For more than five hundred years, the envoys of civilization sailed through storms and hacked through jungles, startling in turn one tribe after another of long-lost human cousins. For an instant, before the inevitable breaking of faith, the two groups would face each other, staring—as innocent, both of them, as children, and blameless as if the world had been born afresh. To live such a moment seems, when we think of it now, to have been one of the most profound experiences that our planet in its vanished immensity once offered. But each time the moment repeated itself on each fresh beach, there was one less island to be found, one less chance to start everything anew. It began to repeat itself less and less often, until there came a time, maybe a century ago, when there were only a few such places left, only a few doors still unopened.

Sometime quite recently, the last door opened. I believe it happened not long before the end of the millennium, on an island already all but known, a place encircled by the buzzing, thrumming web of a world still unknown to it, and by the mesh of a history that had forever been drawing closer. (...)

This is how you get to the most isolated human settlement on earth: You board an evening flight at JFK for Heathrow, Air India 112, a plane full of elegant sari-clad women, London-bound businessmen, hippie backpackers. You settle in to watch a movie (a romantic comedy in which Harrison Ford and Anne Heche get stranded on a desert island) and after a quick nap you are in London.

Then you catch another plane. You read yesterday’s Times while flying above the corrugated gullies of eastern Turkey, watch a Hindi musical somewhere over Iran. That night, and for the week that follows, you are in New Delhi, where the smog lies on the ground like mustard gas, and where one day you see an elephant—an elephant!—in the midst of downtown traffic.

From New Delhi you go by train to Calcutta, where you must wait for a ship. And you must wait for a ticket. There are endless lines at the shipping company office, and jostling, and passing back and forth of black-and-white photographs in triplicate and hundred-rupee notes and stacks of documents interleaved with Sapphire brand carbon paper. Next you are on the ship, a big Polish-built steamer crawling with cockroaches. The steamer passes all manner of scenery: slim and fragile riverboats like craft from a pharaoh’s tomb; broad-beamed, lateen-rigged Homeric merchantmen. You watch the sun set into the Bay of Bengal, play cards with some Swedish backpackers, and take in the shipboard video programming, which consists of the complete works of Macaulay Culkin, subtitled in Arabic. On the morning of the sixth day your ship sails into a wide, sheltered bay—steaming jungles off the port bow, a taxi-crowded jetty to starboard—and you have arrived in the Andamans, at Port Blair.

In Port Blair you board a bus, finding a seat beneath a wall-mounted loudspeaker blaring a Hindi cover of “The Macarena Song.” The bus rumbles through the bustling market town, past barefoot men peddling betel nut, past a billboard for the local computer-training school (“I want to become the 21st century’s computer professional”). On the western outskirts you see a sawmill that is turning the Andaman forests into pencils on behalf of a company in Madras, and you see the airport, where workmen are busy extending the runway—out into a field where water buffalo graze—so that in a few years, big jetliners will be able to land here, bringing tour groups direct from Bangkok and Singapore. A little farther on, you pass rice paddies, and patches of jungle, and the Water Sports Training Centre, and thatched huts, and family-planning posters, and satellite dishes craning skyward. And then, within an hour’s time, you are at the ocean again, and on a very clear day you will see the island in the distance, a slight disturbance of the horizon.

by Adam Goodheart, American Scholar | Read more:
Image: Ana Raquel S. Hernandes/Flickr

The World's Top Fighter Pilots Fear This Woman's Voice


All F/A-18 Super Hornet fighter jets come with a female voice that issues greetings and warnings, in tones ranging from stern and sharp to extremely urgent. It doesn't matter if the pilot is wearing a Malaysian, Kuwaiti, or Australian flag on his flight suit; the airplane speaks in a Tennessee twang that sounds a lot like Loretta Lynn in the middle of a very bad day. Embark on a miscue, and the jet issues an audible correction: “Roll right! Roll right!” or “Pull up! Pull up!”

U.S. Air Force pilots refer to the voice of the Super Hornet as “Bitchin' Betty,” while among Britain's Royal Air Force she is known as “Nagging Nora.” But a real woman, 60-year-old Leslie Shook, personifies the aircraft, and she recently retired after 35 years as an employee of Boeing Co. “I knew I had an accent which I did not think was desirable in the plane,” Shook said in an interview, the voice familiar to generations of fighter pilots coming in clear over a telephone. “No one ever said anything about it. I was my own worst critic as far as that goes.”

After powering up the F/A-18 and hearing Shook's greeting, a pilot won't typically hear much from her again unless the situation gets serious. You might be in danger of flying into a mountain, triggering a warning recorded by Shook. Or perhaps you have just drained half the fuel supply for the mission, in which case you will hear her repeat: “Bingo. Bingo.”

Nearly every aircraft has its own voice. The first digital voice in a U.S. combat jet was that of Kim Crow, a professional actor who still does voice-over work. For whatever reason, women’s voices have been common in fighter jets and numerous civilian aircraft.

Shook worked in St. Louis for McDonnell Douglas, which Boeing acquired in 1997. McDonnell was among the first to use voice commands on the flight deck, for both civilian and military jets, and the company favored women for the job.

Shook’s involvement with the Hornet came about by happenstance, as one more job to record following a long day in her work as a video-services coordinator for the defense contractor. That meant she helped arrange such things as video shoots, photography, audio recordings, television commercials, and speaking events.

In the mid-1990s, when an F/A-18 customer requested a voice command for the jet’s ground-avoidance system, the company arranged a recording session. Several people were involved, including a Navy lieutenant colonel, but the voice of the woman recording the command wasn’t suitable.

“They did not like her voice; it was too sweet for the airplane,” Shook recalled. She was feeling tired and hungry that evening, ready to get home, and she stepped in with some voice advice. “I explained to them that Betty has a cadence, a sharpness to get your attention.”

The Navy officer suggested Shook do the recording, and the fighter jet quickly had its digital scold.

by Justin Bachman, Bloomberg | Read more:
Image: Boeing Co. and U.S. Air Force photo/Staff Sgt. Ben Fulton

Monday, March 14, 2016

In a Hail of Bullets and Fire

In late 2013, Jang Song-thaek, an uncle of Kim Jong-un, the North Korean leader, was taken to the Gang Gun Military Academy in a Pyongyang suburb.

Hundreds of officials were gathered there to witness the execution of Mr. Jang’s two trusted deputies in the administrative department of the ruling Workers’ Party.

The two men, Ri Ryong-ha and Jang Su-gil, were torn apart by antiaircraft machine guns, according to South Korea’s National Intelligence Service. The executioners then incinerated their bodies with flamethrowers.

Jang Song-thaek, widely considered the second-most powerful figure in the North, fainted during the ordeal, according to a new book published in South Korea that offers a rare glimpse into the secretive Pyongyang regime.

“Son-in-Law of a Theocracy,” by Ra Jong-yil, a former deputy director of the National Intelligence Service, is a rich biography of Mr. Jang, the most prominent victim of the purges his young nephew has conducted since assuming power in 2011.

Mr. Jang was convicted of treason in 2013. He was executed at the same place and in the same way as his deputies, the South Korean intelligence agency said.

The book asserts that although he was a fixture of the North Korean political elite for decades, he dreamed of reforming his country. “With his execution, North Korea lost virtually the only person there who could have helped the country introduce reform and openness,” Mr. Ra said during a recent interview.

Mr. Ra, who is also a professor of political science and a former South Korean ambassador to Japan and Britain, mined existing publications but also interviewed sources in South Korea, Japan and China, including high-ranking defectors from the North who spoke on the condition of anonymity.

Mr. Jang met one of the daughters of North Korea’s founder, Kim Il-sung, while both attended Kim Il-sung University in the mid-1960s. The daughter, Kim Kyong-hee, developed a crush on Mr. Jang, who was tall and humorous — and sang and played the accordion.

Her father transferred the young man to a provincial college to keep the two apart. But Ms. Kim hopped in her Soviet Volga sedan to see Mr. Jang each weekend.

Once they married in 1972, Mr. Jang’s career took off under the patronage of Kim Jong-il, his brother-in-law and the designated successor of the regime.

In his memoir, a Japanese sushi chef who cooked for Kim Jong-il from 1988 to 2001 and goes by the alias Kenji Fujimoto remembered Mr. Jang as a fun-loving prankster who was a regular at banquets that could last until morning or even stretch a few days. A key feature of the events was a “pleasure squad” of young, attractive women who would dance the cancan, sing American country songs or perform a striptease, according to the book and accounts by defectors.

Mr. Jang also mobilized North Korean diplomats abroad to import Danish dairy products, Black Sea caviar, French cognac and Japanese electronics — gifts Mr. Kim handed out during his parties to keep his elites loyal.

But North Korean diplomats who have defected to South Korea also said that during his frequent trips overseas to shop for Mr. Kim, Mr. Jang would drink heavily and speak dejectedly about people dying of hunger back home.

Few benefited more than Mr. Jang from the regime he loyally served. But he was never fully embraced by the Kim family because he was not blood kin. This “liminal existence” enabled him to see the absurdities of the regime more clearly than any other figure within it, Mr. Ra wrote.

Mr. Ra said Hwang Jang-yop, a North Korean party secretary who defected to Seoul in 1997 and lived here until his death in 2010, shared a conversation he once had with Mr. Jang. When told that the North’s economy was cratering, Mr. Jang responded sarcastically: “How can an economy already at the bottom go further down?”

Mr. Jang’s frequent partying with the “pleasure squad” strained his marriage. Senior defectors from the North said it was an open secret among the Pyongyang elite that the couple both had extramarital affairs.

Their only child, Jang Kum-song, killed herself in Paris in 2006. She overdosed on sleeping pills after the Pyongyang government caught wind of her dating a Frenchman and summoned her home.

Still, the marriage endured. When Kim Jong-il banished Mr. Jang three times for overstepping his authority, his wife intervened on his behalf.

After Mr. Kim suffered a stroke in 2008 and died in 2011, Mr. Jang helped his young nephew, Kim Jong-un, establish himself as successor. At the same time, he vastly expanded his own influence — and ambition.

by Choe Sang-Hun, NY Times | Read more:
Image: Kyodo, via Reuters

What Would It Mean To Have A 'Hapa' Bachelorette?

On a recent episode of The Bachelor, the ABC dating reality show that ends its 20th season Monday night, contestant Caila Quinn brings Ben Higgins home to meet her interracial family.

"Have you ever met Filipinos before?" Quinn's mother asks, leading Higgins into a dining room where the table is filled with traditional Filipino food.

"I don't know," he replies. "No. I don't think so."

As they sit around the adobo and pancit, Quinn's father talks to Higgins, white man to white man. What comes with dating Quinn, the father says, "is a very special Philippine community." Quinn grimaces.

"I had no idea what I was getting into when I married Caila's mother," the father says. But being married to a Filipina, he assures Higgins, has been "the most fun" and "magical."

This scene can be read as an attempt by The Bachelor franchise to dispel criticisms (and the memory of a 2012 lawsuit) concerning its whitewashed casts. It shows how these attempts can be clunky at best, offensive and creepy at worst.

Quinn's run also demonstrates how, as this rose-strewn, fantasy-fueled romance machine tries to include more people of color, diversification looks like biracial Asian-American — often known as "hapa" — women.

Among the 19 women who have won the "final rose" since The Bachelor premiered in 2002, two — Tessa Horst and Catherine Giudici — have been biracial Asian-white. All other winners, aside from Mary Delgado in 2004, who was Cuban-American, appear to have been white. As these handy graphics by writer and video artist Karen X. Cheng show, in the previous seven years, the only women of color who lasted into the final few weeks were of mixed-race Asian-white background. (...)

To understand why only a narrow group of women of color — biracial Asian-white women — survive in this world is to delve into romantic tropes, the stuff The Bachelor is made of.

"As objects of beauty, these women are benefiting from two helpful stereotypes about female desirability," said Ann Morning, associate professor of sociology at New York University. One is whiteness as the persisting standard of beauty. The other is Asian women as sexualized, exotic and submissive.

Taken alone, the first stereotype can be detrimental. "Today, being white is often perceived as a kind of boring, colorless identity," Morning said. But that stereotype about whiteness can work to balance negative stereotypes about Asian women.

Lily Anne Welty Tamai, curator of history at the Japanese American National Museum (and a friend of mine), explained where these stereotypes about Asian women come from. The trope of Puccini's 1904 Madama Butterfly paved the way for American incarnations of a tragic love story between an American soldier and Asian woman in the mid-20th century, when American soldiers brought home war stories — and sometimes brides — from Asia, where women were often part of the conquest. Popular narratives included the 1957 film Sayonara and the 1989 musical Miss Saigon. ("I guess they just never got around to making the Korea version," Tamai said.)

These stories cemented in the American consciousness the idea of the Asian woman as the foreign sex toy: the geisha, the china doll, the "me love you long time" sex worker.

by Akemi Johnson, NPR |  Read more:
Image: Kelsey McNeal/ABC via Getty Images