Tuesday, March 22, 2016

The Makeup Master List


[ed. Wow. I had no idea it was this complicated.]

Skincare
How to Prevent Clumpy Mascara
How to Shape Your Eyebrows
How to Catch Eyeshadow Fallout
Five Alternatives to Black Eyeliner
How to Make Lower Lashes Look Thicker
How to Use Eyelash Primer for Eyebrows
How to Disguise False Lashes
How to Heat Your Lash Curler
White Eyeliner as an Eyeshadow Base
How to Make Your Lashes Look Thicker
Sticky Tape for Eyeliner
Eye Makeup for Brown, Blue, Green, Hazel and Grey Eyes
How to Waterline
10 Ways to Use Eyeshadow
How to Tightline Your Eyes
Eye Makeup Tutorial
How to Curl Your Eyelashes
Powder, Pencil, Liquid or Gel Eyeliner?
How to Apply False Lashes
How to Apply Winged Eyeliner
How to Press a Pigment
How to Achieve Long Eyelashes
How to Create an Easy Smokey Eye
How to Make Your Eyeliner Last
How to Apply Individual Eyelashes
How to Groom Your Eyebrows
How to Make Your Eyes Look Larger

Lips

Lena Dunham: The Uncontested Queen of Angst

[ed. I'm late to the Lena Dunham party but have been binge-watching Girls lately and she's wonderful.]

Lena Dunham had a panic attack last night. The actor/writer/director/producer started to tot up her advancing years, and before she knew it, she was measuring out her grave. "I thought, in two and a half years I'll be 30, then 10 years from that I'll be 40, then 10 years from that I'll be 50." She shudders to a halt. Is she genuinely worried? "All the time. It's why I don't sleep at night."

Nobody in the 21st century has done angst quite like Dunham. The angst of being unloved, undesired, unattractive, unpopular, unsuccessful. The irony is that, at 27, she has been named as one of Time magazine's 100 most influential people in the world, signed a $3.5m book deal, completed her third series of the TV comedy Girls and has a rock star boyfriend.

Girls is funny, filthy, disturbing and acute. Dunham has taken Sex And The City and refashioned it for an age of eternal internships, dysfunctional relationships and middle-class disappointment. Whereas Sex And The City is aspirational – desirable women, designer wardrobes, glamorous jobs – nobody would want to be Dunham's character, Hannah, in Girls. Her clothes are scruffy and stained, her brilliant career is stymied, OCD blights her life and the men she meets are more rapists than dreamboats. Hannah is the ultimate un-American heroine.

Girls is Sex And The City when the recession has bitten, the world looks bleak and dreams have turned to dust. It focuses on the lives of four girls in Brooklyn, New York, where Dunham lives. Three are conventionally attractive, if angsty in their own way, but it is Hannah who is the really interesting one – an aspiring writer desperate to expose herself to any possible experience to make her work more real, but equally desperate for regular love and safety. Hannah often hangs around in knickers and vest, exposing body and soul. She is as self-obsessed as she is self-loathing, socially gauche (at one job interview, she makes a joke about her would-be boss being a date-rapist) and frequently humiliated (her boyfriend sends her a photo of his penis, then apologises because it wasn't meant for her). And we root for her all the way.

We meet at a restaurant in Los Angeles, where she edits Girls. I don't recognise Dunham at first, and am not sure why. She's certainly more elegant than Hannah – she doesn't dress as loudly or look as bulky – but there's something else. Then it strikes me. I don't recognise her with her clothes on. So often in Girls, she's stripped to her tattoos or less, and here she is fully covered: stripy blue and white T-shirt, chic red cardigan, leggings and boots. The tattoos – garish, inky and prison-like – say a lot about Dunham. She got her first at 17, as a mark of her womanly independence, and yet they are illustrations of her favourite childhood books (Eloise on her back, Ferdinand the bull on her shoulder). She laughs when I tell her I didn't recognise her with clothes on, and admits that covering up has become a disguise. "My tattoos are the main way people recognise me if I'm out."

Dunham's success is an astonishing story, not least as an example of self-actualisation. The girl who ransacked her own life to write a warts'n'all TV show about a girl desperate to become famous by writing a warts'n'all book about her own life becomes world famous in the process.

I tell her I imagine she was a hyperactive child, forever on the go. No, she says, anything but. "I was a really lazy kid. I almost never left the house. If it was a weekend, I wouldn't even go outside, because I hated going in the park, hated doing any sports, hated walking around, hated doing almost everything. I liked to read and watch TV." Both her parents are artists and she was fascinated by their world.

Was she confident? "In a sense, yes, but I didn't have that many friends. I just talked a lot. I talked before I walked. I talked to myself, my parents, my babysitter, my little sister, the doctor, whoever was there. But I didn't have a tremendous amount of friends until high school." She says it was partly because she didn't want them and partly because they didn't want her. "I was pretty annoying. Looking back, I was a know-it-all."

What was the most annoying thing about her? "I'm not saying I was smarter than other kids, but I wanted to talk about what I wanted to talk about, and I wasn't interested in meeting anybody halfway. I remember being on play dates and not feeling there was a simpatico between us, then going home and hanging out with my parents and feeling, well, this is what's fun, this is what's interesting to do."

The waiter arrives. She's on first-name terms with him and orders fruit and yoghurt, and orange juice. Hannah is more of a macaroni cheese and cheesecake girl.

In her teens, Dunham went to Saint Ann's, a school in Brooklyn that specialises in the arts. There, she met Girls co-star Jemima Kirke, and came out of her shell. "It was an amazing place, like a home for wayward children." Was she regarded as wayward? "No, I think maybe eccentric and slightly difficult. I was never a bad kid, I just wasn't necessarily doing my work as I was told to or connecting perfectly with my peer group."

Was she better behaved than Hannah? "Ummm, yes." Actually, she says, when she did try to act properly wayward, she was useless. In one episode of Girls, Hannah takes cocaine and makes a night of it. When Dunham dabbled, it was a different story. "I tried coke, but was a total failure. I snorted a little bit, then always sneezed. It was sadder than having not tried drugs, in that I tried drugs and failed at trying drugs."

Dunham often sounds like an eager-to-please teenager, her voice rising at the end of sentences, so that statements become questions. But her actual words belie that: confident, considered, wise beyond her years. Allison Williams, who plays Hannah's friend Marnie in Girls, once said Dunham "has the soul of a wonderful 85-year-old man".

After school, Dunham studied creative writing at Oberlin, a liberal arts college. By the age of 20, she was writing, directing and appearing in short films featuring characters that bore an uncanny resemblance to herself: schlumpy, neurotic, funny, and so uncool they were cool. Within two years of graduating, she had made Tiny Furniture, her first full-length feature film.

Dunham's work is like a Russian doll of self-reference. Each project seems like a more ambitious version of the previous one. Hannah is based on the short period in Dunham's life when she was in a rut, unable to realise her ambitions, working dead-end jobs, falling out with friends, falling in with dodgy men. In Tiny Furniture, made when she was 23 and featuring her real-life mother and sister as her fictional mother and sister, Dunham's self-abasing Aura is a template for Hannah. Aura goes even lower than Hannah, allowing a man she fancies to have sex with her in a drainpipe on a construction site. Tiny Furniture in turn references a video Dunham made as a student, called The Fountain, in which she strips to her bikini, climbs into a college fountain, bathes in it and brushes her teeth. This video marked the emergence of Lena Dunham – it received more than 1.5m hits on YouTube, with thousands of bruising below-the-line comments, some versions of which made their way into the film ("Look, whales ahead!" "What a blubber factory!" "No, her stomach isn't huge, it's just that her boobs are really small – it's an optical illusion"). After the fountain episode, her character's boyfriend says that while he wants to get naked in front of people who want to see him naked, she wants to do it in front of people who don't want to see her naked. Dunham is part Woody Allen, part Nora Ephron (whose screenwriter mother told her, "Everything is copy").

In Girls, Hannah tells a friend that she's immune to insults, "because no one could ever hate me as much as I hate myself, OK? So any mean thing someone is going to think of to say about me, I've said to me, about me, probably in the last half-hour." By way of comfort, her friend offers, "You think everyone in the world is out to humiliate you. You're like a great big ugly psychotic wound." Girls, especially the first series, does not hold back.

Doesn't she get confused between herself and her characters? "I don't," she says with surprising certainty, through a mouthful of berries. "Other people do. Sometimes, the other cast members will call me Lena within the scene." In fact, what disorients her now is not so much the similarities between herself and Hannah as the differences. "It's confusing that I'm playing a character who's unable to assert herself and unable to get traction with her work and unable to be clear about her creativity, and yet at the same time I'm also writing, directing and acting in the show. It's strange to be in the meek, confused, stressed-out skin of Hannah, then have to move into orchestrating the performances."

She stares at the massive plate in front of her ("Tell me if there's any of my fruit you want. I've over-fruited"), juggling with the real and the unreal. The more she writes, the less she trusts her ability to sift fact from fiction, and the less it matters to her: "It starts being about emotional truth." The most bewildering thing, though, she says, is her inability to distinguish real life from a TV studio. "At the end of a day's shooting, I don't know what's going on. I go to bed and I'm seeing a boom operator with a boom pole above my bed, and I close my eyes and think I have to do another take of sleeping, that first take wasn't good." I assume she's joking, but she isn't. "I still think I'm in the show, in bed getting filmed, acting as if I'm sleeping."

One of the radical aspects of Girls is how much time Hannah does spend in bed: mooching, writing, sleeping, having bad sex, occasionally having good sex. A number of young women who adore Girls tell me how surprised they are at how often Hannah is naked or near enough. I ask if she's an exhibitionist. Her voice tightens slightly. "I've always fought against that label, because it seems so simplistic and it has such a sexual connotation to it. I'm sure it must be perceived in that way, and it wouldn't be an inaccurate thing to say, but that's not how I was thinking about it when I did it."

In the past, she has said her interest in exposing herself is rooted in anything but self-confidence. In fact, there seems to be an element of masochism in it; an invitation for others to abuse her. She smiles. "Well, a lot of my parents' friends were performance artists, so I think I just understood that the body could be a tool in that exploration."

The only scene that embarrassed her is one in which she played table tennis topless with a new boyfriend. "It was one of the first times in the show that nudity had felt like it was supposed to be fun and cute and sexual. It wasn't a comfortable space for me to occupy. I have an easier time playing romantic rejection than playing loving situations. I have an easier time playing humiliating nudity than playing sexy nudity. I think it's because there's something really vulnerable…" She prods at the plate with her fork as she attempts to complete the sentence. "See, I'm stabbing a melon as I say this… I think there's something really vulnerable about the earnest emotions that come with being in love or being attracted to somebody that are anxiety-inducing to play, whereas there's the armour of humour and relatability to that other stuff that makes it easier to do. The times I'm embarrassed are when I'm writing about loving situations and romantic moments, rather than totally degrading sex and looking bad in your underwear." So she's happy only when she's playing unhappy? "Yeah, it's true! It's really complicated."

by Simon Hattenstone, The Guardian |  Read more:
Image: Girls

Nathan Benn, Cape Canaveral Florida 1981
via:

Monday, March 21, 2016

Traditional Economics Failed. Here’s a New Blueprint.

Politics in democracy can be understood many ways, but on one level it is the expression of where people believe their self-interest lies— that is to say, “what is good for me?” Even when voters vote according to primal affinities or fears rather than economic advantage (as Thomas Frank, in What’s the Matter With Kansas?, lamented of poor whites who vote Republican), it is because they’ve come to define self-interest more in terms of those primal identities than in terms of dollars and cents.

This is not proof of the stupidity of such voters. It is proof of the malleability and multidimensionality of self-interest. While the degree to which human beings pursue that which they think is good for them has not changed and probably never will, what they believe is good for them can change and from time to time has, radically.

We assert a simple proposition: that fundamental shifts in popular understanding of how the world works necessarily produce fundamental shifts in our conception of self-interest, which in turn necessarily produce fundamental shifts in how we think to order our societies.

Consider for a moment this simple example:

For the overwhelming majority of human history, people looked up into the sky and saw the sun, moon, stars, and planets revolve around the earth. This bedrock assumption based on everyday observation framed our self-conception as a species and our interpretation of everything around us.

Alas, it was completely wrong.

Advances in both observation technology and scientific understanding allowed people to first see, and much later accept, that in fact the earth was not the center of the universe, but rather, a speck in an ever-enlarging and increasingly humbling and complex cosmos. We are not the center of the universe.

It’s worth reflecting for a moment on the fact that the evidence for this scientific truth was there the whole time. But people didn’t perceive it until concepts like gravity allowed us to imagine the possibility of orbits. New understanding turns simple observation into meaningful perception. Without it, what one observes can be radically misinterpreted. New understanding can completely change the way we see a situation and how we see our self-interest with respect to it. Concepts determine, and often distort, percepts.

Today, most of the public is unaware that we are in the midst of a moment of new understanding. In recent decades, a revolution has taken place in our scientific and mathematical understanding of the systemic nature of the world we inhabit.

–We used to understand the world as stable and predictable, and now we see that it is unstable and inherently impossible to predict.

–We used to assume that what you do in one place has little or no effect on what happens in another place, but now we understand that small differences in initial choices can cascade into huge variations in ultimate consequences.

–We used to assume that people are primarily rational, and now we see that they are primarily emotional.

Now, consider: how might these new shifts in understanding affect our sense of who we are and what is good for us?

A Second Enlightenment and the Radical Redefinition of Self-Interest

In traditional economic theory, as in politics, we Americans are taught to believe that selfishness is next to godliness. We are taught that the market is at its most efficient when individuals act rationally to maximize their own self-interest without regard to the effects on anyone else. We are taught that democracy is at its most functional when individuals and factions pursue their own self-interest aggressively. In both instances, we are taught that an invisible hand converts this relentless clash and competition of self-seekers into a greater good.

These teachings are half right: most people indeed are looking out for themselves. We have no illusions about that. But the teachings are half wrong in that they enshrine a particular, and particularly narrow, notion of what it means to look out for oneself.

Conventional wisdom conflates self-interest and selfishness. It makes sense to be self-interested in the long run. It does not make sense to be reflexively selfish in every transaction. And that, unfortunately, is what market fundamentalism and libertarian politics promote: a brand of selfishness that is profoundly against our actual interest.

Let’s back up a step.

When Thomas Jefferson wrote in the Declaration of Independence that certain truths were held to be “self-evident,” he was not recording a timeless fact; he was asserting one into being. Today we read his words through the filter of modernity. We assume that those truths had always been self-evident. But they weren’t. They most certainly were not a generation before Jefferson wrote. In the quarter century between 1750 and 1775, in a confluence of dramatic changes in science, politics, religion, and economics, a group of enlightened British colonists in America grew gradually more open to the idea that all men are created equal and are endowed by their Creator with certain unalienable rights.

It took Jefferson’s assertion, and the Revolution that followed, to make those truths self-evident.

We point this out as a simple reminder. Every so often in history, new truths about human nature and the nature of human societies crystallize. Such paradigmatic shifts build gradually but cascade suddenly.

This has certainly been the case with prevailing ideas about what constitutes self-interest. Self-interest, it turns out, is not a fixed entity that can be objectively defined and held constant. It is a malleable, culturally embodied notion.

Think about it. Before the Enlightenment, the average serf believed that his destiny was foreordained. He fatalistically understood the scope of life’s possibility to be circumscribed by his status at birth. His concept of self-interest extended only as far as that of his nobleman. His station was fixed, and reinforced by tradition and social ritual. His hopes for betterment were pinned on the afterlife. Post-Enlightenment, that all changed. The average European now believed he was master of his own destiny. Instead of worrying about his odds of a good afterlife, he worried about improving his lot here and now. He was motivated to advance beyond what had seemed fated. He was inclined to be skeptical about received notions of what was possible in life.

The multiple revolutions of the Enlightenment— scientific, philosophical, spiritual, material, political— substituted reason for doctrine, agency for fatalism, independence for obedience, scientific method for superstition, human ambition for divine predestination. Driving this change was a new physics and mathematics that made the world seem rational and linear and subject to human mastery.

The science of that age had enormous explanatory and predictive power, and it yielded an entirely new way of conceptualizing self-interest. Now the individual, relying on his own wits, was to be celebrated for looking out for himself— and was expected to do so. As physics developed into a story of zero-sum collisions, as man mastered steam and made machines, as Darwin’s theories of natural selection and evolution took hold, the binding and life-defining power of old traditions and institutions waned. A new belief seeped osmotically across disciplines and domains: Every man can make himself anew. And before long, this mutated into another ethic: Every man for himself.

Compared to the backward-looking, authority-worshipping, passive notion of self-interest that had previously prevailed, this, to be sure, was astounding progress. It was liberation. Nowhere more than in America— a land of wide-open spaces, small populations, and easily erased histories— did this atomized ideal of self-interest take hold. As Steven Watts describes in his groundbreaking history The Republic Reborn, “the cult of the self-made man” emerged in the first three decades after Independence. The civic ethos of the founding evaporated amidst the giddy free-agent opportunity to stake a claim and enrich oneself. Two centuries later, our greed-celebrating, ambition-soaked culture still echoes this original song of self-interest and individualism.

Over time, the rational self-seeking of the American has been elevated into an ideology now as strong and totalizing as the divine right of kings once was in medieval Europe. Homo economicus, the rationalist self-seeker of orthodox economics, along with his cousin Homo politicus, gradually came to define what is considered normal in the market and politics. We’ve convinced ourselves that a million individual acts of selfishness magically add up to a common good. And we’ve paid a great price for such arrogance. We have today a dominant legal and economic doctrine that treats people as disconnected automatons and treats the mess we leave behind as someone else’s problem. We also have, in the Great Recession, painful evidence of the limits of this doctrine’s usefulness.

But now a new story is unfolding.

by Eric Liu and Nick Hanauer, Evonomics | Read more:
Image: Sasquatch Books

Renting a Friend in Tokyo

It's muggy and I'm confused. I don't understand where I am, though it was only a short walk from my Airbnb studio to this little curry place. I don’t understand the lunch menu, or even if it is a lunch menu. Could be a religious tract or a laminated ransom note. I’m new in Tokyo, and sweaty, and jet-lagged. But I am entirely at ease. I owe this to my friend Miyabi. She’s one of those reassuring presences, warm and eternally nodding and unfailingly loyal, like she will never leave my side. At least not for another 90 minutes, which is how much of her friendship I’ve paid for.

Miyabi isn’t a prostitute, or an escort or an actor or a therapist. Or maybe she’s a little of each. For the past five years she has been a professional rent-a-friend, working for a company called Client Partners.

My lunch mate pokes daintily at her curry and speaks of the friends whose money came before mine. There was the head of a prominent company, rich and “very clever” but conversationally marooned at “hello.” Discreetly and patiently, Miyabi helped draw other words out. There was the string of teenage girls struggling to navigate mystifying social dynamics; at their parents’ request, Miyabi would show up and just be a friend. You know, a normal, companionable, 27-year-old friend. She has been paid to cry at funerals and swoon at weddings, lest there be shame over a paltry turnout. Last year, a high schooler hired her and 20 other women just long enough to snap one grinning, peace-sign-flashing, I-totally-have-friends Instagram photo.

When I learned that friendship is rentable in Tokyo, it merely seemed like more Japanese wackiness, in a subset I’d come to think of as interest-kitsch. Every day in Japan, it seems, some weird new appetite is identified and gratified. There are cats to rent, after all, used underwear to purchase, owls to pet at owl bars. Cuddle cafés exist for the uncuddled, goat cafés for the un-goated. Handsome men will wipe away the tears of stressed-out female office workers. All to say I expected something more or less goofy when I lined up several English-speaking rent-a-friends for my week in Tokyo. The agency Miyabi works for exists primarily for lonely locals, but the service struck me as well suited to a solo traveler, too, so I paid a translator to help with the arrangements. Maybe a more typical Japanese business would’ve bristled at this kind of intrusion from a foreigner. But the rent-a-friend world isn’t typical, I would soon learn, and in some ways it wants to subvert all that is.

Contrived Instagram photos aside, Miyabi’s career mostly comprises the small, unremarkable acts of ordinary friendship: Shooting the breeze over dinner. Listening on a long walk. Speaking simple kindnesses on a simple drive to the client’s parents’ house, simply to pretend you two are in love and absolutely on the verge of getting married, so don’t even worry, Mom and Dad.

As a girl, Miyabi longed to be a flight attendant—Continental, for some reason—and that tidy solicitousness still emanates. She wears a smart gray skirt and a gauzy beige blouse over which a sheet of impeccable hair drapes weightlessly. She doesn’t care that I am peccable. She smiles when I smile, touches my arm to make a point. Her graciousness cloaks a demanding job. With an average of 15 gigs a week, Miyabi’s hours are irregular and bleed from day into night. The daughter of a doctor and a nurse, she still struggles to convince her parents that her relatively new field is legitimate. The money is fine but not incredible; I’m paying her roughly $115 for two hours, some percentage of which Client Partners keeps. So why does she do it? Miyabi puts down her chopsticks and explains: It helps people—real and lonesome people in need of, well, whatever ineffable thing friendship means to our species. “So many people are good at life online or life at work, but not real life,” she says, pantomiming someone staring at a phone. For such clients a dollop of emotional contact with a friendly person is powerful, she adds, even with a price tag attached.

So this isn’t secretly about romance? I ask. Not at all, she replies. (...)

During my time in Tokyo I develop a seamless routine of leaving the apartment, drifting vaguely toward the address on my phone, squinting confusedly, doubling back, eating some gyoza, and eventually stumbling onto my destination. On a drizzly Friday morning, my destination is the Client Partners headquarters, a small but airy suite in a nondescript Shibuya district office building. I rope my translator in for this, and we’re met by a round-faced woman in a long robelike garment. Maki Abe is the CEO, and for the next hour we sit across a desk from her and talk not about wacky interest-kitsch but about a nation’s spiritual health.

“We look like a rich country from the outside, but mentally we have problems,” Maki says. She speaks slowly, methodically. “Japan is all about face. We don’t know how to talk from the gut. We can’t ask for help. So many people are alone with their problems, and stuck, and their hearts aren’t touching.”

Maki and I bowed when we met, but we also shook hands. She brings it up later. “There are many people who haven’t been touched for years. We have clients who start to cry when we shake hands with them.”

It’s not that people lack friends, she says. Facebook, Instagram— scroll around and you find a country bursting with mugging, partying companionship. It just isn’t real, that’s all. “There’s a real me and a masked me. We have a word for the lonely gap in between that: kodoku.”

by Chris Colin, Afar |  Read more:
Image: Landon Nordeman

Emojimania

When it comes to emojis, the future is very, very ... Face with Tears of Joy.

If you don't know what that means then you: a) aren't a 14-year-old girl. b) love to hate those tiny pictures that people text you all the time. Or c) are nowhere near a smartphone or online chat.

Otherwise, here in 2016, it's all emojis, all the time. And Face with Tears of Joy, by the way, is a bright yellow happy face with a classic, toothy grin as tears fall.

The Face was chosen by Oxford Dictionaries as its 2015 "word" of the year, based on its popularity and reflecting the rise of emojis to help charitable causes, promote businesses and generally assist oh-so-many-more of us in further expressing ourselves on social media and in texts. (...)

WHERE DID THEY COME FROM?

While there's now a strict definition of emojis as images created through standardized computer coding that works across platforms, they have many, many popular cousins by way of "stickers," which are images without the wonky back end. Kimojis, the invention of Kim Kardashian, aren't technically emojis, for instance, at least in the eyes of purists.
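[ed. A technical aside: the "standardized computer coding" behind emojis is Unicode. Each emoji is a code point (or a short sequence of code points) that every platform agrees on, even though each vendor draws its own artwork. A minimal Python sketch, using only the standard library:

    import unicodedata

    # "Face with Tears of Joy" is the single Unicode code point U+1F602.
    face = "\U0001F602"
    print(face)                    # the emoji itself
    print(unicodedata.name(face))  # FACE WITH TEARS OF JOY
    print(f"U+{ord(face):04X}")    # U+1F602

Because the code point, not the image, is what's standardized, the character survives the trip between an iPhone and an Android phone intact; stickers like Kimojis, lacking that shared back end, travel only as pictures.]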

In tech lore, the great emoji explosion has a grandfather in Japan and his name is Shigetaka Kurita. He was inspired in the 1990s by manga and kanji when he and others on a team working to develop what is considered the world's first widespread mobile Internet platform came up with some rudimentary characters. They were working a good decade before Apple developed a set of emojis for the first iPhones.

Emojis are either loads of fun or the bane of your existence. One thing is sure: There's no worry they'll become a "language" in and of themselves. While everybody from Coca-Cola to the Kitten Bowl has come up with little pictographs to whip up interest in themselves, emojis exist mainly to nuance the words regular folk type, standing in for tone of voice, facial expressions and physical gestures - extended middle finger emoji added recently.

"Words aren't dead. Long live the emoji, long live the word," laughed Gretchen McCulloch, a Toronto linguist who, like some others in her field, is studying emojis and other aspects of Internet language.

Emojis have been compared to hieroglyphs, but McCulloch is not on board. That ancient picture-speak included symbols with literal meaning, but others stood in for actual sound.

Emoji enthusiasts have played with telling word-free stories using their little darlings alone and translating song lyrics into the pictures, "but they can't be put together like letters to make a pronounceable word," McCulloch said.

THE EMOJI OVERSEERS

Back when Kurita was creating some of the first emojis, chaos already had ensued in trying to make all the pagers and all the emerging mobile phones and the newfangled thing called email and everything else Internet-ish that was bubbling up speak to each other. And also to allow people in Japan, used to a more formal way of communicating, to make themselves understood in the emerging shorthand.

Enter the Unicode Consortium, on the coding end. It's a volunteer nonprofit industry organization working in collaboration with the International Organization for Standardization, the latter an independent non-governmental body that helps develop specifications for all sorts of things, including emojis, on a global scale.

Unicode, co-founded and headed by Mark Davis in Zurich, has a big, big mission, of which emojis have a place: making sure all the languages in the world are encoded and supported across platforms and devices.

The key word here is volunteer. Davis has a whole other job at Google, but he has dedicated himself to the task above. He also co-chairs the consortium's emoji subcommittee, a cog in a vetting process for new emojis that can take up to two years before new ones are put into the Unicode Standard for the likes of Apple, Google, Microsoft and Facebook to do with what they wish.

Where does Davis sit with the rapid rise of emojis?

"It has been a surprise. We didn't fully understand how popular they were going to be," he said.

At the moment, Unicode has released 1,624 emojis, with more options when you factor in modifiers for such things as skin tone. The emoji subcommittee fields about 100 proposals for new emojis a year. Not all make it through the vetting process.
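[ed. The skin-tone modifiers mentioned above work the same way: five extra code points, the Fitzpatrick modifiers U+1F3FB through U+1F3FF, that attach to a base emoji. A small sketch in Python:

    # A base emoji followed by a Fitzpatrick modifier renders as a single
    # tinted glyph on platforms that support it, and falls back to two
    # separate characters on platforms that don't.
    thumbs_up = "\U0001F44D"   # THUMBS UP SIGN
    type_4 = "\U0001F3FD"      # EMOJI MODIFIER FITZPATRICK TYPE-4
    print(thumbs_up + type_4)  # a medium-skin-tone thumbs-up

Combinations like this are why the number of "options" outruns the 1,624 encoded emojis.]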

"We don't encode emoji for movie or fictional people, or for deities. And we're not going to give you a Donald Trump," Davis said.

Gender, he said, is among the next frontiers for emojis. Demand for a female runner, for instance, will be voted on in May as critics have questioned a male-female divide. The consortium is trying to come up with a way to more easily and quickly customize emoji for gender, hair color and other features, Davis said.

"Personally, I am very much looking forward to a face palm emoji," he joked.

by Leanne Italie, AP |  Read more:
Image: via:

The Rest Is Advertising

Recently, I landed the tech-journalism equivalent of a Thomas Pynchon interview: I got someone from Twitter to answer my call. Notorious for keeping its communications department locked up tight, Twitter is not only the psychic bellwether and newswire for the media industry, but also a stingy interview-granter, especially now that it’s floundering with poor profits, executive turnover, and a toxic culture. I’ve tried to get them on the record before. No one has replied.

This time, though, a senior executive from one of Twitter’s key divisions seemed happy—eager, even—to talk with me, and for as long as I wanted. You might even say he prattled. I was a little stunned: I’d been writing about tech matters for years as a freelance journalist, and this was far more access than I was used to receiving. What was different? I was calling as a reporter—but not exactly. I was writing a story for The Atlantic—but not for the news division. Instead, I was working for a moneymaking wing of The Atlantic called Re:think, and I was writing sponsored content.

In case you haven’t heard, journalism is now in perpetual crisis, and conditions are increasingly surreal. The fate of the controversialists at Gawker rests on a delayed jury trial over a Hulk Hogan sex tape. Newspapers publish directly to Facebook, and Snapchat hires journalists away from CNN. Last year, the Pulitzer Prizes doubled as the irony awards; one winner in the local reporting category, it emerged, had left his newspaper job months earlier for a better paying gig in PR. “Is there a future in journalism and writing and the Internet?” Choire Sicha, cofounder of The Awl ,wrote last January. “Haha, FUCK no, not really.” Even those who have kept their jobs in journalism, he explained, can’t say what they might be doing, or where, in a few years’ time. Disruption clouds the future even as it holds it up for worship.

But for every crisis in every industry, a potential savior emerges. And in journalism, the latest candidate is sponsored content.

Also called native advertising, sponsored content borrows the look, the name recognition, and even the staff of its host publication to push brand messages on unsuspecting viewers. Forget old-fashioned banner ads, those most reviled of early Internet artifacts. This is vertically integrated, barely disclaimed content marketing, and it’s here to solve journalism’s cash flow problem, or so we’re told. “15 Reasons Your Next Vacation Needs to Be in SW Florida,” went a recent BuzzFeed headline—just another listicle crying out for eyeballs on an overcrowded homepage, except this one had a tiny yellow sidebar to announce, in a sneaky whisper, “Promoted by the Beaches of Fort Myers & Sanibel.”

Advertorials are what we expect out of BuzzFeed, the ur-source of digital doggerel and the first media company to open its own in-house studio—a sort of mini Saatchi & Saatchi—to build "original, custom content" for brands. But now legacy publishers are following BuzzFeed's lead, heeding the call of the digital co-marketers and starting in-house sponsored content shops of their own. CNN opened one last spring, and its keepers, with nary a trace of self-awareness, dubbed it Courageous. The New York Times has T Brand Studio (clients include Dell, Shell, and Goldman Sachs), the S. I. Newhouse empire has something called 23 Stories by Condé Nast, and The Atlantic has Re:think. As the breathless barkers who sell the stuff will tell you, sponsored content has something for everyone. Brands get their exposure, publishers get their bankroll, freelance reporters get some work on the side, and readers get advertising that goes down exceptionally easy—if they even notice they're seeing an ad at all.

The promise is that quality promotional content will sit cheek-by-jowl with traditional journalism, aping its style and leveraging its prestige without undermining its credibility.

The problem, as I learned all too quickly when I wrote my sponsored story for The Atlantic (paid for by a prominent tech multinational), is that the line between what’s sponsored and what isn’t—between advertising and journalism—has already been rubbed away.

by Jacob Silverman, The Baffler |  Read more:
Image: Eric Hanson

Pierre Koenig's Stahl House


Case Study House #22, aka Stahl House, might be the ultimate mid-Century dream home. The story of the home begins in May 1954 when the Stahl family, who still own the place, invested in a small, rather awkward lot high in the Hollywood Hills. In 1956, Buck Stahl built a model of the home he and wife Carlotta wanted to live in. In 1957, the Stahls showed it to Pierre Koenig (October 17, 1925 – April 4, 2004), who along with other architects of the age sought to bring modernist style and industrial efficiency to affordable suburban residences.

Could life in post-war USA be improved through architecture? Was Le Corbusier right when he said that houses were "machines for living"?

On April 8th, 1959, Stahl House was inducted into the Case Study House program by Arts & Architecture magazine, becoming Case Study House #22. (This followed Pierre Koenig's Case Study House #21.) The magazine commissioned architects – Richard Neutra, Raphael Soriano, Craig Ellwood, Charles and Ray Eames, Eero Saarinen, Thornton Abell, A. Quincy Jones, Ralph Rapson and others – to design and build inexpensive and efficient model homes for the United States residential housing boom caused by the end of World War II. In all, 36 residences were made between 1945 and 1964. (...)

Work on #22 began in May 1959 and was completed a year later in May of 1960.

Pierre once described the process of building Stahl House as “trying to solve a problem – the client had champagne tastes and a beer budget.” Bruce Stahl, Buck and Carlotta’s son, adds: “We were a blue collar family living in a white collar house. Nobody famous ever lived here.”

by Karen Strike, Flashbak |  Read more:
Image: uncredited

KC & The Sunshine Band

RRS Boaty McBoatface


The good news about the Natural Environment Research Council's decision to crowdsource a name for its latest polar research vessel is unprecedented public engagement in a sometimes niche area of scientific study. The bad news? Sailing due south in a vessel that sounds like it was christened by a five-year-old who has drunk three cartons of Capri-Sun.

Just a day after the NERC launched its poll to name the £200m vessel – which will first head to Antarctica in 2019 – the clear favourite was RRS Boaty McBoatface, with well over 18,000 votes. The RRS stands for Royal Research Ship. (...)

The NERC – which was wise enough to ask that people “suggest” names, giving it future wriggle room – asked for ideas to be inspirational.

Some undoubtedly were, with its website, which kept crashing on Sunday under the weight of traffic, showing dozens of serious suggestions connected to inspiring figures such as Sir David Attenborough, or names such as Polar Dream.

But the bulk of entries were distinctly less sober. Aside from the leading contender, ideas included Its Bloody Cold Here, What Iceberg, Captain Haddock, Big Shipinnit, Science!!! and Big Metal Floaty Thingy-thing.

by Peter Walker, The Guardian |  Read more:
Image: NERC

Saturday, March 19, 2016


André Robé - 47th Street - New York - 1957
via:

Depends On Your Point of View

Last night I came upon a new exhibit in my running critique. I will show it to you, and then try to interpret what it means. It happened on a program where he said, she said and “we’ll have to leave it there” are a kind of house style: The Newshour on PBS. (Link.) Let’s set the scene:

* A big story: the poisoning of Flint, Michigan’s water supply— a major public health disaster.
* Latest news: the House Committee on Oversight and Government Reform held a hearing at which Michigan Governor Rick Snyder, a Republican, and EPA Administrator Gina McCarthy, an Obama appointee, both testified.
* Outcome: They were ritualistically denounced and told to resign by members of Congress in the opposing party. (Big surprise.)
* Cast of characters in the clip I’m about to show you: Judy Woodruff of the Newshour is host and interviewer. David Shepardson is a Reuters reporter in the Washington bureau who has been covering the Flint disaster. (Formerly of the Detroit News and a Michigan native.) Marc Edwards is a civil and environmental engineer and professor at Virginia Tech. (“He’s widely credited with helping to expose the Flint water problems. He testified before the same House committee earlier this week.”)

Now watch what happens when Woodruff asks the Reuters reporter: who bears responsibility for the water crisis in Flint? Which individual or agency is most at fault here? (The part I’ve isolated is 2:22.)

Here is what I saw. What did you see?


The Reuters journalist defaults on the question he was asked. He cannot name a single agency or person who is responsible. The first thing and the last thing he says is “depends on your point of view.” These are weasel words. In between he manages to isolate the crucial moment — when the state of Michigan failed to add “corrosion control” to water drawn from the Flint River — but he cannot say which official or which part of government is responsible for that lapse. Although he’s on the program for his knowledge of a story he’s been reporting on for months, the question of where responsibility lies seems to flummox and decenter him. He implies that he can’t answer because there actually is no answer, just the clashing points of view.

Republicans in Congress scream at Obama’s EPA person: you failed! Democrats in Congress scream at a Republican governor: you failed! Our reporter on the scene shrugs, as if to say: take your pick, hapless citizens! His actual words: “Splitting up the blame depends on your point of view.”

This is a sentiment that Judy Woodruff, who is running the show, can readily understand. He's talking her language when he says "depends on your point of view." That is just the sort of down-the-middle futility that PBS Newshour traffics in. Does she press him to do better? Does she say, "Our viewers want to know: how can such a thing happen in the United States? You've been immersed in the story, can you at least tell us where to look if we're searching for accountability?" She does not. Instead, she sympathizes with David Shepardson. "It's impossible to separate it from the politics." But we'll try!

For the try she has to turn to the academic on the panel, who then gives a little master class in how to answer the question: who is at fault here? Here are the points Marc Edwards of Virginia Tech makes:

* Governor Snyder failed to listen to the people of Flint when they complained about the water.
* Snyder trusted too much in the Michigan Department of Environmental Quality and the EPA.
* He has accepted some blame for these failures, calling the Flint water crisis his Katrina.
* EPA, by contrast, has been evading responsibility for its part in the scandal.
* EPA called the report by its own whistleblower “inconclusive” when it really wasn’t.
* The agency hesitated and doubted itself when it came to enforcing federal law. WTF?
* EPA said it had been “strong-armed” by the state officials as if they had more authority than the Federal government.

Who is responsible? That was the question on the PBS table. If we listen to the journalist on the panel we learn: “it depends on which team you’re on,” and “they’re all playing politics,” and “it’s impossible to separate truth from spin.”

Professor Marc Edwards, more confident in his ability to speak truth to power, cuts through all that crap: There are different levels of failure and layers of responsibility here, he says. Some people are further along than others in admitting fault. Yes, it’s complicated — as real life usually is — but that doesn’t mean it’s impossible to assign responsibility. Nor does responsibility lie in one person’s lap or one agency’s hands. Multiple parties are involved. But when people who have some responsibility obfuscate, that’s outrageous. And it has to be called out.

Now I ask you: who’s in the ivory tower here? The journalist or the academic?

I know what you’re thinking, PBS Newshour people. Hey, we’re the ones who booked Marc Edwards on our show and let him run with it. That’s good craft in broadcast journalism! Fair point, Newshour people. All credit to you for having him on. Good move. Full stop.

What interests me here is the losing gambit and musty feel of formulaic, down-the-middle journalism. The misplaced confidence of the correspondent positioning himself between warring parties. The spectacle of a Reuters reporter, steeped in the particulars of the case, defaulting on the basic question of who is responsible. The forfeiture of Fourth Estate duties to other, adjacent professions. The union with gridlock and hopelessness represented in those weasel words: “depends on your point of view.” The failure of nerve when Judy Woodruff lets a professional peer dodge her question— a thing they chortle about and sneer at when politicians do it. The contribution that “not our job” journalists make to unaccountable government, and to public cynicism. The bloodlessness and lack of affect in the journalist commenting on the Flint crisis, in contrast to the academic who is quietly seething.

by Jay Rosen, Press Think |  Read more:
Image: YouTube

Motion Design is the Future of UI

Wokking the Suburbs


As he stepped woozily into the first American afternoon of his life, the last thing my father wanted to do was eat Chinese food. He scanned the crowd for the friend who’d come from Providence (my father would stay with this friend for a few weeks before heading to Amherst to begin his graduate studies). That friend didn’t know how to drive, however, so he promised to buy lunch for another friend in exchange for a ride to the Boston airport. The two young men greeted my father at the gate, exchanged some backslaps, and rushed him to the car, where they stowed the sum total of his worldly possessions in the trunk and folded him into the backseat. Then they gleefully set off for Boston’s Chinatown, a portal back into the world my father (and these friends before him) had just left behind. Camaraderie and goodwill were fine enough reasons to drive hours to fetch someone from the airport; just as important was the airport’s proximity to food you couldn’t get in Providence.

He remembers nothing about the meal itself. He was still nauseous from the journey—Taipei to Tokyo to Seattle to Boston—and, after all, he’d spent every single day of the first twenty-something years of his life eating Chinese food.

“For someone who had just come from Taiwan, it was no good. For someone who came from Providence, it must have been very good!” he laughs.

When my mother came to the United States a few years after my father (Taipei-Tokyo-San Francisco), the family friends who picked her up at least had the decency to wait a day and allow her to find her legs before taking her to a restaurant in the nearest Chinatown.

“I remember the place was called Jing Long, Golden Dragon. Many years later there was a gang massacre in there,” she casually recalls. “I still remember the place. It was San Francisco’s most famous. The woman who brought me was very happy but I wasn’t hungry. Of course, they always think if you come from Taiwan or China you must be hungry for Chinese food.”

It was the early 1970s, and my parents had each arrived in the United States with only a vague sense of what their respective futures held, beyond a few years of graduate studies. They certainly didn’t know they would be repeating these treks in the coming decades, subjecting weary passengers (namely, me) to their own long drives in search of Chinese food. I often daydream about this period of their lives and imagine them grappling with some sense of terminal dislocation, starving for familiar aromas, and regretting the warnings of their fellow new Americans that these were the last good Chinese spots for the next hundred or so miles. They would eventually meet and marry in Champaign-Urbana, Illinois (where they acquired a taste for pizza), and then live for a spell in Texas (where they were told that the local steak house wasn’t for “their kind”), before settling in suburban California. Maybe this was what it meant to live in America. You could move around. You were afforded opportunities unavailable back home. You were free to go by “Eric” at work and name your children after US presidents. You could refashion yourself a churchgoer, a lover of rum-raisin ice cream, an aficionado of classical music or Bob Dylan, a fan of the Dallas Cowboys because everyone else in the neighborhood seemed to be one. But for all the opportunities, those first days in America had prepared them for one reality: sometimes you had to drive great distances in order to eat well. (...)

Suburbs are seen as founts of conformity, but they are rarely places beholden to tradition. Nobody goes to the suburbs on a vision quest—most are drawn instead by the promise of ready-made status, a stability in life modeled after the stability of neat, predictable blocks and gated communities. And yet, a suburb might also be seen as a slate that can be perpetually wiped clean to accommodate new aspirations.

There remain vestiges of what stood before, and these histories capture the cyclical aspirations that define the suburb: Cherry Tree Lane, where an actual orchard was once the best possible use of free acreage; the distinctive, peaked roof of a former Sizzler turned dim sum spot; the Hallmark retailer, all windows and glass ledges, that is now a noodle shop; and the kitschy railroad-car diner across the street that’s now another noodle shop. But Cupertino was still in transition throughout the 1980s and early 1990s. Monterey Park, hundreds of miles to our south, was the finished article.

All suburban Chinatowns owe something to Frederic Hsieh, a young realtor who regarded Monterey Park and foresaw the future. He began buying properties all over this otherwise generic community in the mid-1970s and blitzed newspapers throughout Taiwan and Hong Kong with promises of a “Chinese Beverly Hills” located a short drive from Los Angeles’s Chinatown. While there had been a steady stream of Chinese immigrants over the previous decade, Hsieh guessed that the uncertain political situation in Asia combined with greater business opportunities in the United States would bring more of them to California. Instead of the cramped, urban Chinatowns in San Francisco or Flushing, Hsieh wanted to offer these newcomers a version of the American dream: wide streets, multicar garages, good schools, minimal culture shock, and a short drive to Chinatown. In 1977, he invited twenty of the city’s most prominent civic and business leaders to a meeting over lunch (Chinese food, naturally) and explained that he was building a “modern-day mecca” for the droves of Chinese immigrants on their way. This didn’t go over so well with some of Monterey Park’s predominantly white establishment, who mistook his bluster for arrogance. As a member of the city’s Planning Commission later told the Los Angeles Times, “Everyone in the room thought the guy was blowing smoke. Then when I got home I thought, what gall. What ineffable gall. He was going to come into my living room and change my furniture?”

Gall was contagious. The following year, Wu Jin Shen, a former stockbroker from Taiwan, opened Diho Market, Monterey Park’s first Asian grocery. Wu would eventually oversee a chain of stores with four hundred employees and $30 million in sales. Soon after, a Laura Scudder potato-chip factory that had been remade into a Safeway was remade into an Asian supermarket. Another grocery store was refitted with a Pagoda-style roof.

Chinese restaurateurs were the shock troops of Hsieh’s would-be conquest. “The first thing Monterey Park residents noticed were the Chinese restaurants that popped up,” recalled a citizen quoted in a different but no less alarmist Times piece. “Then came the three Chinese shopping centers, the Chinese banks, and the Chinese theater showing first-run movies from Hong Kong—with English subtitles.”

In Monterey Park, such audacity (if you wanted to call it that) threatened the community’s stability. Residents offended by, say, the razing of split-level ranch-style homes from the historical 1970s to accommodate apartment complexes drew on their worst instincts to try and push through “Official English” legislation in the mid-1980s. “Will the Last American to Leave Monterey Park Please Bring the Flag?” bumper stickers were distributed.

But this hyperlocal kind of nativism couldn’t turn back the demographic tide. In 1990, Monterey Park became the first city in the continental United States with a majority-Asian population. Yet Monterey Park’s growing citizenry didn’t embody a single sensibility. There were affluent professionals from Taiwan and Hong Kong as well as longtime residents of Los Angeles’s Chinatown looking to move to the suburbs. As Tim Fong, a sociologist who has studied Monterey Park, observed in the Chicago Tribune, “The Chinese jumped a step. They didn’t play the (slow) assimilation game.” This isn’t to say these new immigrants rejected assimilation. They were just becoming something entirely new.

Monterey Park became the first suburb that Chinese people would drive for hours to visit and eat in, for the same reasons earlier generations of immigrants had sought out the nearest urban Chinatown. And the changing population and the wealth they brought with them created new opportunities for all sorts of businesspeople, especially aspiring restaurateurs. The typical Chinese American restaurant made saucy, ostentatiously deep-fried concessions to mainstream appetites, leading to the ever-present rumor that most establishments had “secret menus” meant for more discerning eaters. It might be more accurate to say that most chefs at Chinese restaurants are more versatile than they initially let on—either that or families like mine possess Jedi-level powers of off-the-menu persuasion. But in a place like Monterey Park, the pressure to appeal to non-Chinese appetites disappeared. The concept of “mainstream” no longer held; neck bones and chicken feet and pork bellies and various gelatinous things could pay the bills and then some.

by Hua Hsu, Lucky Peach |  Read more:
Image: Yina Kim

The Secrets of the Wave Pilots

At 0400, three miles above the Pacific seafloor, the searchlight of a power boat swept through a warm June night last year, looking for a second boat, a sailing canoe. The captain of the canoe, Alson Kelen, potentially the world’s last-ever apprentice in the ancient art of wave-piloting, was trying to reach Aur, an atoll in the Marshall Islands, without the aid of a GPS device or any other way-finding instrument. If successful, he would prove that one of the most sophisticated navigational techniques ever developed still existed and, he hoped, inspire efforts to save it from extinction. Monitoring his progress from the power boat were an unlikely trio of Western scientists — an anthropologist, a physicist and an oceanographer — who were hoping his journey might help them explain how wave pilots, in defiance of the dizzying complexities of fluid dynamics, detect direction and proximity to land. More broadly, they wondered if watching him sail, in the context of growing concerns about the neurological effects of navigation-by-smartphone, would yield hints about how our orienteering skills influence our sense of place, our sense of home, even our sense of self.

When the boats set out in the afternoon from Majuro, the capital of the Marshall Islands, Kelen’s plan was to sail through the night and approach Aur at daybreak, to avoid crashing into its reef in the dark. But around sundown, the wind picked up and the waves grew higher and rounder, sorely testing both the scientists’ powers of observation and the structural integrity of the canoe. Through the salt-streaked windshield of the power boat, the anthropologist, Joseph Genz, took mental field notes — the spotlighted whitecaps, the position of Polaris, his grip on the cabin handrail — while he waited for Kelen to radio in his location or, rather, what he thought his location was.

The Marshalls provide a crucible for navigation: 70 square miles of land, total, comprising five islands and 29 atolls, rings of coral islets that grew up around the rims of underwater volcanoes millions of years ago and now encircle gentle lagoons. These green dots and doughnuts make up two parallel north-south chains, separated from their nearest neighbors by a hundred miles on average. Swells generated by distant storms near Alaska, Antarctica, California and Indonesia travel thousands of miles to these low-lying spits of sand. When they hit, part of their energy is reflected back out to sea in arcs, like sound waves emanating from a speaker; another part curls around the atoll or island and creates a confused chop in its lee. Wave-piloting is the art of reading — by feel and by sight — these and other patterns. Detecting the minute differences in what, to an untutored eye, looks no more meaningful than a washing-machine cycle allows a ri-meto, a person of the sea in Marshallese, to determine where the nearest solid ground is — and how far off it lies — long before it is visible.
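[ed. For the curious, here is a toy rendering of the physics in that paragraph. It is my own sketch, not anything from the article: it superposes an incoming plane-wave swell with a weaker circular wave reflected from an atoll and samples the combined surface along a transect. Every parameter (wavelength, reflection coefficient, positions) is made up for illustration.]

```python
import numpy as np

# Illustrative only: an incident swell traveling in +x, plus an arc
# reflected back out to sea from an atoll at the origin. The reflected
# wave is weaker and spreads, falling off roughly as 1/sqrt(distance).
wavelength = 100.0            # meters, arbitrary
k = 2 * np.pi / wavelength    # wavenumber
reflect_coeff = 0.3           # fraction of energy bounced back, arbitrary

def surface_height(x, y):
    """Combined sea-surface height at one instant."""
    incident = np.cos(k * x)
    r = np.hypot(x, y)        # distance from the atoll
    reflected = reflect_coeff * np.cos(k * r) / np.sqrt(np.maximum(r, 1.0))
    return incident + reflected

# Approaching the atoll from up-swell: the interference "beat" between
# incident and reflected waves strengthens; this is, very roughly, the
# kind of gradient a ri-meto reads to sense land long before seeing it.
for x in np.linspace(-2000.0, -200.0, 7):
    print(f"x = {x:7.0f} m   height = {surface_height(x, 50.0):+.3f}")
```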

In the 16th century, Ferdinand Magellan, searching for a new route to the nutmeg and cloves of the Spice Islands, sailed through the Pacific Ocean and named it “the peaceful sea” before he was stabbed to death in the Philippines. Only 18 of his 270 men survived the trip. When subsequent explorers, despite similar travails, managed to make landfall on the countless islands sprinkled across this expanse, they were surprised to find inhabitants with nary a galleon, compass or chart. God had created them there, the explorers hypothesized, or perhaps the islands were the remains of a sunken continent. As late as the 1960s, Western scholars still insisted that indigenous methods of navigating by stars, sun, wind and waves were not nearly accurate enough, nor indigenous boats seaworthy enough, to have reached these tiny habitats on purpose.

Archaeological and DNA evidence (and replica voyages) have since proved that the Pacific islands were settled intentionally — by descendants of the first humans to venture out of sight of land, beginning some 60,000 years ago, from Southeast Asia to the Solomon Islands. They reached the Marshall Islands about 2,000 years ago. The geography of the archipelago that made wave-piloting possible also made it indispensable as the sole means of collecting food, trading goods, waging war and locating unrelated sexual partners. Chiefs threatened to kill anyone who revealed navigational knowledge without permission. In order to become a ri-meto, you had to be trained by a ri-meto and then pass a voyaging test, devised by your chief, on the first try. As colonizers from Europe introduced easier ways to get around, the training of ri-metos declined and became restricted primarily to an outlying atoll called Rongelap, where a shallow circular reef, set between ocean and lagoon, became the site of a small wave-piloting school.

In 1954, an American hydrogen-bomb test less than a hundred miles away rendered Rongelap uninhabitable. Over the next decades, no new ri-metos were recognized; when the last well-known one died in 2003, he left a 55-year-old cargo-ship captain named Korent Joel, who had trained at Rongelap as a boy, the effective custodian of their people’s navigational secrets. Because of the radioactive fallout, Joel had not taken his voyaging test and thus was not a true ri-meto. But fearing that the knowledge might die with him, he asked for and received historic dispensation from his chief to train his younger cousin, Alson Kelen, as a wave pilot.

Now, in the lurching cabin of the power boat, Genz worried about whether Kelen knew what he was doing. Because Kelen was not a ri-meto, social mores forced him to insist that he was not navigating but kajjidede, or guessing. The sea was so rough tonight, Genz thought, that even for Joel, picking out a route would be like trying to hear a whisper in a gale. A voyage with this level of navigational difficulty had never been undertaken by anyone who was not a ri-meto or taking his test to become one. Genz steeled himself for the possibility that he might have to intervene for safety’s sake, even if this was the best chance that he and his colleagues might ever get to unravel the scientific mysteries of wave-piloting — and Kelen’s best chance to rally support for preserving it. Organizing this trip had cost $72,000 in research grants, a fortune in the Marshalls.

The radio crackled. “Jebro, Jebro, this is Jitdam,” Kelen said. “Do you copy? Over.”

Genz swallowed. The cabin’s confines, together with the boat’s diesel odors, did nothing to allay his motion sickness. “Copy that,” he said. “Do you know where you are?”

Though mankind has managed to navigate itself across the globe and into outer space, it has done so in defiance of our innate way-finding capacities (not to mention survival instincts), which are still those of forest-dwelling homebodies. Other species use far more sophisticated cognitive methods to orient themselves. Dung beetles follow the Milky Way; the Cataglyphis desert ant dead-reckons by counting its paces; monarch butterflies, on their thousand-mile, multigenerational flight from Mexico to the Rocky Mountains, calculate due north using the position of the sun, which requires accounting for the time of day, the day of the year and latitude; honeybees, newts, spiny lobsters, sea turtles and many others read magnetic fields. Last year, the fact of a “magnetic sense” was confirmed when Russian scientists put reed warblers in a cage that simulated different magnetic locations and found that the warblers always tried to fly “home” relative to whatever the programmed coordinates were. Precisely how the warblers detected these coordinates remains unclear. As does, for another example, the uncanny capacity of godwits to hatch from their eggs in Alaska and, alone, without ever stopping, take off for French Polynesia. Clearly they and other long-distance migrants inherit a mental map and the ability to constantly recalibrate it. What it looks like in their mind’s eye, however, and how it is maintained day and night, across thousands of miles, is still a mystery. (...)
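[ed. The ant's trick, dead reckoning by pace counting, is simple enough to sketch in a few lines. This is my illustration, not the article's: keep a running sum of displacements from headings and step counts, and the way home is just the negation of that sum.]

```python
import math

STEP_LENGTH = 0.01  # meters per pace; an arbitrary stand-in for an ant's gait

def integrate_path(legs):
    """Path integration: legs is a list of (heading_degrees, step_count).
    Returns the (x, y) displacement from the starting point (the nest)."""
    x = y = 0.0
    for heading_deg, steps in legs:
        d = steps * STEP_LENGTH
        x += d * math.cos(math.radians(heading_deg))
        y += d * math.sin(math.radians(heading_deg))
    return x, y

# A meandering outbound foraging trip...
x, y = integrate_path([(0, 500), (90, 300), (45, 200), (180, 100)])

# ...collapses into a single straight homing leg: reverse bearing, direct distance.
home_bearing = math.degrees(math.atan2(-y, -x)) % 360
home_distance = math.hypot(x, y)
print(f"home: bearing {home_bearing:.1f} deg, distance {home_distance:.2f} m")
```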

Genz met Alson Kelen and Korent Joel in Majuro in 2005, when Genz was 28. A soft-spoken, freckled Wisconsinite and former Peace Corps volunteer who grew up sailing with his father, Genz was then studying for a doctorate in anthropology at the University of Hawaii. His adviser there, Ben Finney, was an anthropologist who helped lead the voyage of Hokulea, a replica Polynesian sailing canoe, from Hawaii to Tahiti and back in 1976; the success of the trip, which involved no modern instrumentation and was meant to prove the efficacy of indigenous ships and navigational methods, stirred a resurgence of native Hawaiian language, music, hula and crafts. Joel and Kelen dreamed of a similar revival for Marshallese sailing — the only way, they figured, for wave-piloting to endure — and contacted Finney for guidance. But Finney was nearing retirement, so he suggested that Genz go in his stead. With their chief’s blessing, Joel and Kelen offered Genz rare access, with one provision: He would not learn wave-piloting himself; he would simply document Kelen’s training.

Joel immediately asked Genz to bring scientists to the Marshalls who could help Joel understand the mechanics of the waves he knew only by feel — especially one called di lep, or backbone, the foundation of wave-piloting, which (in ri-meto lore) ran between atolls like a road. Joel’s grandfather had taught him to feel the di lep at the Rongelap reef: He would lie on his back in a canoe, blindfolded, while the old man dragged him around the coral, letting him experience how it changed the movement of the waves.

But when Joel took Genz out in the Pacific on borrowed yachts and told him they were encountering the di lep, Genz couldn’t feel it. Kelen said he couldn’t, either. When oceanographers from the University of Hawaii came to look for it, their equipment failed to detect it. The idea of a wave-road between islands, they told Genz, made no sense.

Privately, Genz began to fear that the di lep was imaginary, that wave-piloting was already extinct. On one research trip in 2006, when Korent Joel went below deck to take a nap, Genz changed the yacht’s course. When Joel awoke, Genz kept him away from the GPS device, and to the relief of them both, Joel directed the boat toward land. Later, Joel also passed his ri-meto test, judged by his chief, with Genz and Kelen crewing.

Worlds away, the physicist John Huth, a worrier by nature, had become convinced that preserving mankind’s ability to way-find without technology was not just an abstract mental exercise but also a matter of life and death. In 2003, while Huth was kayaking alone in Nantucket Sound, fog descended, and he — spring-loaded and boyish, with a near-photographic memory — found his way home using local landmarks, the wind and the direction of the swells. Later, he learned that two young undergraduates, out paddling in the same fog, had become disoriented and drowned. This prompted him to begin teaching a class on primitive navigation techniques. When Huth met Genz at an academic conference in 2012 and described the methodology of his search for the Higgs boson and dark energy — subtracting dominant wave signals from a field, until a much subtler signal appears underneath — Genz told him about the di lep, and it captured Huth’s imagination. If it was real, and if it really ran back and forth between islands, its behavior was unknown to physics and would require a supercomputer to model. That a person might be able to sense it bodily amid the cacophony generated by other ocean phenomena was astonishing.
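[ed. "Subtract the dominant signals until the subtle one appears" is a standard signal-processing move, and easy to caricature in a few lines. This is my sketch with made-up numbers, not Huth's actual method: synthesize a sea-surface record dominated by two strong swells plus a faint third component, zero out the biggest Fourier peaks, and see what survives.]

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 600.0, 4096)                  # a 10-minute record
record = (2.0 * np.sin(2 * np.pi * 0.10 * t)       # dominant swell
          + 1.5 * np.sin(2 * np.pi * 0.07 * t)     # second swell
          + 0.1 * np.sin(2 * np.pi * 0.03 * t)     # faint "hidden" component
          + 0.05 * rng.standard_normal(t.size))    # instrument noise

spectrum = np.fft.rfft(record)
freqs = np.fft.rfftfreq(t.size, d=t[1] - t[0])

# Remove the two largest peaks (the dominant swells), ignoring the DC bin.
power = np.abs(spectrum)
power[0] = 0.0
for _ in range(2):
    peak = int(np.argmax(power))
    spectrum[peak - 2:peak + 3] = 0.0              # small band around the peak
    power[peak - 2:peak + 3] = 0.0

residual = np.fft.irfft(spectrum, n=t.size)
leftover = freqs[1:][np.argmax(np.abs(np.fft.rfft(residual))[1:])]
print(f"strongest frequency remaining: {leftover:.3f} Hz")  # ~0.030, the faint signal
```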

Huth began creating possible di lep simulations in his free time and recruited the help of Gerbrant van Vledder, the trio’s oceanographer. Initially, the most puzzling detail of Genz’s translation of Joel’s description was Joel’s claim that the di lep connected each atoll and island to all 33 others. That would yield a trillion trillion paths, far too many for even the most adept wave pilot to memorize. Most of what we know about ocean waves and currents — including what will happen to coastlines as climate change leads to higher sea levels (of special concern to the low-lying Netherlands and Marshall Islands) — comes from models that use global wind and bathymetry data to simulate what wave patterns probably look like at a given place and time. Our understanding of wave mechanics, on which those models are based, is wildly incomplete. To improve them, experts must constantly check their assumptions with measurements and observations. Perhaps, Huth and van Vledder thought, there were di leps in every ocean, invisible roads that no one was seeing because they didn’t know to look.
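[ed. One concrete reason bathymetry data matters so much to those wave models: in linear wave theory, a swell's speed depends on both its wavelength and the water depth beneath it, so waves slow, steepen and refract as the bottom rises toward an atoll. A minimal sketch using the standard dispersion relation; the numbers are mine, not the researchers'.]

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def phase_speed(wavelength_m, depth_m):
    """Linear-theory phase speed, c = sqrt((g/k) * tanh(k*h)),
    valid from the deep ocean through shoaling onto a reef flat."""
    k = 2 * math.pi / wavelength_m
    return math.sqrt((G / k) * math.tanh(k * depth_m))

# A 150 m swell slowing as the seafloor rises toward an atoll:
for depth in (4000, 500, 50, 10, 2):
    print(f"depth {depth:4d} m -> c = {phase_speed(150.0, depth):5.1f} m/s")
```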

by Kim Tingley, NY Times |  Read more:
Image: Mark Peterson/Redux

Under the Crushing Weight of the Tuscan Sun

I have sat on Tuscan-brown sofas surrounded by Tuscan-yellow walls, lounged on Tuscan patios made with Tuscan pavers, surrounded by Tuscan landscaping. I have stood barefoot on Tuscan bathroom tiles, washing my hands under Tuscan faucets after having used Tuscan toilets. I have eaten, sometimes on Tuscan dinnerware, a Tuscan Chicken on Ciabatta from Wendy’s, a Tuscan Chicken Melt from Subway, the $6.99 Tuscan Duo at Olive Garden, and Tuscan Hummus from California Pizza Kitchen. Recently, I watched my friend fill his dog’s bowl with Beneful Tuscan Style Medley dog food. This barely merited a raised eyebrow; I’d already been guilty of feeding my cat Fancy Feast’s White Meat Chicken Tuscany. Why deprive our pets of the pleasures of Tuscan living?

In “Tuscan Leather,” from 2013, Drake raps, “Just give it time, we’ll see who’s still around a decade from now.” Whoever among us is still here, it seems certain that we will still be living with the insidious and inescapable word “Tuscan,” used as marketing adjective, cultural signifier, life-style choice. And while we may never escape our Tuscan lust, we at least know who’s to blame: Frances Mayes, the author of the memoir “Under the Tuscan Sun,” which recounts her experience restoring an abandoned villa called Bramasole in the Tuscan countryside. The book, published in 1996, spent more than two and a half years on the Times best-seller list and, in 2003, inspired a hot mess of a film adaptation starring Diane Lane. In the intervening years, Mayes has continued to put out Tuscan-themed books at a remarkable rate—“Bella Tuscany,” “Bringing Tuscany Home,” “Every Day in Tuscany,” “The Tuscan Sun Cookbook”—as well as her own line of Tuscan wines, olive oils, and even furniture. In so doing, she has managed to turn a region of Italy into a shorthand for a certain kind of bourgeois luxury and good taste. A savvy M.B.A. student should do a case study.

I feel sheepish admitting this, but I have a longtime love-hate relationship with “Under the Tuscan Sun.” Since I first read the book, in the nineties, when I was in my twenties, its success has haunted me, teased me, and tortured me as I’ve forged a career as a food and travel writer who occasionally does stories about Italy. I could understand the appeal of Mayes’s memoir to, for instance, my mother, who loves nothing more than to plot the construction of a new dream house. “I come from a long line of women who open their handbags and take out swatches of upholstery,” Mayes writes, “colored squares of bathroom tile, seven shades of paint samples, and strips of flowered wallpaper.” She may as well be speaking directly to my mom and many of her friends. But I was more puzzled by the people my own age who suddenly turned Tuscan crazy—drizzling extra-virgin olive oil on everything, mispronouncing “bruschetta,” pretending to love white beans. In 2002, I was asked to officiate a wedding of family friends in Tuscany, where a few dozen American guests stayed in a fourteenth-century villa that had once been a convent. The villa’s owners were fussy yuppies from Milan who had a long, scolding list of house rules—yet, when we inquired why the electricity went out every day from 2 P.M. to 8 P.M., they shrugged and told us we were uptight Americans. This irritating mix of fussy, casual, and condescending reminded me of the self-satisfied tone of “Under the Tuscan Sun.” I began to despise the villa owners so much that when the brother-in-law of the bride and groom got drunk on Campari and vomited on a fourteenth-century fresco, causing more than a thousand euros in damage, I had a good, long private laugh.

Much of my hang-up, let’s be clear, had to do with my own jealousy. If only I could afford a lovely villa, I certainly wouldn’t have been so smug! I would think. I would have lived more authentically! But beyond Italy and villas and personal gripes, Mayes’s book cast a long shadow over my generation of food and travel writers. As a young journalist, I quickly realized that editors were not going to give me cushy travel assignments to Italy, and so I began veering slightly off the beaten path, going to Iceland, Nicaragua, Portugal, and other countries that aren’t Italy, in order to sell articles. But the spectre of Mayes found me anyway. Once, in the early two-thousands, when I was trying to sell a book about Iceland, a publisher told me, “You know what you should do? You should buy a house in Iceland. And then fix it up, and live there, and write something like ‘Under the Icelandic Sun.’ ” I never sold a book on Iceland, nor did I sell my other pitches from that period, which were essentially “Under the Portuguese Sun” and “Under the Nicaraguan Sun.” By the late aughts, the mere mention of Mayes’s memoir made me angry. At one point I lashed out against the book in print, calling it “treacly” in an essay that was published days before I encountered Frances Mayes at a wine writers’ conference. I was assigned to sit across from her at the opening reception. She shook my hand and said, “I read your piece,” then ignored me for the rest of the dinner.

by Jason Wilson, New Yorker |  Read more:
Image: Touchstone/Everett