Wednesday, December 7, 2016

How the Internet Unleashed a Burst of Cartooning Creativity


In 1989 Bill Watterson, the writer of “Calvin and Hobbes”, a brilliant comic strip about a six-year-old child and his stuffed tiger, denounced his industry. In a searing lecture, he attacked bland, predictable comics, churned out by profit-driven syndicates. Cartooning, said Mr Watterson, “will never be more than a cheap, brainless commodity until it is published differently.”

In 2012 he is finally getting his way. As the newspaper industry continues its decline, the funnies pages have decoupled from print. Instead of working for huge syndicates, or for censored newspapers with touchy editors, cartoonists are now free to create whatever they want. Whether it is cutting satire about Chinese politics, or a simple joke about being a dog, everything can win an audience on the internet.

This burst of new life comes even as cartoons seem to be in terminal decline. Punch, once a fierce political satire magazine whose cartoons feature in almost every British history textbook, finally closed its doors in 2002. The edgier Viz magazine, which sold a million copies an issue in the early 1990s, now sells 65,000. In the United States, of the sprawling EC Comics stable, only Mad magazine remains, its circulation down from 2.1m in 1974 to 180,000. Meanwhile, the American newspaper industry, home of the cartoon strip, now makes less in advertising revenue than at any time since the 1950s. (...)

Triumph of the nerds

The decline of newspapers and the rise of the internet have broken that system. Newspapers no longer have the money to pay big bucks to cartoonists, and the web means anybody can get published. Cartoonists who want to make their name no longer send sketches to syndicates or approach newspapers: they simply set up websites and spread the word on Twitter and Facebook. Randall Munroe, the creator of “XKCD”, left a job at NASA to write his stick-men strip, full of science and technology jokes (see above and below). Kate Beaton, a Canadian artist who draws “Hark! A Vagrant”, sketched her cartoons between shifts while working in a museum. Matthew Inman created his comic “The Oatmeal” by accident while trying to promote a dating website he built to escape his job as a computer coder.

The typical format for a web comic was established a decade or more ago, says Zach Weiner, the writer of “Saturday Morning Breakfast Cereal”, or “SMBC” (below). It has not changed much since. Most cartoonists update on a regular basis — daily, or every other day — and run in sequence. “I think that’s purely because that’s what the old newspapers used to do,” says Mr Weiner. But whereas many newspaper comics tried to appeal to as many people as possible, often with lame, fairly universal jokes, online cartoonists are free to be experimental, in both content and form.

Ryan North uses the same drawing every day for his “Dinosaur Comics” — the joke is in the dialogue, which he writes fresh every weekday, and the absurdity of dinosaurs discussing Shakespeare and dating. “SMBC” flicks between one-panel gags and extremely long, elaborate stories. Fred Gallagher, the writer of “Megatokyo”, has created an entire soap-opera-like world, drawn in a beautiful Japanese manga style, accessible only to those who follow the saga regularly. Mr Munroe’s “XKCD” is usually a simple strip comic, but recently featured one explorable comic, entitled “Click and Drag”, which, if printed at high resolution, would be 46 feet wide.

Perhaps thanks to the technical skills needed to succeed, web cartoonists tend to be young — few are over 30 — well-educated and extremely geeky.

by The Economist, Medium |  Read more:
Image: XKCD

Plum Crazy

I arrive at the SeaShell Motel in Naples around midnight. After an unexpected credit-shaming at the Budget rental car counter in the Fort Lauderdale airport, I’ve hauled ass through the Big Cypress Swamp in a downpour, enduring a static-ridden NPR station and the onset of McDonald’s farts, to find my late check-in instructions aggressively taped to the office door, as if by somebody familiar with Saran-wrapping frat boys to pine trees. I push open the door to my room, recalling one Travelocity commenter’s description of the place: scary at first. But it’s not scary at all; the room is spacious and clean. It’s just that a security light shines in the window like the angel of death all night, making it impossible to sleep without suffocating your face with pillows that another Travelocity commenter accurately described as flat.

In the morning, the receptionist asks, “Did you get your envelope okay? I was so scared it would fall off.” Rather, she yells this to me over an Eastern European couple who are fighting about a botched room reservation, a situation that turns out to be of the husband’s own doing, much like his unbuttoned floral shirt and plaid swim trunks combo. They may be the type of people who go on a beach vacation but never leave the motel pool. Not like myself—I’ve come on a beach vacation to hang out with plant nerds at the International Plumeria Conference.

Plumeria, also known as frangipani, is a tropical flowering tree most people associate with Hawaiian leis. The fragrant flowers usually have five petals, and, in the wild, most species of plumeria have white blooms with a yellow center. In nurseries and backyards, though, flowers of the species Plumeria rubra vary in color, size, and scent, with growers giving them fanciful names such as Fruit Salad and Vera Cruz Rose. A catalog of blooms—the industry leader is Jungle Jack’s out of San Diego County—might sound like a strip club roster if heard out of context: Essence, Temptation, Fantasia, Xquisite, Mystique. The plants are native to Mexico, South America, and the Caribbean, and weren’t brought to Hawaii until the 1860s, about two hundred years after they were first classified by the French botanist Charles Plumier, the original plumeria addict.

The Plumeria Society of America was founded in Houston in 1979 by three women who aimed to spread interest in the plant, then familiar only to those who’d vacationed in Hawaii. One of the women was a famous singer named Nancy Ames, but it was another, Elizabeth Thornton, the Queen of Plumeria, who was known for her breathtaking hybrids like Texas Aggie and Thornton’s Lemon Drop.

Plumeria rubra alone now consists of close to four thousand cultivars (when PSA registration began in 1989 there were just fifty-one). Celadine is commonplace in many cemeteries, hence its nickname: Graveyard Yellow. There is no such thing as a blue plumeria, or a green or a black, though people keep buying color frauds on Amazon and eBay. Depending on whom you ask, there are now legit purples: the Metallica, the Purple Jack. There are reds that turn almost black in intense, scorching heat: Black Widow, Black Tiger. There is a bloom called Plum Crazy, a deep purple and red with upturned edges and slithery, eel-like veins, devastatingly beautiful. In the mid-aughts famed grower Jim Little released a vibrant orange-gold plumeria in honor of Don Ho. It is said that Thornton, a University of Texas grad, spent her lifetime hoping to cultivate a burnt-orange bloom from seed, but she never did.

While researching her best seller The Orchid Thief, Susan Orlean came upon plumerias in South Florida but didn’t know them by name: “Along the path there were enormous tropical trees with pimply bark and flowers the color of bubble gum, the kind of trees you would draw in a tropical cartoon.” Trees for perpetual adolescence. Trees for people like me.

The International Plumeria Conference takes place every ten years. The last time it was held—the inaugural convention, in Galveston—I was twenty-five and my experience with houseplants ran toward half-dead crotons and dank nightstand weed. My dad got me into plumeria. He’s an old surfer with a dozen trees in Satellite Beach, Florida, including a light pink bloom that he keeps calling Surfqueeny after my first AOL screen name. The Plumeria Society of America would identify it as a NOID—pronounced like the Domino’s mascot of yore and simply meaning “no ID,” origin unknown. Dad gave me a Kauka Wilder variety when I left grad school in North Carolina for New Orleans nine years ago. I did just about everything to kill it. The plant didn’t bloom until it was ten feet tall—the flower like a pop star’s fake nails, with long, narrow petals in hues of bright yellow and fuchsia—an umbrella with a clunky nine-foot handle. These days I have eight plumerias and I consider myself fairly obsessed, which is why I’m here in South Florida this May weekend: to convene with the especially obsessed. (...)

The hills are alive with the sound of plumeria freaks saying I have five of these, and let me tell you, they’re the gift that keeps on giving, or I tried to root this one and it rotted on me. Irish Spring soap is strung from the trees, which Hetty tells us is to prevent deer from eating the flowers. Apparently plumeria are very tasty to certain animals, the American bulldog, for instance. “Mine used to eat the whole dang plant,” Terry says. Plumeria’s many known enemies include wild hogs, fungi, spider mites, and borer beetles. In Australia, there’s an endangered turkey that’s known to dig up and shred the plants to make its enormous sexing heaps.

Dennis, an Aussie grower who pronounces flowers flarers, has brought us all twirlers, a contraption he’s invented consisting of fishing line glued to a tiny toothpick-size stick, which various people on the hill are now using to feel up the insides of the flowers, thereby encouraging the anthers to drop their pollen and produce a seedpod. This is called hand-pollination, and it can create new types of plumeria, since seeds aren’t always true to the mother plant. Cross-pollination is the surest bet for a new type of bloom, but it requires a scalpel and a surgical method first discovered in the 1950s by hybridization pioneer Bill Moragne, who named dozens of cultivars in honor of his family: the Cyndi Moragne, the Edi Moragne, and the crowd-pleasing Jeannie Moragne.

There are trees out here with seedpods already on them, which is something to behold—they resemble giant glossy beans or overripe bananas or anorexic eggplants conjoined at the tip. In the end, though, they all dry out and turn the same crispy brown like a giant dead roach that splits open to reveal a bunch of smaller roachlike seeds inside. It’s kind of gross, but seedlings are the only way to get a new, undiscovered bloom.

“Let’s talk about seeds,” Mike, the emcee, says when we reconvene after lunch. “What’s the best way to store them?”

“Prescription bottle!” the audience answers.

“Yes, we have a lot of those around, don’t we?”

by Gwendolyn Knapp, Oxford American | Read more:
Image: Peter Rowley

What North Korean Defectors Think of North Korea


[ed. Fascinating. If you're at all curious about what life is like in North Korea, take a moment to watch this. I'd also recommend reading The Orphan Master's Son by Adam Johnson.]

Tuesday, December 6, 2016

Amazon Plans to *Disrupt* the Bodega Industry with "Amazon Go"


Thank GOD. I was just about to write a 20,000-word thinkin' piece about how dangerously close human beings have been getting to their food sources lately. But luckily, Seattle's own Amazon dot com has stepped in and saved me from my task. Today, the mega-retailer announced the launch of Amazon Go, a convenience store designed to end the thousand inconveniences of convenience stores.

Forget for a moment that weird patriarchy-perpetuating cupcake scene and just repeat the following words to yourself: "Computer vision. Deep Learning Algorithms. Sensor fusion."

With this "just walk out" technology, Amazon is poised to remove the "service" from service industries, and it can't happen fast enough for this on-the-go guy. Talking to butchers and bakers and the people who work at the deli? UGH. Catching five minutes of a soccer game with the guy who owns the bodega down the street? FUCK THAT. I want my food, I want it wrapped in plastic, I want to pay for it with my phone, and I want all that so I can spend more time crying at my desk.

Speaking of food—what's on offer? The Seattle Times tells us:
The store features ready-to-eat meals and snacks prepared by on-site chefs or local bakeries. There are also essentials such as bread and milk, as well as high-end cheese and chocolate. 
Amazon says there will be well-known brands as well as 'special finds we’re excited to introduce to customers.' That includes an 'Amazon Meal Kit,' which contains ingredients needed to make a meal for two in 30 minutes.
Okay fine. If this convenience store helps tech workers with demanding jobs eat better / more locally, then bully for them, I guess.

But one small thing. If these smartstores or phonemarts or Amazones or whatever really cool name we start calling them begin to proliferate, guess who might be disproportionately inconvenienced? According to the Ethnic Business Coalition, immigrants and refugees own 53 percent of the country's grocery stores. And, as the Times notes, "The Bureau of Labor Statistics said in a report this year that cashiers were the second-largest occupation, with 3.5 million employed in the U.S." So, you know. Them.

by Rich Smith, The Stranger |  Read more:
Image: Amazon
[ed. I think retail salespeople (4.5 mil.) are the most common occupation in the U.S., right before cashiers (3.3 mil), so if this catches on (and no reason to believe it won't) this could actually be a two-fer in terms of wiping out a large segment of the working population. Not to mention self-driving trucks and truck drivers (1.6 mil). Statistics via:]

A photograph taken during the Apollo 16 mission, of a family portrait placed on the lunar surface. On the back, astronaut Charles “Charlie” Duke wrote, “This is the family of astronaut Charlie Duke from planet Earth who landed on the Moon on April 20, 1972.” The image appeared in The Moon: 1968–1972, published in October by T. Adler Books. Courtesy T. Adler Books and NASA, Johnson Space Center & NASA History Division

via:

Mick Manning, Mackerel
via:

Monday, December 5, 2016

Google, Democracy and the Truth About Internet Search

Google is search. It’s the verb, to Google. It’s what we all do, all the time, whenever we want to know anything. We Google it. The site handles at least 63,000 searches a second, 5.5bn a day. Its mission as a company, the one-line overview that has informed the company since its foundation and is still the banner headline on its corporate website today, is to “organise the world’s information and make it universally accessible and useful”. It strives to give you the best, most relevant results. And in this instance the third-best, most relevant result to the search query “are Jews… ” is a link to an article from stormfront.org, a neo-Nazi website. The fifth is a YouTube video: “Why the Jews are Evil. Why we are against them.”

The sixth is from Yahoo Answers: “Why are Jews so evil?” The seventh result is: “Jews are demonic souls from a different world.” And the 10th is from jesus-is-saviour.com: “Judaism is Satanic!”

There’s one result in the 10 that offers a different point of view. It’s a link to a rather dense, scholarly book review from thetabletmag.com, a Jewish magazine, with the unfortunately misleading headline: “Why Literally Everybody In the World Hates Jews.”

I feel like I’ve fallen down a wormhole, entered some parallel universe where black is white, and good is bad. Though later, I think that perhaps what I’ve actually done is scraped the topsoil off the surface of 2016 and found one of the underground springs that has been quietly nurturing it. It’s been there all the time, of course. Just a few keystrokes away… on our laptops, our tablets, our phones. This isn’t a secret Nazi cell lurking in the shadows. It’s hiding in plain sight. (...)

Google isn’t just a search engine, of course. Search was the foundation of the company but that was just the beginning. Alphabet, Google’s parent company, now has the greatest concentration of artificial intelligence experts in the world. It is expanding into healthcare, transportation, energy. It’s able to attract the world’s top computer scientists, physicists and engineers. It’s bought hundreds of start-ups, including Calico, whose stated mission is to “cure death” and DeepMind, which aims to “solve intelligence”.

And 20 years ago it didn’t even exist. When Tony Blair became prime minister, it wasn’t possible to Google him: the search engine had yet to be invented. The company was only founded in 1998 and Facebook didn’t appear until 2004. Google’s founders Sergey Brin and Larry Page are still only 43. Mark Zuckerberg of Facebook is 32. Everything they’ve done, the world they’ve remade, has been done in the blink of an eye.

But it seems the implications of the power and reach of these companies are only now seeping into the public consciousness. I ask Rebecca MacKinnon, director of the Ranking Digital Rights project at the New America Foundation, whether it was the recent furore over fake news that woke people up to the danger of ceding our rights as citizens to corporations. “It’s kind of weird right now,” she says, “because people are finally saying, ‘Gee, Facebook and Google really have a lot of power’ like it’s this big revelation. And it’s like, ‘D’oh.’”

MacKinnon has a particular expertise in how authoritarian governments adapt to the internet and bend it to their purposes. “China and Russia are a cautionary tale for us. I think what happens is that it goes back and forth. So during the Arab spring, it seemed like the good guys were further ahead. And now it seems like the bad guys are. Pro-democracy activists are using the internet more than ever but at the same time, the adversary has gotten so much more skilled.”

Last week Jonathan Albright, an assistant professor of communications at Elon University in North Carolina, published the first detailed research on how rightwing websites had spread their message. “I took a list of these fake news sites that was circulating, I had an initial list of 306 of them and I used a tool – like the one Google uses – to scrape them for links and then I mapped them. So I looked at where the links went – into YouTube and Facebook, and between each other, millions of them… and I just couldn’t believe what I was seeing.

“They have created a web that is bleeding through on to our web. This isn’t a conspiracy. There isn’t one person who’s created this. It’s a vast system of hundreds of different sites that are using all the same tricks that all websites use. They’re sending out thousands of links to other sites and together this has created a vast satellite system of rightwing news and propaganda that has completely surrounded the mainstream media system.”

He found 23,000 pages and 1.3m hyperlinks. “And Facebook is just the amplification device. When you look at it in 3D, it actually looks like a virus. And Facebook was just one of the hosts for the virus that helps it spread faster. You can see the New York Times in there and the Washington Post and then you can see how there’s a vast, vast network surrounding them. The best way of describing it is as an ecosystem. This really goes way beyond individual sites or individual stories. What this map shows is the distribution network and you can see that it’s surrounding and actually choking the mainstream news ecosystem.” (...)
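Albright's basic pipeline — scrape a seed list of sites for their outbound links, then map the resulting graph — can be sketched with Python's standard library alone. This is a simplified illustration: the toy pages, the domain-level aggregation, and the use of in-degree as a rough authority measure are my assumptions, not his actual toolchain.

```python
from collections import defaultdict
from html.parser import HTMLParser
from urllib.parse import urlparse

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag on a page."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

def link_graph(pages):
    """Map each domain to the set of other domains it links to.

    `pages` is {url: html_source}; a real crawl would fetch each
    seed site instead of using canned HTML.
    """
    graph = defaultdict(set)
    for url, html in pages.items():
        parser = LinkExtractor()
        parser.feed(html)
        src = urlparse(url).netloc
        for href in parser.links:
            dst = urlparse(href).netloc
            if dst and dst != src:
                graph[src].add(dst)
    return graph

# Toy corpus standing in for scraped pages (hypothetical domains).
pages = {
    "http://site-a.example": '<a href="http://site-b.example/x">x</a>'
                             '<a href="http://youtube.com/v">v</a>',
    "http://site-b.example": '<a href="http://site-a.example/">back</a>',
}
graph = link_graph(pages)

# In-degree -- how many seed sites link in -- is a crude proxy for the
# "authority" that search and social algorithms reward.
in_degree = defaultdict(int)
for src, dsts in graph.items():
    for dst in dsts:
        in_degree[dst] += 1
```

At his scale (306 seeds, 1.3m hyperlinks) the same adjacency data would be handed to a graph tool such as Gephi for the 3D visualization he describes.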

But it’s where it goes from here that’s truly frightening. I ask him how it can be stopped. “I don’t know. I’m not sure it can be. It’s a network. It’s far more powerful than any one actor.”

So, it’s almost got a life of its own? “Yes, and it’s learning. Every day, it’s getting stronger.”

The more people who search for information about Jews, the more people will see links to hate sites, and the more they click on those links (very few people click on to the second page of results) the more traffic the sites will get, the more links they will accrue and the more authoritative they will appear. This is an entirely circular knowledge economy that has only one outcome: an amplification of the message. Jews are evil. Women are evil. Islam must be destroyed. Hitler was one of the good guys.
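That circularity is easy to make concrete. Here is a deterministic toy model of the loop: the top result absorbs most clicks, clicks add traffic and links, and that signal decides the next ranking. Every number is invented for illustration; real ranking systems are vastly more complex.

```python
# Two hypothetical results start in a near-tie; "score" stands in for the
# ranking signal (links, traffic, engagement).
scores = {"hate-site": 101.0, "scholarly-review": 100.0}

winners = []
for week in range(20):
    ranked = sorted(scores, key=scores.get, reverse=True)
    # Very few people click past the first result (fewer still past page one).
    clicks = {ranked[0]: 90, ranked[1]: 10}
    for site, n in clicks.items():
        scores[site] += n * 0.1  # each click accrues more traffic and links
    winners.append(ranked[0])

gap = scores["hate-site"] - scores["scholarly-review"]
# The initial 1% edge compounds: the same site tops the ranking every week,
# and the gap between the two results keeps widening.
```

The point of the sketch is the lock-in: nothing about the content changes, yet the head start alone is enough to make the leading result look ever more "authoritative" to the algorithm.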

And the constellation of websites that Albright found – a sort of shadow internet – has another function. More than just spreading rightwing ideology, they are being used to track and monitor and influence anyone who comes across their content. “I scraped the trackers on these sites and I was absolutely dumbfounded. Every time someone likes one of these posts on Facebook or visits one of these websites, the scripts are then following you around the web. And this enables data-mining and influencing companies like Cambridge Analytica to precisely target individuals, to follow them around the web, and to send them highly personalised political messages. This is a propaganda machine. It’s targeting people individually to recruit them to an idea. It’s a level of social engineering that I’ve never seen before. They’re capturing people and then keeping them on an emotional leash and never letting them go.”

by Carole Cadwalladr, The Guardian | Read more:
Image: Jonathan Albright

China Amplifies Warning on Taiwan

[ed. Will someone please take this guy's Twitter account away? It's like giving a kid a box of matches in a room full of gasoline. Twitter bans users for hate speech and terrorism threats, so why not for national security? As I recall, the NSA had serious problems with Obama's beloved BlackBerry when he took office, so restrictions were imposed (and still are). Now we have this. And, don't even get me started on how this compares to the pearl-clutching over Hillary Clinton's non-secure email servers. See also: Twitter Founder Feels 'Complicated' About Donald Trump's Tweeting.]

China warned President-elect Donald J. Trump on Monday that he was risking a confrontation over Taiwan, even as Mr. Trump broadened the dispute with new messages on Twitter challenging Beijing’s trade policies and military activities in the South China Sea.

A front-page editorial in the overseas edition of People’s Daily, the official organ of the Communist Party of China, denounced Mr. Trump for speaking Friday with Taiwan’s president, Tsai Ing-wen, warning that “creating troubles for the China-U.S. relationship is creating troubles for the U.S. itself.” The rebuke was much tougher than the Chinese Foreign Ministry’s initial response to the phone call, which broke with decades of American diplomatic practice.

For his part, Mr. Trump seemed to take umbrage at the idea that he needed China’s approval to speak with Ms. Tsai. In two posts on Twitter, he wrote: “Did China ask us if it was O.K. to devalue their currency (making it hard for our companies to compete), heavily tax our products going into their country (the U.S. doesn’t tax them) or to build a massive military complex in the middle of the South China Sea? I don’t think so!” (...)

The Chinese government’s initial reaction to Mr. Trump’s call has already faced a torrent of criticism on social media from Chinese who complained it was not tough enough. The statement from Foreign Minister Wang Yi, which was relatively low-key given the unprecedented nature of the call, refrained from criticizing Mr. Trump, instead accusing Taiwan of playing a “little trick” on the American president-elect.

That offered Mr. Trump a face-saving way out of the imbroglio, and a chance to de-escalate. But the messages he posted on Twitter late Sunday stepped up the pressure on China’s leaders instead.

by Jane Perlez, NY Times |  Read more:
Image: USGS/Getty

Immune System, Unleashed by Cancer Therapies, Can Attack Organs

As Chuck Peal lay in a Waterbury, Conn., emergency room one Sunday in early September, doctors furiously tried to make sense of his symptoms. Mr. Peal, 61, appeared to be dying, and they were not sure why.

He slipped in and out of consciousness, his blood pressure plummeted, his potassium levels soared and his blood sugar spiked to 10 times the normal level. A doctor suspected a heart attack, but uncertainty left him urgently researching the situation on his phone.

This was not a heart attack. Mr. Peal’s body was attacking itself, a severe reaction by his immune system that was a side effect of a seemingly miraculous cancer treatment aimed at saving his life.

In the seven weeks prior, doctors at Yale had combated Mr. Peal’s melanoma with two of the most promising drugs in cancer treatment today. These medicines work by stimulating the immune system to attack cancer as ferociously as it does other threats, like viruses and bacteria.

These so-called immunotherapy drugs have been hailed as a breakthrough in cancer treatment, attracting billions of research dollars and offering new hope to patients out of options. But as their use grows, doctors are finding that they pose serious risks that stem from the very thing that makes them effective. An unleashed immune system can attack healthy, vital organs: notably the bowel, the liver and the lungs, but also the kidneys, the adrenal and pituitary glands, the pancreas and, in rare cases, the heart.

Doctors at Yale believe immunotherapy is causing a new type of acute-onset diabetes, with at least 17 cases there so far, Mr. Peal’s among them. In cancer clinics around the world, and in drug trials, myriad other side effects are showing up. Studies are finding that severe reactions occur nearly 20 percent of the time with certain drugs, and in more than half of patients when some drugs are used in combination.

Another recent paper found that 30 percent of patients experienced “interesting, rare or unexpected side effects,” with a quarter of the reactions described as severe, life-threatening or requiring hospitalization. Some patients have died, including five in recent months in clinical trials of a new immunotherapy drug being tested by Juno Therapeutics Inc.

The upshot, oncologists and immunologists say, is that the medical field must be more vigilant as these drugs soar in popularity. And they say more research is needed into who is likely to have reactions and how to treat them.

“We are playing with fire,” said Dr. John Timmerman, an oncologist and immunotherapy researcher at the University of California, Los Angeles, who recently lost a patient to side effects. The woman’s immunotherapy drugs had successfully “melted away” her cancer, he said, but some weeks later, she got cold and flulike symptoms and died in the emergency room from an inflammatory response that Dr. Timmerman described as “a mass riot, an uprising” of her immune system.

“We’ve heard about immunotherapy as God’s gift, the chosen elixir, the cure for cancer,” he said. “We haven’t heard much about the collateral damage.”

by Matt Richtel, NY Times |  Read more:
Image: NY Times 

Sunday, December 4, 2016

Ads Don't Work That Way

There's a meme, particularly virulent in educated circles, about how advertising works — how it sways and seduces us, coaxing us gently toward a purchase.

The meme goes something like this:
Rather than attempting to persuade us (via our rational, analytical minds), ads prey on our emotions. They work by creating positive associations between the advertised product and feelings like love, happiness, safety, and sexual confidence. These associations grow and deepen over time, making us feel favorably disposed toward the product and, ultimately, more likely to buy it.
Here we have a theory — a proposed mechanism — of how ads influence consumer behavior. Let's call it emotional inception, or just inception, after the movie of the same name, in which specialists implant ideas in other people's minds, subconsciously, by manipulating their dreams. In the case of advertising, however, dreams aren't the inception vector; rather, it's ideas and images, especially ones that convey potent emotions.

The label ("emotional inception") is my own, but the idea should be familiar enough. It's the model of how ads work made popular by Mad Men, and you can find similar accounts all across the web. A write-up at the Atlantic, for example — titled "Why Good Advertising Works (Even When You Think It Doesn't)" — says that
advertising rarely succeeds through argument or calls to action. Instead, it creates positive memories and feelings that influence our behavior over time to encourage us to buy something at a later date.
"The objective [of advertising]," the article continues, "is to seed positive ideas and memories that will attract you to the brand." (...)

This meme or theory about how ads work — by emotional inception — has become so ingrained, at least in my own model of the world, that it was something I always just took on faith, without ever really thinking about it. But now that I have stopped to think about it, I'm shocked at how irrational it makes us out to be. It suggests that human preferences can be changed with nothing more than a few arbitrary images. Even Pavlov's dogs weren't so easily manipulated: they actually received food after the arbitrary stimulus. If ads worked the same way — if a Coke employee approached you on the street offering you a free taste, then gave you a massage or handed you $5 — well then of course you'd learn to associate Coke with happiness.

But most ads are toothless and impotent, mere ink on paper or pixels on a screen. They can't feed you, hurt you, or keep you warm at night. So if a theory (like emotional inception) says that something as flat and passive as an ad can have such a strong effect on our behavior, we should hold that theory to a pretty high burden of proof.

Social scientists have a tool that they use to reason about phenomena like this: Homo economicus. This is an idealized model of human behavior, a hypothetical creature (/caricature) who makes perfectly "rational" decisions, where "rational" is a well-defined game-theoretic concept meaning (roughly) self-interested and utility-maximizing. In other words, a Homo economicus — of which no actual instances exist, but which every real human being approximates to a greater or lesser extent — will always, to the best of its available knowledge, make the decisions which maximize expected outcomes according to its own preferences.
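As a minimal sketch (the goods and utility numbers below are invented placeholders), a Homo economicus is just an argmax over a fixed utility function:

```python
# Fixed, exogenous preferences: nothing any advertiser does can edit this table.
UTILITY = {"coke": 3.0, "water": 5.0, "nothing": 0.0}

def choose(options, utility=UTILITY):
    """Pick the available option with the highest utility."""
    return max(options, key=lambda o: utility.get(o, float("-inf")))

choice = choose(["coke", "water", "nothing"])
# Only new information (price, availability, quality) can shift the choice;
# under this model, an ad pairing Coke with a pretty face changes nothing.
```

The inception theory, in these terms, claims that ads rewrite the UTILITY table itself, which is exactly what the fixed-preferences assumption forbids.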

If we (consumers) are swayed by emotional inception, then it seems we're violating this model of economic rationality. Specifically, H. economicus has fixed preferences or fixed goals — in technical jargon, a fixed "utility function." These are exogenous, unalterable by anyone — not the actor him- or herself and especially not third parties. But if inception actually works on us, then in fact our preferences and goals aren't just malleable, but easily malleable. All an advertiser needs to do is show a pretty face next to Product X, and suddenly we're filled with desire for it.

This is an exaggeration of course. More realistically, we need to see an ad multiple times before it eventually starts to rewrite our desires. But the point still stands: external agents can, without our permission, alter the contents of our minds and send us scampering off in service of goals that are not ours.

I know it's popular these days to underscore just how biased and irrational we are, as human creatures — and, to be fair, our minds are full of quirks. But in this case, the inception theory of advertising does the human mind a disservice. It portrays us as far less rational than we actually are. We may not conform to a model of perfect economic behavior, but neither are we puppets at the mercy of every Tom, Dick, and Harry with a billboard. We aren't that easily manipulated.

Ads, I will argue, don't work by emotional inception.

TRUTH IN ADVERTISING

Well then: how do they work?

Emotional inception is one (proposed) mechanism, but in fact there are many such mechanisms. And they're not mutually exclusive: a typical ad will employ a few different techniques at once — most of which are far more straightforward and above-board than emotional inception. Insofar as we respond to these other mechanisms, we're acting fully in accordance with the Homo economicus model of human behavior.

The guiding principle here is that these mechanisms impart legitimate, valuable information. Let's take a look at a few of them.

First, a lot of ads work simply by raising awareness. These ads are essentially telling customers, "FYI, product X exists. Here's how it works. It's available if you need it." Liquid Drano, for example, is a product that thrives on simple awareness, because drains don't clog all that frequently, and if you don't know what Liquid Drano is and what it does, you won't think to use it. But this mechanism is pervasive. Almost every ad works, at least in part, by informing or reminding customers about a product. And if it makes a memorable impression, even better.

Occasionally an ad will attempt overt persuasion, i.e., making an argument. It's naive to think that this is the most common or most powerful mechanism, but it does make an occasional appearance: "4/5 doctors prefer Camels" or "Verizon: America's largest 4G LTE network" and the like. Older ads were especially fond of this technique, but it seems to have fallen out of fashion when advertising hit its modern stride.

Perhaps the most important mechanism used by ads (across the ages) is making promises. These promises can be explicit, in the form of a guarantee or warranty, but are more often implicit, in the form of a brand image. When a company like Disney makes a name for itself as a purveyor of "family-friendly entertainment," customers come to rely on Disney to provide exactly that. If Disney were ever to violate this trust — by putting too much violence in its movies, for instance — consumers would get angry and (at the margin) buy fewer of Disney's products. So however the promise is conveyed, explicitly or implicitly, the result is that the brand becomes incentivized to fulfill it, and consumers respond (rationally) by buying more of the product, relative to brands that don't put themselves "out there" with similar promises.

There's one more honest ad mechanism to discuss. This one is termed (appropriately) honest signaling, and it's an instance of Marshall McLuhan's famous dictum, "The medium is the message." Here an ad conveys valuable information simply by existing — or more specifically, by existing in a very expensive location. A company that takes out a huge billboard in the middle of Times Square is announcing (subtextually), "We're willing to spend a lot of money on this product. We're committed to it. We're putting money where our mouths are."

Knowing (or sensing) how much money a company has thrown down for an ad campaign helps consumers distinguish between big, stable companies and smaller, struggling ones, or between products with a lot of internal support (from their parent companies) and products without such support. And this, in turn, gives the consumer confidence that the product is likely to be around for a while and to be well-supported. This is critical for complex products like software, electronics, and cars, which require ongoing support and maintenance, as well as for anything that requires a big ecosystem (e.g. Xbox). The same way an engagement ring is an honest token of a man's commitment to his future spouse, an expensive ad campaign is an honest token of a company's commitment to its product line.

So far so good. All of these ad mechanisms work by imparting valuable information. But as we're well aware, not every ad is so straightforward and above-board.

UNDERHANDED ADVERTISING

Consider this one for Corona:


Whatever's going on here, it's not about awareness, persuasion, promises, or honest signaling. In fact this image is almost completely devoid of information in the most literal sense. As Steven Pinker defines it, information is "a correlation between two things that is produced by a lawful process (as opposed to coming about by sheer chance)." In this case, the image is so arbitrary that it can't be conveying any information about Corona per se, as distinct from any other beer. Corona wasn't specifically designed for the beach, nor does 'beach-worthiness' emerge from any distinguishing features of Corona. You could swap in a Budweiser or Heineken and no "information" would be lost.

So instead of conveying information, this ad looks like a textbook case of emotional inception, i.e., creating an arbitrary, Pavlovian association between Corona and the idea of relaxation. The goal, presumably, is to seed us (viewers, consumers) with good memories, so that later, when shuffling down the beer aisle and spotting the Corona box, we'll get the inexplicable warm fuzzies, and then: purchase!

Except I don't think that's what's happening here. I don't think this Corona ad — or any of the thousands of others just like it — is attempting to get away with inception. Something else is going on; some other mechanism is at play.

Let's call this alternate mechanism cultural imprinting, for reasons that I hope will become clear. It's closely related to, but importantly distinct from, emotional inception. And my thesis today is that the effect of cultural imprinting is far larger than the effect of emotional inception (if such a thing even exists at all).

Cultural imprinting is the mechanism whereby an ad, rather than trying to change our minds individually, instead changes the landscape of cultural meanings — which in turn changes how we are perceived by others when we use a product. Whether you drink Corona or Heineken or Budweiser "says" something about you. But you aren't in control of that message; it just sits there, out in the world, having been imprinted on the broader culture by an ad campaign. It's then up to you to decide whether you want to align yourself with it. Do you want to be seen as a "chill" person? Then bring Corona to a party. Or maybe "chill" doesn't work for you, based on your individual social niche — and if so, your winning (EV-maximizing) move is to look for some other beer. But that's ok, because a successful ad campaign doesn't need to work on everybody. It just needs to work on net — by turning "Product X" into a more winning option, for a broader demographic, than it was before the campaign.

Of course cultural imprinting works better for some products than others. What a product "says" about you is only important insofar as other people will notice your use of it — i.e., if there's social or cultural signaling involved. But the class of products for which this is the case is surprisingly large. Beer, soft drinks, gum, every kind of food (think backyard barbecues). Restaurants, coffee shops, airlines. Cars, computers, clothing. Music, movies, and TV shows (think about the watercooler at work). Even household products send cultural signals, insofar as they'll be noticed when you invite friends over to your home. Any product enjoyed or discussed in the presence of your peers is ripe for cultural imprinting.

For each of these products, an ad campaign seeds everyone with a basic image or message. Then it simply steps back and waits — not for its emotional message to take root and grow within your brain, but rather for your social instincts to take over, and for you to decide to use the product (or not) based on whether you're comfortable with the kind of cultural signals its brand image allows you to send.

In this way, cultural imprinting relies on the principle of common knowledge. For a fact to be common knowledge among a group, it's not enough for everyone to know it. Everyone must also know that everyone else knows it — and know that they know that they know it... and so on.
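The distinction can be made concrete with a toy sketch. The model and names here are illustrative assumptions, not from the text: a fact is common knowledge, to a first approximation, only when everyone has seen it and everyone knows that everyone else has seen it.

```python
# A toy sketch of the common-knowledge distinction. A broadcast ad is
# seen by everyone AND everyone knows everyone saw it; a targeted ad
# reaches individuals without that second layer. This checks only the
# first two levels of the infinite "I know that you know..." tower,
# which is enough to separate the two cases.

def is_common_knowledge(saw_ad, knows_others_saw):
    """True only if everyone saw the ad and everyone knows that
    everyone else saw it."""
    everyone_saw = all(saw_ad.values())
    everyone_knows = all(knows_others_saw.values())
    return everyone_saw and everyone_knows

people = ["alice", "bob", "carol"]

# A Times Square billboard: public, so both layers hold.
billboard = is_common_knowledge(
    saw_ad={p: True for p in people},
    knows_others_saw={p: True for p in people},
)

# A personalized search ad: everyone happened to see it, but nobody
# can tell whether their peers did.
search_ad = is_common_knowledge(
    saw_ad={p: True for p in people},
    knows_others_saw={p: False for p in people},
)
print(billboard, search_ad)  # True False
```

Note that in the second case every individual saw the ad, yet no cultural meaning gets imprinted: the missing ingredient is the shared awareness, not the exposure.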

So for an ad to work by cultural imprinting, it's not enough for it to be seen by a single person, or even by many people individually. It has to be broadcast publicly, in front of a large audience. I have to see the ad, but I also have to know (or suspect) that most of my friends have seen the ad too. Thus we will expect to find imprinting ads on billboards, bus stops, subways, stadiums, and any other public location, and also in popular magazines and TV shows — in other words, in broadcast media. But we would not expect to find cultural-imprinting ads on flyers, door tags, or direct mail. Similarly, internet search ads and banner ads are inimical to cultural imprinting because the internet is so fragmented. Everyone lives in his or her own little online bubble. When I see a Google search ad, I have no idea whether the rest of my peers have seen that ad or not.

In a way, cultural imprinting is a form of inception, but it's much shallower than the conventional (Pavlovian) account would have us believe. An ad doesn't need to incept itself all the way into anyone's deep emotional brain; it merely needs to suggest that it might have incepted itself into other people's brains — and then (barring any contrary evidence about what people actually believe) it will slowly work its way into consensus reality, to become part of the cultural landscape.

Unlike inception proper (which I don't think actually exists), cultural imprinting is fully compatible with the Homo economicus model of human decision-making. It leaves our goals fully intact (typically: wanting the respect of our peers), and by imprinting itself on the external cultural landscape, merely changes the optimal means of pursuing those goals. The result is the same — we buy more of the products being advertised — but the pathways of influence are different.

by Kevin Simler, Melting Asphalt |  Read more:
Images: Mad Men and Corona

Octopuses and the Puzzle of Aging

Around 2008, while snorkeling and scuba diving in my free time, I began watching the unusual animals known as cephalopods, the group that includes octopuses, cuttlefish and squid. The first ones I encountered were giant cuttlefish, large animals whose skin changes color so quickly and completely that swimming after them can be like following an aquatic, multi-armed television. Then I began watching octopuses. Despite being mollusks, like clams and oysters, these animals have very large brains and exhibit a curious, enigmatic intelligence.

I followed them through the sea, and also began reading about them, and one of the first things I learned came as a shock: They have extremely short lives — just one or two years.

I was already puzzled by the evolution of large brains in cephalopods, and this discovery made the questions more acute. What is the point of building a complex brain like that if your life is over in a year or two? Why invest in a process of learning about the world if there is no time to put that information to use? An octopus’s or cuttlefish’s life is rich in experience, but it is incredibly compressed.

The particular puzzle of octopus life span opens up a more general one. Why do animals age? And why do they age so differently? A scruffy-looking fish that inhabits the same patch of sea as my cephalopods has relatives who live to 200 years of age. This seems extraordinarily unfair: A dull-looking fish lives for centuries while the cuttlefish, in their chromatic splendor, and the octopuses, in their inquisitive intelligence, are dead before they are 2? There are monkeys the size of a mouse that can live for 15 years, and hummingbirds that can live for over 10. Nautiluses (who are also cephalopods) can live for 20 years. A recent Nature paper reported that despite continuing medical advances, humans appear to have reached a rough plateau at around 115 years, though a few people will edge beyond it. The life spans of animals seem to lack all rhyme or reason.

We tend to think about aging as a matter of bodies wearing out, as automobiles do. But the analogy is not a good one. An automobile’s original parts will indeed wear out, but an adult human is not operating with his or her original parts. Like all animals, we are made of cells that are continually taking in nutrients and dividing, replacing old parts with new ones. If you keep replacing the parts of an automobile with new ones, there is no reason it should ever stop running.

At least in principle, the puzzle of aging has been largely resolved, through some elegant pieces of evolutionary reasoning. Imagine some kind of animal with no tendency to decline in old age. It just keeps going, and keeps reproducing, until some accident or predator gets hold of it. In such a species, like any other, genetic mutations continually arise. Sometimes (very rarely) a mutation occurs that makes organisms better able to survive and reproduce; more often mutations are harmful and are filtered out by natural selection. But in some cases, a mutation arises that acts so late in an organism’s life that its effects are usually irrelevant, since the organism has already died for another reason, such as being eaten. Natural selection will have little effect on that mutation, so it will become either more common in the population, or less common, purely by chance.

Eventually, some mutations of this kind will become common, and everyone will be carrying them around. Then when some lucky individual does succeed in living a long time without being eaten, it will run into the (usually harmful) effects of these late-acting mutations. It will appear to have been “programmed to decline,” because the effects of those lurking mutations will appear on a schedule. The population has now evolved a natural life span.

That idea was sketched in the 1940s by a British immunologist, Sir Peter Medawar. A decade later, the American evolutionist George Williams added a second step. Mutations often have multiple effects, and these can differ in their timing. Consider a mutation that has good effects early in life and bad effects late. If the bad effects come late, after the organism has most likely perished because of external threats, then these bad effects will have less importance than the early benefits. This is a “buy now, pay later” principle, with payments coming due only after you have probably left the scene anyway. So mutations with that combination of effects — helpful early, harmful late — will be beneficial over all and will accumulate in the population. Then if an individual survives all the external threats and reaches old age, it will be hit with the bill.

The Medawar effect and the Williams effect work together. Once each process gets started, it reinforces itself and also magnifies the other. As some mutations are established that lead to age-related decline, they make it even less likely that individuals will live past the age at which those mutations act. This means there is even less selection against mutations that have bad effects only at that advanced age. As a result, that age becomes harder and harder to exceed.
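The arithmetic behind Medawar's argument can be sketched with illustrative numbers. The 30 percent yearly predation rate below is an assumption invented for the example, not a figure from the article:

```python
# A back-of-the-envelope sketch of the Medawar effect. With a constant
# yearly chance of being eaten, few individuals ever reach old age, so
# selection against a mutation that only acts late in life is weak.

def survival_to_age(age, yearly_predation=0.30):
    """Chance an individual escapes predators for `age` straight years."""
    return (1 - yearly_predation) ** age

def selection_pressure(age, cost=1.0, yearly_predation=0.30):
    """Expected fitness cost of a harmful mutation expressed at `age`:
    the cost only matters if the carrier survives long enough to pay it."""
    return cost * survival_to_age(age, yearly_predation)

for age in (1, 2, 5, 10):
    print(age, round(selection_pressure(age), 4))
# The pressure falls geometrically with age: a mutation acting at age
# 10 is nearly invisible to selection, so it can drift to fixation --
# and a "natural life span" emerges where such mutations pile up.
```

The Williams effect rides on the same numbers: a mutation that pays a benefit at age 1 but a cost at age 10 is weighed at full strength on the benefit side and at a few percent on the cost side, so selection actively favors it.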

In the light of all this, I think it is becoming clearer how octopuses and other cephalopods came to have their peculiarly poignant combination of features. Like their mollusk relatives, early cephalopods had protective outer shells, which they carried along as they prowled the oceans. Then, in some animals, the shells were abandoned. This had several interlocking effects. First, it gave rise to their unique, outlandish bodies — in the octopus, a body that can take on any shape at will. This created an opportunity for the evolution of finer behavioral control and large nervous systems. But the loss of the shell had another effect: It made the animals vulnerable to predators, especially fish.

That put a premium on the evolution of octopus wiles and camouflage. But there are only so many times those tricks will save the animal. Octopuses can’t expect to survive long. This makes them ideal candidates for the Medawar and Williams effects to compress their natural life spans. As a result, octopuses have ended up with their unusual combination: a large brain and a short life.

This view is supported by the recent discovery of an exception to the usual octopus pattern, an exception that illuminates the rule. The octopuses I’ve been talking about tend to live in shallow water. But in 2014 researchers at the Monterey Bay Aquarium Research Institute released some remarkable images of a deep-sea octopus they had watched with remote-controlled submarines. This one octopus brooded its eggs for over four years. Even allowing for the fact that everything tends to happen slowly at these depths, that’s a very long time. The total life span of this octopus might have been as long as 16 years.

The Medawar-Williams theory predicts that predation risks should be much less severe for this species than they are for shallower-water octopuses with shorter life spans. And the images taken by the Monterey researchers contain a strong clue that this is so: They show an octopus sitting out in the open with its eggs for years on end. It did not find itself a den. This suggests that this species has less to fear from predators than other octopuses do. As a result, evolution has tuned the life span of this species differently.

by Peter Godfrey-Smith, NY Times |  Read more:
Image: Marion Fayolle

Saturday, December 3, 2016

Mother Nature Is Brought to You By ...

[ed. Hardly a day goes by that I'm not aghast at some new capitulation to corporate capitalism. Check this out: Route 66 has been engineered to play America the Beautiful and the Nationwide jingle.]

This year, parks in several states including Idaho and Washington, and the National Park Service, will be blazing a new trail, figuratively at least, as they begin offering opportunities to advertisers within their borders.

King County in Washington, which manages 28,000 acres of parkland surrounding Seattle, offers a full branding menu: Naming rights or sponsorships may be had for park trails, benches and even trees. “Make our five million visitors your next customers,” the county urges potential advertisers.

King County already partnered with Chipotle to hide 30 giant replica burritos on parkland bearing the logo of the agency and the restaurant chain. People who found the burritos won prizes from Chipotle.

In May, the National Park Service proposed allowing corporate branding as a matter of “donor recognition.” As The Washington Post reported, under new rules set to go into effect at the end of the year, “an auditorium at Yosemite National Park named after Coke will now be permitted” and “visitors could tour Bryce Canyon in a bus wrapped in the Michelin Man.”

The logic behind these efforts is, in its own way, unimpeachable. Many millions of people — that is, “green consumers” — visit parks every day, representing an unrealized marketing opportunity of great value. Yes, parks are meant to be natural, not commercial, but times are tough, or so say the backers of the new schemes.

The spread of advertising to natural settings is just a taste of what’s coming. Over the next decade, prepare for a new wave of efforts to reach some of the last remaining bastions of peace, quiet and individual focus — like schools, libraries, churches and even our homes.

Some of this reflects technological change, but the real reason is the business model of what I call the “attention merchants.” Unlike ordinary businesses, which sell a product, attention merchants sell people to advertisers. They do so either by finding captive audiences (like at a park or school) or by giving stuff away to gather up consumer data for resale.

Once upon a time, this was a business model largely restricted to television and newspapers, where it remained within certain limits. Over the last decade, though, it has spread to nearly every new technology, and started penetrating spaces long thought inviolate.

In school districts in Minnesota and California, student lockers are sometimes covered by large, banner-style advertisements, so that the school hallways are what marketers call a fully immersive experience. Other schools have allowed advertising inside gymnasiums and on report cards and permission slips. The Associated Press reported this year that a high school near South Bend, Ind., “sold the naming rights to its football field to a bank for $400,000, its baseball field to an auto dealership, its softball field to a law firm, its tennis court to a philanthropic couple and its concession stands to a tire and auto-care company and a restaurant.”

Even megachurches, with their large and loyal congregations, have come to see the upside of “relevant” marketing, yielding the bizarre spectacle of product placements in sermons. In one of the first such efforts, pastors in 2005 were offered a chance to win $1,000 and a trip to London if they mentioned “The Chronicles of Narnia” during services. For the 2013 release of “Man of Steel,” pastors were supplied with notes for a sermon titled “Jesus: The Original Superhero.”

Nor are our workplaces and social spheres immune. The time and energy we spend socializing with friends and family has, almost incredibly, been harnessed for marketing, through the business models of Facebook, Instagram and other social media. At the office, the most successful of the productivity-killing distraction engines, BuzzFeed, brags of luring a “bored at work” network hundreds of millions strong.

Unfortunately, there is worse yet to come: The nation’s most talented engineers now apply themselves to making marketing platforms out of innovations — A.I. assistants like the Amazon Echo or self-driving cars. Here the intrusions will be subtle, even disguised, so as not to trip our defenses, but they will be even more powerful, going after our very decision-making processes. Consider how much we already depend on Siri or Google Maps: What happens when our most trusted tools have mixed motives?

by Tim Wu, NY Times |  Read more:
Image: Tim Enthoven

Killing You Softly With Her Dreams

Arianna Huffington wants to put you to sleep.

In her new book, The Sleep Revolution: Transforming Your Life, One Night at a Time, Huffington dramatically announces that we are in the middle of an unacknowledged sleep crisis. There is a problem in our society, Huffington tells us: we have forgotten how to sleep. Fortunately, sleepless readers need not fear: Huffington’s handy little book is here to show you how to combat sleeplessness.

Sleep Revolution is written in classic Huffington style: part Deepak Chopra, part Oprah, and strung together with quotes from everyone from the Persian poet Rumi to the art critic Jonathan Crary to even (bafflingly, for a self-described progressive) the anti-immigrant, Brexit-enabling, racist former Mayor of London, Boris Johnson.

The writing, it should go without saying, is bad. A chapter begins: “From the beginning of time, people have struggled with sleep.” In fact, from the beginning of time, sophomore English teachers have been taking red pen to any essay that starts with “from the beginning of time.” Her phrasing is often corny, and she leans heavily on exclamation points.

Sleep Revolution is less a book than a business plan, a typical product of the can-do inspiration industry made popular by the likes of Andrew Weil and Suze Orman, the snake oil salespeople of the 21st century. Like them, Huffington first tells you that you have a problem, one you were unaware you had. She then generously reveals the many products that can help alleviate your symptoms, suggesting plenty of expensive solutions. Huffington has learnt her trade from the best hucksters. She absorbs the techniques of assorted rich people’s gurus, like cult leaders Bhagwan Rajneesh and John-Roger, combining new age verbiage with sly admonitions to give up one’s material wealth (into their outstretched hands, of course).

Huffington undoubtedly possesses a kind of brilliance. It lies not in the quality of her thought or writing, but in her ability to understand and exploit the zeitgeist. The ideas in Sleep Revolution, such as they are, are mostly bits and pieces about sleep deprivation and the problems thereof cribbed and culled from a range of sources (likely the product of several intensive hours of Googling). To be sure, they are banal. And yet Huffington’s book is perfect for our moment in time: it arrives just as capitalism is making many of us more sleepless than ever.

Huffington is never so impolite as to mention that capitalism, which has done well by her and made her a multimillionaire, may be to blame for keeping people working long, sleepless hours. She prefers proposing solutions to diagnosing causes. She tells you to leave your smartphone outside your bedroom, to have warm baths, to disengage. Don’t tackle work emails after a certain time.

Her solutions have the convenient consequence of making you a better worker for your employers, without actually raising your material standard of living. After all, she writes, “it would actually be better for business if employees called in tired, got a little more sleep, and then came in a bit late, rather than call in sick a few days later or, worse, show up sick, dragging themselves through the day while infecting others.” Her advice to her fellow bosses is purely expedient: if the worker drones rest, more labor can be wrung out of them.

This approach to sleep is common in the discourse of “self-care,” in which people are constantly admonished to heal themselves with candles, self-affirmation, and long baths but not told that they can actually revolt against the systems that create their exhaustion in the first place. According to a massive amount of sleep literature, the worst thing we do is not sleep enough, yet that same literature never bothers to wonder what might be keeping us up at night.

Yet many people know full well why they can’t sleep. Many of us juggle multiple jobs to cobble together our livings, and the problem of sleeplessness cuts across class barriers. While those with little or no money battle exhaustion as they travel from job to job, even wealthier people are frequently like hamsters in their wheels, constantly working against the clock to hold on to and add to their fortunes. No matter who you are, under competitive capitalism the rule is the same: You sleep, you lose. Marx once pointed out that capital is vampire-like and feeds on dead labor. But that’s somewhat unfair to vampires. After all, unlike vampires, capital never sleeps.

Capitalism has never slept much, and has always relied on the lack of sleep of millions of workers to be as efficient as possible. In fact, until the invention of the eight-hour day and the weekend (both startlingly new ideas, for which workers had to fight hard) “work” as such simply carried on day by draining day. Even the idea of a legally mandated lunch break is astonishingly recent. (...)

The great irony of Huffington’s new enterprises, which promise both sleep and thriving, is that the Huffington Post itself feeds off the sleeplessness of its writers, people who are compelled to stay up all night in order to read and repost pieces about how sleeplessness is ruining their lives. The Huffington Post is notorious for paying not a single cent for most of its contributions, paying writers solely in illusory “publicity.” By building a hugely popular website on unpaid labor, HuffPo played a major role in establishing the pitiful compensation structure currently faced by online writers. If writers can’t sleep, it’s because they make HuffPo rates, i.e. nothing.

The Sleep Revolution is therefore a work of extraordinary gall. There is no consideration of the structural problems with sleeplessness, no critique of the systems which drive people from their beds toward jobs where they nod off to sleep in exhaustion. Arianna Huffington did not invent the web, but she is among those who created the news that never sleeps, in turn created by aggregators working around the clock, so that you might wake up at midnight or 3 or 4 in the morning, entertained by yet another set of links about Kate Middleton in a red dress or a hammock for your head so you can sleep on the train on the way to work.

by Yasmin Nair, Current Affairs | Read more:
Image: Chris Matthews

Friday, December 2, 2016

Politics 101

Learning From Trump in Retrospect

Donald Trump’s Pollster Says the Election Came Down to Five Counties

Why is "the Decimation of Public Schools" a Bad Thing? [ed. See this response: Contra Robinson on Schooling. Excellent discussion all the way around. Why can't our policy discourse be more like this? I've learned more about the pros and cons of school vouchers and their alternatives in these two articles than I ever imagined possible, or dimly cared about. (And it's fascinating. Really. Be sure to read all the way through Scott Alexander's response, it has a lot of meaningful tangents and just keeps getting better and better).]

Going Diamond

I was seven when my parents joined Amway. Our house filled up with Amway products: boxes of Nutrilite™ vitamins, toaster pastries, Glister™ toothpaste, Artistry™ makeup. We washed our hair with Satinique shampoo; we washed our floors with L.O.C.™ cleaner; we washed our dishes with Amway-brand dish soap; we strained our drinking water through Amway’s filter. Our friends were Amway. Our vocabulary was Amway. We were ‘Directs’ going ‘Diamond.’ We ‘showed The Plan’ to anyone who listened.

We drove to Miami for ‘functions’ at the Fontainebleau Hotel. Thousands of people attended, all packed into the big ballroom with lights turned up and people dancing in the aisles, getting ‘fired up’ to Calloway’s ‘I Wanna Be Rich,’ which blasted over the speakers. We clapped our hands and sang along. (...)

We drove our teal ’88 Oldsmobile Delta to the Bayou Club Estates for our requisite ‘dreambuilding’ and toured the brand-new houses: big mansions with tall, echoing ceilings and screened-in pools, shiny state-of-the-art kitchens, garages big enough for three Mercedes, a golf course in the back, vanity mirrors and crystal fixtures in every bathroom. We drove to the yacht dealer and toured the Princesses and the Prestiges, lying on cabin beds and ascending the wooden stairs to stand on pulpits, gazing toward imagined horizons.

Amway is a multilevel marketing corporation. Some call it a pyramid scheme. In 2015, its parent company, Alticor, claimed transglobal sales of $9.5 billion. It is the biggest direct-selling company in the world. Distributors make money by signing up other distributors and – somewhere in the background – ‘selling’ Amway products. It’s not exactly clear how Amway products should reach the public. That isn’t part of Amway’s marketing plan; The Plan mostly teaches distributors how to sign up other distributors, to whom they then distribute Amway products, who then distribute Amway products to other distributors they sign up, and onward. Amway has been the target, along with its affiliate companies, of multimillion-dollar lawsuits and other legal actions on almost every continent.

Four years after joining Amway, my parents came to their senses. There was L.O.C.™ cleaner in our closet for years while we pretended Amway never happened.

But every time I drive past the Bayou Club, I can’t help wondering what it would have been like to go Diamond. Once considered the highest Pin Level – above Silver, Gold, Platinum, Ruby, Pearl, Sapphire and Emerald – Diamond status was what I had craved. It was what I’d believed was success. After all, less than 1 percent of Amway distributors go Diamond.
---

Silverthorn Road, Seminole, FL 33777
4 bed, 4 bath, 5,144 sq. ft.
$725,000


We’ve gone Diamond. ‘We’re buying a house in the Bayou Club. We’re starting a family,’ we tell the Realtor.

The first we see is in the Estates section. Croton in the front yard, Alexander palms and twisting cypress – all yards are maintained by the Bayou Club’s landscapers, she says. Each yard must coordinate with every other yard, to meet color-palette standards that coordinate with every house. You pay $137 a month for this privilege, another $205 for security and maintenance of common areas.

This house has two stories, an office and a loft, bamboo floors, a three-car garage, a pool.

‘You can see we’re getting the screens fixed,’ the Realtor says, pointing to the men working beyond the glass. She has piercing blue eyes. Processed blonde hair. She has French-tipped nails, diamond rings on all fingers, and a gold-and-diamond necklace. She wears a white semi-sheer shirt, black-and-white-printed leisure pants, black eyeliner and heavy mascara. ‘We’re just putting some finishing touches on the place.’

I approach the French doors. The pool is bordered by stocky palms and, beyond them, the twelfth fairway. There is nothing like a yard.

‘Can children play on the golf course?’ I ask.

‘No, it’s private,’ she says. ‘And unless you don’t love your children, you don’t let them play on the golf course because they’ll be golf ball magnets.’

My husband chuckles.

‘And the golf course is private,’ she says again. ‘You have to join the club. If a golfer sees a child out there running around, they will call the golf ranger to chase them because they interfere with the game of play.

‘Do you play?’ she asks my husband.

‘No, but family does.’

‘We pay for golf privileges and we don’t like people on the golf course. We like our fairways nice and even.’

I wonder where the children play. The front yard is tiny. There’s barely any grass.

‘So the kids play out front,’ she continues. ‘And you know what? They do. When a child goes outside, he brings other kids out. We’re very strict about our speed limit here.’

‘I noticed the speed bumps,’ I say.

‘There are no speed bumps,’ she says, and I feel embarrassed. ‘If you came through Bardmoor, next door, there are bumps, but there are no bumps in Bayou Club. A lot of people have low-profile cars. We control our speed through our rover, who shoots radar. The fines are strict.’

‘Is there a neighborhood watch?’ asks my husband.

‘We have two security guards: one that roves the community 24/7 and one that stays at the gate,’ she says. ‘It’s not a hundred percent safe because if somebody wanted to come through Bardmoor, hop that fence in the middle of the night, and intrude on your house, nothing’s going to stop them. That gate out front is not going to stop them.’

‘It’s hardly even a gate,’ I say.

‘Your car won’t get through it,’ she says. ‘They might steal your jewelry, but they’re not stealing any big items.’

‘It has the illusion of security,’ says my husband.
---

If it’s not your family who brings you in, it’s probably a friend. For my dad, it was a manager at one of the car dealerships for which he handled advertising. The man’s business comprised almost half of my dad’s income. Over time, they’d developed a friendship. You’d think my dad would be immune to Amway, given his familiarity with advertising’s insidious ways. But how does the saying go? A good salesman can sell you your own grandmother.

My parents and I were solidly middle class when we collided with Amway. We owned our home. We lived in a safe neighborhood where I could play outside without supervision and walk home alone after the sun went down. We always kept an excess of food in the house. I got new shoes whenever I outgrew my old pair. I received new toys when my old ones broke and new books when I finished reading the ones I had. I went to gymnastics practice four times a week, singing lessons once a week, camp over the summer, and back-to-school shopping in the fall. We didn’t need Amway.

But that didn’t matter. In Amway, there’s no such thing as contentment.

If you’re happy with what you have, you haven’t dreamed, says Amway. Your life could be faster, shinier, brighter, more spacious – don’t settle for less. Join Amway.

You could drive a Jaguar instead of your crappy Oldsmobile. You could build a custom home – don’t settle for that two-bit shotgun you have. If you’re proud of what you’ve accomplished so far in your life, don’t be. Think bigger. Do better. If you don’t believe you can – trust Amway. Amway believes in you.

Nothing was wrong with our life before Amway – we didn’t join it to fill a void. We were happy, until we were told we could be happier.

by Sarah Gerard, Granta |  Read more:
Image: Sarah Gerard


'Jackie'

On Nov. 25, 1963, three days after becoming the world’s most famous widow, Jacqueline Kennedy slipped on a mourning veil. A diaphanous shroud reaching to her waist, it moved lightly as she walked behind her husband’s coffin in the cortege that traveled from the White House to St. Matthew’s Cathedral. The veil was transparent enough to reveal her pale face, though not entirely, ensuring that she was at once visible and obscured. “I don’t like to hear people say that I am poised and maintaining a good appearance,” she later said. “I am not a movie actress.”

Intensely affecting and insistently protean, the film “Jackie” is a reminder that for a time she was bigger than any star, bigger than Marilyn or Liz. She was the Widow — an embodiment of grief, symbol of strength, tower of dignity and, crucially, architect of brilliant political theater. Hers was also a spectacularly reproducible image. It’s no wonder that shortly after President John F. Kennedy died, Andy Warhol started on more than 300 portraits of the Widow, juxtaposing photographs of her taken before and after the assassination. She smiles in a few, in others she looks frozen (or is it stoic?); the ones that pop are tight close-ups. They look like frames for an unfinished motion picture.

“Jackie” doesn’t try to complete that impossible, apparently unfinishable movie, the never-ending epic known as “The Assassination of President John F. Kennedy and What It Means to History.” Instead, set largely after his death, it explores the intersection of the private and the public while ruminating on the transformation of the past into myth. It also pulls off a nice representational coup because it proves that the problem known as the Movie Wife — you know her, the little lady hovering at the edge of both the frame and story — can be solved with thought and good filmmaking. And as in Warhol’s Jackie portraits, John F. Kennedy is something of a bit player here.

Jack swans in now and again, flashing his big teeth (he’s played by an uncanny look-alike, Caspar Phillipson), but as the film’s title announces, it’s all about her. Jackie (Natalie Portman, perfect) first appears at the Kennedy compound in Hyannis Port, Mass. It’s soon after Jack’s death and she’s taken refuge in another white house, this one along Nantucket Sound. If its large windows suggest transparency, her tight face and coiled body relay that she has other plans for the unnamed journalist (Billy Crudup), who’s come to write about how she feels and what it means. In some roles, Ms. Portman stiffens up and never seems to get out of her head; in “Jackie” this works as a character trait.

The journalist is a chilly, unsympathetic fictional gloss on the writer Theodore H. White. On Nov. 29, 1963, one week after cradling her dying husband’s head in her lap, Mrs. Kennedy gave an interview to White that he said lasted about four hours. Originally titled “For President Kennedy: An Epilogue,” White’s article ran in Life magazine and was an exemplar of impressively marketable mythmaking — it inaugurated the Camelot fairy tale. White knew Kennedy, having written “The Making of the President, 1960,” an account of his presidential campaign. But the Widow was another matter entirely, and in his interview notes White scrawled the words “What does a woman think?” (...)

The White interview thrusts the story into the past, teleporting Jackie, for instance, onto Air Force One, where — with her back to the camera — she primps in a mirror while practicing an apparent speech in Spanish for the imminent Dallas trip. Dressed in her pink Chanel suit, she puts on her pillbox hat, as if ready for her entrance. The suit’s bright color gives the film a visual jolt, much like the deep-red roses that someone places in Jackie’s arms after she and Jack deplane. Some of the most famous photos from that day, like those of Lyndon B. Johnson being sworn in on Air Force One, are in black and white, so it’s easy to miss that the smudges that later appeared on the pink suit were splatters of blood.

by Manohla Dargis, NY Times |  Read more:
Image: Natalie Portman in "Jackie"

While We Weren’t Looking, Snapchat Revolutionized Social Networks

Snap Inc., the parent company of the popular photo-messaging and storytelling app Snapchat, is having a productive autumn.

A couple of weeks ago, Snap filed confidential documents for a coming stock offering that could value the firm at $30 billion, which would make it one of the largest initial public offerings in recent years. Around the same time, it began selling Spectacles, sunglasses that can record video clips, which have become one of the most sought-after gadgets of the season.

And yet, even when it’s grabbing headlines, it often seems as if Snap gets little respect.

Though Snapchat has overtaken Twitter in terms of daily users to become one of the most popular social networks in the world, it has not attracted the media attention that the 140-character platform earns, perhaps because journalists and presidential candidates don’t use it very much. Snapchat’s news division has become a popular and innovative source of information for young people, but it is rarely mentioned in the hand-wringing over how social media affected the presidential election.

And because Snapchat is used primarily by teenagers and 20-somethings, and it seems deliberately designed to frustrate anyone over 25, it is often dismissed as a frivolity by older people (especially readers of a certain newspaper based in New York who have my email address).

This is all wrong. If you secretly harbor the idea that Snapchat is frivolous or somehow a fad, it’s time to re-examine your certainties. In fact, in various large and small ways, Snap has quietly become one of the world’s most innovative and influential consumer technology companies.

Snap, which is based far outside the Silicon Valley bubble, in the Venice neighborhood of Los Angeles, is pushing radically new ideas about how humans should interact with computers. It is pioneering a model of social networking that feels more intimate and authentic than the Facebook-led ideas that now dominate the online world. Snap’s software and hardware designs, as well as its marketing strategies, are more daring than much of what we’ve seen from tech giants, including Apple.

Snap’s business model, which depends on TV-style advertising that (so far) offers marketers fewer of the data-targeted options pioneered by web giants like Google, feels refreshingly novel. And perhaps most important, its model for entertainment and journalism values human editing and curation over stories selected by personalization algorithms — and thus represents a departure from the filtered, viral feeds that dominate much of the rest of the online news environment.

Snap is still relatively small; its 150 million daily user base pales in comparison to Facebook’s 1.2 billion, and its success is far from assured. In its novelty, it can sometimes veer toward the bizarre and inscrutable. And it’s not obvious that all of its advances are positive. (For instance, I’m not sure that it’s always better for our relationships to lose a record of our chats with friends.)

Yet it’s no wonder that Facebook and its subsidiaries appear obsessed with imitating Snap. As a font of ideas that many in the tech industry hadn’t considered before, Snap isn’t just popular, but also increasingly important.

“Regardless of what happens, they’ve reshaped the social media landscape,” said Joseph B. Bayer, a communications professor at Ohio State University who has studied Snapchat’s impact on how people communicate. “They’re making risky moves, trying to rethink what people want online as opposed to taking what’s already been done and adding a new flash.”

Techies value disruption, and it’s difficult to think of another online company that has shuffled the status quo as consistently as Snap has over the past few years.

Before Snapchat, the industry took for granted that everything users posted to the internet should remain there by default. Saving people’s data — and then constantly re-examining it to create new products and advertising — is the engine that supports behemoths like Google and Facebook.

At its founding in 2011, Snap pushed a new way: By default, the pictures posted through Snapchat are viewable for only a short time. At the time, it was a head-scratching idea, one that many assumed was good only for sexting. To the tech industry’s surprise, disappearing messages captivated users who had been afraid that their momentary digital actions might follow them around forever.

Snapchat’s “ephemeral” internet — which has since been imitated by lots of other companies, including, most recently, Instagram — did not just usher in a new idea for online privacy. It also altered what had once been considered a sacred law of online interaction: virality.

Every medium that has ever been popular online — from email to the web to social networks like Facebook — has been pervaded by things that are passed along from one user to another. This is not the case on Snapchat. Though Snapchat has introduced some limited means of forwarding people’s snaps, the short life of every snap means there is no obvious way for any single piece of content to become a viral hit within the app. (...)

There is, instead, a practiced authenticity. The biggest stars — even Kylie Jenner — get ahead by giving you deep access to their real lives. As a result, much of what you see on Snapchat feels less like a performance than on other networks. People aren’t fishing for likes and follows and reshares. For better or worse, they’re trying to be real.

by Farhad Manjoo, NY Times |  Read more:
Image: Rebecca Smeyne