Tuesday, September 29, 2015

Side Boob and Insensibility

The family of Phaeton had long been settled in London’s Canary Wharf. The topiary of their hedge funds was in splendid order, and, for many generations, they had lived in so respectable a manner, as to engage the general good opinion of their surrounding acquaintance. Though an indulgent and obliging father, Sir Thomas Phaeton was well aware that his daughters, Elinor and Marianne, were two of the silliest girls in England. While their contemporaries were scuttling up trees to protest attacks on the environment, or making their insouciant way into comfy corners of corrupt corporations, Sir Thomas’s offspring were posing for selfies, trying to get an audition for Big Brother or The Apprentice, tweeting, twerking, tweezing, tattooing, drinking vodka-laced frappuccinos, and watching Danish TV series about murdered women. They could neither boil an egg nor butter up a boss. Their education was minimal, their aspirations absurd, their spats legendary.

It had occurred to Sir Thomas on occasion that their expensive schooling, at Bedales and Benenden, had been insufficient to instruct them in the intricacies of adult life. To redress these deficits, he sometimes made an aggravating effort to persuade his daughters to read a book. He himself favored the novels of the long eighteenth century. But they refused even self-help books: they needed no “help.” They preferred more immediate sources of merriment and deviltry, and spent their days (and nights) with models and rock stars. The Phaeton girls could have been models themselves, had they displayed more passivity, more poise, and more pouts; and they would have excelled at the guitar, had they ever learnt.

Following a tempestuous decade of marriage, it was noted that Lady Phaeton now lived elsewhere. But it was a surprise to many when she was discovered subsisting amongst the glitterati to be found, in decreasing numbers and increasing decrepitude, in Biarritz (which, to her daughters, seemed horrifically uncool). She left in her place a widowed sister, though Aunt Norris had little more interest than their mother in tending to either Sir Thomas or the girls, who, in their turn, ignored their aunt whenever possible. This left Sir Thomas in the position of sole protector of the two flibbertigibbets, who nonetheless could charm him, when they applied themselves to the task. Why would anyone wish to harm these beatific beings, Sir Thomas wondered jovially, as they spooned lobster pâté onto more and more crackers for him, in the hopes of a handout.

The girls enlarged their set of acquaintances to include stand-up comedians with god complexes, VIPs at the loucher end of the spectrum, humble sycophants, newspaper magnates, Conservative politicians, and aristocratic wannabes. But the sisters had their enemies too. Paparazzi stirred into action whenever they left their three-story penthouse (adjoining the equally well-proportioned London residence which their father shared, resignedly, with Aunt Norris). The object of the paparazzi’s assiduity was to get a photo of those two zany Phaeton chicks looking zany.

Sir Thomas was forced to await, on tenterhooks, the inevitable slaughter, by media, of his darlings; but when it finally came, it was an embarrassment, not just to him, or to Marianne and Elinor, but to the country at large.

The Phaeton girls had successfully evaded censure for two, three, perhaps four years of high living. Despite trashing every nightclub in the British Isles and beyond, slurring their speech on talk shows, and shoplifting heritage carrots from Harrods’ Food Hall, the worst of the crimes of which Marianne and Elinor had yet been accused were cellulite sins, muffin-top miseries, Chihuahua cruelties, and occasionally going about color-uncoordinated. The ups and downs of their love lives had been finely milled for scandal, but none could be found: their boyfriends were all rotters, to a man—but so were everyone else’s. (In a society in which just about everything is ill judged, it can be hard to find the right way to go wrong.) Ominously, though, as Sir Thomas would later recall to his chagrin, there had once been a curious accusation of “cleavage overload” hurled at his daughters, which might have served as a warning of the imminent debacle. Both girls had laughed it off, however, ridiculing the notion that anyone could ever get tired of breasts.

But finally, there transpired the biggest sartorial transgression currently known to humankind. England, a nation already famed for sexual confusion, was suddenly saturated with disturbing photographic evidence of sleaze. The center of the controversy was Marianne, as Sir Thomas might have guessed it would be—Marianne, who had always had the least fashion sense of the two (though neither daughter could ever have been said to dress sensibly). Her crime? The exposure of a “side boob.”

by Alexander McCall Smith, The Baffler | Read more:
Image: Imgur

It’s Sleazy, It’s Totally Illegal, and Yet It Could Become the Future of Retirement

Over 100 years ago in America — before Social Security, before IRAs, corporate pensions and 401(k)s — there was a ludicrously popular (and somewhat sleazy) retirement scheme called the tontine.

At their peak, around the turn of the century, tontines represented nearly two-thirds of the American insurance market, holding about 7.5 percent of national wealth. It’s estimated that by 1905, there were 9 million tontine policies active in a nation of only 18 million households. Tontines became so popular that historians credit them for single-handedly underwriting the ascendance of the American insurance industry.

The downfall of the tontine was equally dramatic. Not long after 1900, a spectacular set of scandals wiped the tontine from the nation’s consciousness. To this day, tontines remain outlawed, and their name is synonymous with greed and corruption. Their memory lives on mostly in fiction, where they invariably propel some murderous plot. (There’s even a "Simpsons" episode in this genre.)

Tontines, you see, operate on a morbid principle: You buy into a tontine alongside many other investors. The entire group is paid at regular intervals. The key twist: As your fellow investors die, their share of the payout gets redistributed to the remaining survivors.

In a tontine, the longer you live, the larger your profits — but you are profiting precisely off other people’s deaths. Even in their heyday, tontines were regarded as somewhat repugnant for this reason.
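The payout rule described above is simple enough to sketch as code. Here is a toy model in Python with invented figures (real tontines layered insurance features on top of this bare mechanism):

```python
# Toy tontine: a fixed annual pool is split equally among surviving investors,
# so each death increases every survivor's payout. All numbers are invented.
def tontine_shares(annual_pool, survivors_by_year):
    """Per-survivor payout for each year, given the count of investors still alive."""
    return [annual_pool / n for n in survivors_by_year if n > 0]

# Ten investors buy in; the pool pays out $1,000 a year as the group dwindles.
print(tontine_shares(1000, [10, 8, 5, 2, 1]))
# [100.0, 125.0, 200.0, 500.0, 1000.0]
```

The last survivor collects the entire pool, which is the "morbid principle" in miniature: each bump in your payout is funded by someone else's death.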

Now, a growing chorus of economists and lawyers is wondering if the world wasn’t too hasty in turning its back on tontines. These financial arrangements, they say, have aspects that make a lot of sense despite their history of disrepute.

Some academics even argue that with a few new upgrades, a modern tontine would be particularly suited to soothing the frustrations of 21st-century retirement. It could help people properly finance their final years of life, a time that is often wracked with terribly irrational choices. Tontines could even be a cheaper, less risky way for companies to resurrect the pension.

“This might be the iPhone of retirement products,” says Moshe Milevsky, an associate professor of finance at York University in Toronto who has become one of the tontine’s most outspoken boosters.

by Jeff Guo, WP |  Read more:
Image: bigstockphoto

Sunday, September 27, 2015

Stop Googling. Let’s Talk.

College students tell me they know how to look someone in the eye and type on their phones at the same time, their split attention undetected. They say it’s a skill they mastered in middle school when they wanted to text in class without getting caught. Now they use it when they want to be both with their friends and, as some put it, “elsewhere.”

These days, we feel less of a need to hide the fact that we are dividing our attention. In a 2015 study by the Pew Research Center, 89 percent of cellphone owners said they had used their phones during the last social gathering they attended. But they weren’t happy about it; 82 percent of adults felt that the way they used their phones in social settings hurt the conversation.

I’ve been studying the psychology of online connectivity for more than 30 years. For the past five, I’ve had a special focus: What has happened to face-to-face conversation in a world where so many people say they would rather text than talk? I’ve looked at families, friendships and romance. I’ve studied schools, universities and workplaces. When college students explain to me how dividing their attention plays out in the dining hall, some refer to a “rule of three.” In a conversation among five or six people at dinner, you have to check that three people are paying attention — heads up — before you give yourself permission to look down at your phone. So conversation proceeds, but with different people having their heads up at different times. The effect is what you would expect: Conversation is kept relatively light, on topics where people feel they can drop in and out.

Young people spoke to me enthusiastically about the good things that flow from a life lived by the rule of three, which you can follow not only during meals but all the time. First of all, there is the magic of the always available elsewhere. You can put your attention wherever you want it to be. You can always be heard. You never have to be bored. When you sense that a lull in the conversation is coming, you can shift your attention from the people in the room to the world you can find on your phone. But the students also described a sense of loss.

One 15-year-old I interviewed at a summer camp talked about her reaction when she went out to dinner with her father and he took out his phone to add “facts” to their conversation. “Daddy,” she said, “stop Googling. I want to talk to you.” A 15-year-old boy told me that someday he wanted to raise a family, not the way his parents are raising him (with phones out during meals and in the park and during his school sports events) but the way his parents think they are raising him — with no phones at meals and plentiful family conversation. One college junior tried to capture what is wrong about life in his generation. “Our texts are fine,” he said. “It’s what texting does to our conversations when we are together that’s the problem.”

It’s a powerful insight. Studies of conversation both in the laboratory and in natural settings show that when two people are talking, the mere presence of a phone on a table between them or in the periphery of their vision changes both what they talk about and the degree of connection they feel. People keep the conversation on topics where they won’t mind being interrupted. They don’t feel as invested in each other. Even a silent phone disconnects us.

In 2010, a team at the University of Michigan led by the psychologist Sara Konrath put together the findings of 72 studies that were conducted over a 30-year period. They found a 40 percent decline in empathy among college students, with most of the decline taking place after 2000.

Across generations, technology is implicated in this assault on empathy. We’ve gotten used to being connected all the time, but we have found ways around conversation — at least from conversation that is open-ended and spontaneous, in which we play with ideas and allow ourselves to be fully present and vulnerable. But it is in this type of conversation — where we learn to make eye contact, to become aware of another person’s posture and tone, to comfort one another and respectfully challenge one another — that empathy and intimacy flourish. In these conversations, we learn who we are.

Of course, we can find empathic conversations today, but the trend line is clear. It’s not only that we turn away from talking face to face to chat online. It’s that we don’t allow these conversations to happen in the first place because we keep our phones in the landscape.

In our hearts, we know this, and now research is catching up with our intuitions. We face a significant choice. It is not about giving up our phones but about using them with greater intention. Conversation is there for us to reclaim. For the failing connections of our digital world, it is the talking cure.

by Sherry Turkle, NY Times |  Read more:
Image: Yann Kebbi

Samsung-Oculus Consumer Virtual Reality Headset to Cost $99

[ed. My prediction for Christmas gift of the year (along with drones, drones and more drones). Also, a plea: could we please stop with the Christmas marketing and decorations in September?] 

The Invisible Labor of Fashion Blogging

Earlier this month, the biannual circus that is New York Fashion Week saw non-stop coverage on social media via Instagram, Twitter, and Snapchat, with the scene repeating itself in London, Milan, and Paris through early October. Though coverage of designers, models, A-listers, and celebrities was in no short supply in mainstream and industry publications, there was another formidable yet familiar force on the scene: fashion bloggers.

It’s been nearly a decade since these independent voices “took over the tents,” as Women’s Wear Daily proclaimed of the fashion blogosphere’s first wave in the mid-aughts. While industry veterans initially saw the inclusion of bloggers as an invasion, the latter’s presence at runway shows and designer fetes no longer draws the ire it once did. Instead, both designers and fashion editors recognize the power of bloggers. The resurgence of Birkenstocks, the frenzy over fringe, and the ubiquity of off-the-shoulder styles are among the recent trends that have been brought into the mainstream by a collective of online tastemakers, whose scrappy origins grow more and more distant every year. With annual incomes of top-ranking bloggers climbing into the seven-figure range, it’s not surprising that they’re frequently hailed as savvy entrepreneurs.

In the popular imagination, blogging has become a viable career path with legions of aspirants. As many other creative workers struggle to find stable and fulfilling careers, bloggers and others with digital clout seem to have shaped their careers with ease. The impeccably curated online presences of these young women—fashion blogging is heavily skewed female—seem to offer hope and a sense of control in an economy marked by persistent instability and precarious employment conditions.

But this idealized profession is less glamorous than it first appears. In a new study to be published this fall in the journal Social Media + Society, we examine the gap between the rhetoric and reality of fashion blogging. (Our analysis of 760 Instagram images by 38 top-ranked female professionals is part of a larger, multi-year project on the subject.) Pro-bloggers, we learned, must continually reconcile a series of competing demands: They have to appear authentic but also remain on brand, stay creative while tracking metrics, and satisfy both their readers and the retail brands that bankroll them. Many work up to 100 hours a week, and the flood of new bloggers means companies increasingly expect to not have to pay for partnerships. Meanwhile, the nature of the job requires obscuring the hard work and discipline that goes into crafting the perfect persona online.

by Brooke Erin Duffy and Emily Hund, The Atlantic |  Read more:
Image: John Taggart / Reuters

Saturday, September 26, 2015

A Facelift for Shakespeare

The Oregon Shakespeare Festival will announce next week that it has commissioned translations of all 39 of the Bard’s plays into modern English, with the idea of having them ready to perform in three years. Yes, translations—because Shakespeare’s English is so far removed from the English of 2015 that it often interferes with our own comprehension.

Most educated people are uncomfortable admitting that Shakespeare’s language often feels more medicinal than enlightening. We have been told since childhood that Shakespeare’s words are “elevated” and that our job is to reach up to them, or that his language is “poetic,” or that it takes British actors to get his meaning across.

But none of these rationalizations holds up. Much of Shakespeare goes over our heads because, even though we recognize the words, their meaning often has changed significantly over the past four centuries.

In “Hamlet,” when Polonius famously advises Laertes to “neither a borrower nor a lender be,” much of what he says before that point reaches our modern ears in a fragmentary state at best. In the lines, “These few precepts in thy memory / Look thou character,” look means “make sure that,” and character is a verb, meaning “to write.” Polonius is telling Laertes, in short, “Note these things well.”

He goes on to say: “Take each man’s censure, but reserve thy judgment,” which seems to mean that you should let other people criticize you but refrain from judging them—strange advice. But by “take censure” Shakespeare meant “evaluate,” so that Polonius is really saying “assess” other men but don’t jump to conclusions about them.

We can piece these meanings together, of course, by reading the play and consulting stacks of footnotes. But Shakespeare didn’t intend for us to do that. He wrote plays for performance. We’re supposed to be able to hear and understand what’s spoken on the stage, in real time.

That’s hard when we run up against a passage like this one from “King Lear,” when Edmund is dismissing those who look down on him for his low origins:

Why “bastard”? Wherefore “base”?
When my dimensions are as well compact,
My mind as generous, and my shape as true
As honest madam’s issue?


Isn’t it odd for someone to present being “well compact” as a selling point? But for Shakespeare, compact meant “constructed.” And why would Edmund defend himself against the charge of illegitimacy by noting his generosity? Because in Shakespeare’s day, generous could mean “noble.” Nor did madam then have the shady connotation that it does today.

Understanding generous to mean “noble” is not a matter of appreciating elevated language: We cannot reach up to a meaning that is no longer available to us. Nor is there anything poetic in knowing that character was once a verb meaning “to write”: In 2015, that usage is simply opaque, and being British doesn’t help matters.

The idea of translating Shakespeare into modern English has elicited predictable resistance in the past. To prove that the centuries were not so formidable a divide, the actor and author Ben Crystal has documented that only about 10% of the words that Shakespeare uses are incomprehensible in modern English. But that argument is easy to turn on its head. When every 10th word makes no sense—it’s no accident that the word decimate started as meaning “to reduce by a 10th” and later came to mean “to destroy”—a playgoer’s experience is vastly diluted.
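The arithmetic behind that dilution is worth making explicit. If roughly one word in ten is opaque, the chance that a spoken line of n words contains at least one such word is 1 - 0.9^n, which climbs quickly with line length. A quick sketch (the 10% rate comes from the passage above; the line lengths and the independence assumption are illustrative):

```python
# Probability that a line of n words contains at least one opaque word,
# assuming each word is independently opaque with probability `opaque_rate`
# (an idealized model; in real texts, opaque words cluster).
def p_at_least_one_opaque(n_words, opaque_rate=0.10):
    return 1 - (1 - opaque_rate) ** n_words

for n in (5, 10, 15):
    print(n, round(p_at_least_one_opaque(n), 2))
# prints:
# 5 0.41
# 10 0.65
# 15 0.79
```

Even a modest fifteen-word speech has roughly a four-in-five chance of containing a word whose meaning has drifted, which is the sense in which the playgoer's experience is "diluted."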

It is true that translated Shakespeare is no longer Shakespeare in the strictest sense. But are we satisfied with Shakespeare’s being genuinely meaningful only to an elite few unless edited to death or carefully excerpted, with most of the rest of us genuflecting in the name of “culture” and keeping our confusion to ourselves? Should we have to pore laboriously over Shakespeare on the page before seeing his work performed?

by John H. McWhorter, WSJ |  Read more:
Image: Pep Montserrat

Adidas Tubular X Knit
[ed. Nice.]


Friday, September 25, 2015

In Memoriam: Yogi Berra

As a boy of 8 and 9 and 10, growing up in the Bronx, I was a big New York Yankees fan. When you grow up in the Bronx, that’s really all there is to brag about. A zoo and the Yankees.

Nearly every game aired on channel 11 WPIX, and I watched as many as I could, which was nearly all of them.

The Yankees are by far the most successful team in the history of American sports. Not even close. They’re probably the most successful team in the world. For this reason, rooting for the Yankees has often been equated with rooting for a large, wealthy corporation like IBM or GM. I’ve always thought it’s a very poor analogy.

Rooting for the Yankees is actually like rooting for the United States. Each in their own way, the Yankees and United States are the 300 lb. gorilla, that most powerful of entities winning far more than anyone else. Their wealth creates many advantages. Supporters expect them to win, and they usually do. Opponents absolutely revel in their defeats.

All that success means you will be adored by some non-natives who are tired of losing and want to bask in your glory, even if it must be from afar. But mostly you are hated. Anywhere you go in America, some people love the Yankees and many more hate them. Just like the United States is either loved or hated everywhere else in the world.

Who hates IBM?

And just as U.S. history, so stuffed with victory, is chock full of famous figures, so too is Yankee lore replete with famous men in pinstripes.

There are 53 former Yankee players, managers, and executives in the Baseball Hall of Fame, just over 1/6 of the Hall’s total membership.

Can I name them all? Of course not. That’s like naming all the presidents. I have a Ph.D. in history and I still get bogged down once I reach the 1840s (who comes after Van Buren?), and can’t resume a steady line until I re-emerge with Buchanan in 1856; you know, the guy before Lincoln.

For the average person, there are the biggies: Washington, Lincoln, a couple of Roosevelts and so forth.

For Yankee fans, naming their club’s Hall of Famers is actually tougher than naming presidents. There have only been 43 presidents. So most fans know a bunch but not all of them, and then everyone knows the biggies, the Washingtons and Lincolns of baseball.

You don’t have to be a Yankees fan. Hell, you don’t even have to know anything about baseball. You’ve all heard of these guys because they transcend baseball. They’re part of American culture.

Babe Ruth, Lou Gehrig, Joe DiMaggio, Mickey Mantle, and Yogi Berra. Those five.

Ruth is probably the single greatest baseball player of all time and still the most famous American athlete who ever lived; we’ll see how famous Michael Jordan is nearly 70 years after his death. Gehrig’s got a disease named after him and hardly anyone knows its actual name (amyotrophic lateral sclerosis). DiMaggio became a memorable lyric in a seminal Simon and Garfunkel song thirty years after he was the topic of his own hit song. The Mick’s boyish good looks and runaway success made him a poster boy of mid-century American baby boomer aspirations. And Yogi had a cartoon bear named after him.

Yogi also said all that stuff. Things you’ve heard that you may or may not have realized he said. Or stuff you thought he said that he may not have said.

Best known is “It ain’t over til it’s over,” which is among the most famous of American axioms, and which he actually said, while managing the New York Mets in 1973. But there are a lot of others.
  • When you come to a fork in the road, take it (giving directions to his home).
  • You can observe a lot by just watching.
  • No one goes there nowadays, it’s too crowded (speaking of the Copacabana nightclub).
  • Baseball is ninety percent mental and the other half is physical.
  • A nickel ain’t worth a dime anymore.
  • Always go to other people’s funerals, otherwise they won’t come to yours.
  • We made too many wrong mistakes.
Or, as only Yogi could put it, speaking to the phenomenon of misattribution: I really didn’t say everything I said.

But he really did say that.

by The Public Professor |  Read more:
Image: uncredited

Justin Angelos

Thursday, September 24, 2015

Sex: The Kotaku Review

If you’re already a fan of Sex—and there are plenty of you out there—you probably don’t need this review. But if you find yourself on the fence about whether to try this much-heralded, much-argued-over activity, pull up a chair! We’ve got a lot to discuss.

Like many other extended franchise juggernauts, Sex has been around in some form or another for a long time. Originally released as an open-source application and carefully iterated upon over the years, it’s been through its fair share of reimaginings, reboots, and back-to-basics redesigns. Today’s Sex is the most technically advanced version yet, but as we all know, it takes more than eye-popping visuals and high-tech peripherals to make for a truly meaningful experience.

Sex is best understood as a freeform co-op experience where partners work together to achieve one or more user-defined goals. It’s most often played in groups of two, but sometimes more (or less). Broadly speaking, each match-up follows a similar structure–all players are helping one another to achieve a similar goal, and if they work well together, every player can “win.” Take a closer look, though, and you’ll see how creative Sex teams can be, combining inventive techniques with high-level mechanical mastery to achieve unusual but no less satisfying victories.

Aficionados will be pleased to hear that the Sex’s visual presentation is as great as ever–even though it doesn’t seem to have progressed much as of late. Then again, why mess with something that’s already working so well? Today’s Sex features advanced graphical techniques like soft body physics and subsurface scattering; these were incredible when they were first introduced, and they stand the test of time. But with technological innovations coming faster than ever and innovative new VR technology on the horizon, it’ll be important for Sex to step up its technology in the coming years to keep pace.

As true gamers know, it’s gameplay that matters most. The mechanics undergirding Sex are deceptively simple–even if you’ve never played, you probably already understand the fundamentals. There’s some stroking, and sliding, and slapping, and smacking, and, well, you know. All of that. The beauty of Sex is that those basic actions can be combined in all sorts of interesting ways. Sex embraces what game designers call the property of “emergence,” i.e. the designed opportunity for varied combinations of simple components to create a complex end result.

Despite those strong fundamentals, Sex is not without its share of technical issues. Sex can, and often does, fall prey to many of the same kinds of bugs and glitches we’ve seen in other multiplayer games: synchronization errors, dropped connections, poor response times, and the like. Some people seem to wait around forever in the matchmaking lobby, never getting to the actual game.

by Matthew S. Burns, Kotaku | Read more:
Image: Shutterstock

The Now Generation

Wednesday, September 23, 2015



Disconfirming Books

Yesterday The New York Times had a fascinating piece about how ebook sales, contra Aggregation Theory, are actually declining even as publishers and book stores are thriving on the back of print:
Five years ago, the book world was seized by collective panic over the uncertain future of print. As readers migrated to new digital devices, ebook sales soared, up 1,260 percent between 2008 and 2010, alarming booksellers that watched consumers use their stores to find titles they would later buy online. Print sales dwindled, bookstores struggled to stay open, and publishers and authors feared that cheaper ebooks would cannibalize their business… 
But the digital apocalypse never arrived, or at least not on schedule. While analysts once predicted that ebooks would overtake print by 2015, digital sales have instead slowed sharply. Now, there are signs that some ebook adopters are returning to print, or becoming hybrid readers, who juggle devices and paper. Ebook sales fell by 10 percent in the first five months of this year, according to the Association of American Publishers, which collects data from nearly 1,200 publishers. Digital books accounted last year for around 20 percent of the market, roughly the same as they did a few years ago. 
Ebooks’ declining popularity may signal that publishing, while not immune to technological upheaval, will weather the tidal wave of digital technology better than other forms of media, like music and television.
First off, I’m not necessarily surprised that publishers haven’t all gone bankrupt en masse. Much like the music labels publishers have always provided more than distribution, including funding (using a venture capital-like process where one hit pays for a bunch of losers), promotion (discovery is the biggest challenge in a world of abundance, and breaking through is expensive), and expertise (someone needs to do the editing, layout, cover design, etc.). And, as long as there is any print business at all, distribution still matters to a degree given the economics of writing a book: very high fixed costs with minimal marginal costs, which dictates as wide a reach as possible.

Still, none of this explains why ebooks have been stopped in their tracks, and that’s where this discussion gets interesting: not only is it worth thinking about the ebook answer specifically, but also are there broader takeaways that explain what the theory got wrong, and how it can be made better?

EBOOK LESSONS TO BE LEARNED

I think there are three things to be learned from the plateauing in ebook sales:

Price: The first thing to consider about ebooks — and the New York Times’ article touches on this — is that they’re not any cheaper than printed books; indeed, in many cases they are more expensive. The Wall Street Journal wrote earlier this month:
When the world’s largest publishers struck e-book distribution deals with Amazon.com Inc. over the past several months, they seemed to get what they wanted: the right to set the prices of their titles and avoid the steep discounts the online retail giant often applies. But in the early going, that strategy doesn’t appear to be paying off. Three big publishers that signed new pacts with Amazon—Lagardère SCA’s Hachette Book Group, News Corp’s HarperCollins Publishers and CBS Corp.’s Simon & Schuster—reported declining e-book revenue in their latest reporting periods.
Pricing is certainly an art — go too low and you leave money on the table, go too high and you lose too many customers — and there is obviously a case to be made (and Amazon has made it) that in the case of books there is significant elasticity (i.e. price has a significant impact on purchase decisions). Then again, while e-book sales have fallen, they’ve stayed the same percentage of overall book sales — about 20% — which potentially means that the price change didn’t really have an effect at all (more on this in a bit).

What is more interesting about the pricing issue, though, is that the publishers have removed what is traditionally one of digital’s advantages: that it is cheaper. That means the chief advantage of ebooks is that they are more convenient to acquire and store, and that’s about it. And, by extension, that raises the question about just how much lower prices play a role in the success of other aggregators.

User Experience: Note what is lacking when it comes to ebook’s advantages: the user experience. True, some people certainly prefer an e-reader (or their phone or tablet), but a physical book has its advantages as well: relative indestructibility, and little regret if it is destroyed or lost; tangibility, both in regards to feel and in the ability to notate; the ability to share or borrow; and, of course, the fact a book is an escape from the screens we look at nearly constantly. At the very best the user experience comparison (excluding the convenience factor) is a push; I’d argue it tilts towards physical books.

This is in marked contrast to many of the other industries mentioned above. When it comes to media, choosing a show on demand or an individual song is vastly preferable to a programming guide or a CD. Similarly, Uber is better than a taxi in nearly every way, particularly when it comes to payments; Airbnb offers far more selection and rooms that simply aren’t possible through hotel chains; Amazon has superior selection and superior prices, with delivery to your doorstep to boot. It’s arguable the user experience is undervalued in my Aggregation Theory analysis.

Modularization: Notice, though, that there is something in common to all of my user experience examples: what matters is not only that the aggregators are digital, but also that they broke up the incumbent offering into its atomic unit. Netflix offered shows, not channels; first iTunes then Spotify offered songs, not albums; Uber offered the ability to charter individual cars on-demand; Airbnb offered rooms, not hotels; Amazon offers every product, not just the ones that will fit in a bricks-and-mortar retail store.

Ebooks, on the other hand, well, they’re pretty much the same thing as physical books, except they need an expensive device to read them on, while books have their own built-in screen that is both disposable and of a superior resolution (no back-lighting though).

by Ben Thompson, Stratechery |  Read more:
Image: Stratechery

When Dinner Proves Divisive: One Strategy, Many Dishes

[ed. See also: Best Weeknight Recipes]

Back when I cooked only to please myself and one or two other consenting adults, choosing recipes was a breeze. Nothing was off limits. Dishes with olives, stinky cheeses, bitter greens and mushrooms — sometimes all of the above — were on regular rotation. Then I began cooking for kids (picky, omnivorous and otherwise). With them came their nut-allergic friends, vegan guitar teachers and chile-fearing in-laws. Forced to adapt my NC-17 cooking style to a G-rated audience, I paged through cookbooks in search of “crowd pleasers” that proved elusive.

Eventually, I realized that the quest for a perfect recipe that pleases everyone at the table, including oneself, was fruitless.

But in the process, a workaround solution emerged: recipes that could be configured to produce many different dishes at one meal. Like Transformers or fantasy football teams, these meals are both modular and complete, constructed from parts that can be added to or subtracted from at whim.

Suddenly, my weeknight repertoire increased exponentially. It’s easier on the cook when the week assumes a familiar pattern — pasta one night, a main-course salad another night, beans on a third — but to prevent boredom, the dishes themselves needn’t be exactly the same. (Unless, of course, the culinary conservative in your household demands otherwise.)

Just like taco night or baked-potato night, the meal starts with a base element: pasta, beans, fluffy greens. After that, it’s about piling on, or politely passing along, the garnishes.

The definition of a garnish may need some stretching: This is not a shy sprinkling of parsley or a scattering of sesame seeds. The garnish that makes a meal must be full-throated and filling. Half of a ripe avocado is a garnish. Likewise, a soft-yolk egg (boiled, poached or fried). Bacon lardons, shredded chicken and diced steak. Crushed chiles and leftover roasted vegetables. With enough garnishes, even the plainest of plain foods — pasta with butter and cheese — can balloon into a lively meal.

by Julia Moskin, NY Times |  Read more:
Image: Melina Hammer

Tuesday, September 22, 2015


RLoN Wang, Frame of mind
via:

Death to the Internet! Long Live the Internet!

Net neutrality, cultural decay, the corporate web, classism, & the decline of western civilization — all in one convenient essay!

Even now it’s a struggle to clearly remember that ecstatic time of positive internet esprit de corps before money and narcissism utterly dominated the culture. Those ancient ‘90s to early oughts before endlessly aggressive advertising, encyclopedic terms of service, incessant tracking, the constant need to register everywhere, subversive clickbait, the legions of trolls, threats of doxxing, careers ended by a single tweet, and all those untiring spam bots which attempt to plague every digital inch of it.

Difficult to explain to anyone under twenty-five who did not directly witness the foundational times. Or anyone over twenty-five who did not participate. Or to anyone right now who uses only Facebook and Amazon. That lost age has become the Old West of the internet: a brief memory before once verdant lands were dominated and overrun by exploitative business interests and ignorant bumbling settlers. You can’t go back, and there’s no museum for an experience. That early culture was ineffable and fleeting. Not unlike, say: the concept of lifetime job security, which no longer even seems plausible.

Now, of course, plenty of happy and creative people still use the internet (at least, to like, buy an appliance or a book or something) but they don’t make up most of internet culture; that majority of online participation which sets the social standards, creates the original content, and is now broadly, inescapably corralled by social media. Those who spend more than 20 hours a week actively participating online (like me) who are forced into the corporate tide, or relegated to the sluggish unknown hinterlands. (...)

Need we wonder why the book “Nineteen Eighty-Four” remains so relevant? Even thirty years after Steve Jobs commemorated the futuristic date by ironically pretending to destroy the entrenched corporate power structure. The same man who turned out to be one of the most proprietary-minded technologists ever to influence popular computing culture. The person who cemented the sale of style over utility, which continues to unendingly trick people. Selling the trappings of refined taste instead of core pragmatism. Like how the classic campaign to “Think Different” fetishized intellectual and artistic rebellion in order to ironically sell a mass-market consumer product. And it worked amazingly. People have been strongly influenced to desire a unique personal experience and an individualized version of success instead of a shared communal growth. So in this fragmented and increasingly de-localized culture, everyone becomes the protagonist of their own little narcissistic adventure instead of a powerful collective assisting each other for the greater good. And because not everyone can be that one-in-a-billion genius, much existential disappointment has been ingrained once it was set as the highest goal.

This is advantageous to business interests because unsatisfied people are more susceptible to the sale of solutions to combat unhappiness. And this emotional and cultural development also makes it easier to dehumanize others, to be jealous of their successes, and feel left out when not receiving high accolades. Creating the much lamented vicious cycle of kindergarten graduation ceremonies and participation trophies which has wrought the most egotistical generation ever recorded. It also has an oligarchic benefit of justifying power held in the small circles of the moneyed class, because success, even if born into, is often assumed to be deserved.

So it’s no coincidence that wealthy special interests have gained massive control over democracy by incentivizing and preaching the supremacy of individual gains over communal interests. Unlike a more simplistic fascism, this grants minority power to the upper class by motivating the populace to work hard towards individual goals and individual distractions without requiring the classic top-down crushing social conformity which is more obvious and easier to fight. Instead, the insidious dreams of grand individual success, in spite of all contrary indications, keep everyone’s broader rewards lowered. It’s like a lottery for human desires: many pay in and get essentially nothing while a tiny few win it all so as to demonstrate it is supposedly possible. Justified elite power is the cultural root of corruption, as Thomas Jefferson ironically understood, and must be fought with repeated revolution.

We all recognize a nebulous natural cynicism these days found not only in the post-apocalyptic and zombie fictions so symbolically appealing to our collective unconscious, but also in the simple facts of a historically deadlocked legislature, a rampantly scare-mongering media, the rise once again of an excessively wealthy upper class, and the corruption of debt-based higher learning. That last being perhaps the most intellectually disheartening, as the ivory tower repeatedly demonstrates its moral bankruptcy by a reliance on horrific levels of tuition, exploitative and wasteful sporting, shoddy oversight of publishing, general lack of moral center, and a scattered vision for the future (pigeon-holed rather correctly by conservatives as often out of touch). Much could perhaps be excused by the inevitable corruption of institutionalization, but where is the forethought of previous generations? Why must we rely on impulsive social media and a polarized profit-oriented mass media for our appraisals of the future?

If Obama’s unpredictable election proved anything it’s that positive ideological movements are so frightening to the moneyed establishment they’ll foster complete obstruction to thwart even the simple belief that hope and change are actually possible. Generating cynicism aids complacency, because it’s difficult for a person dealing with all their own daily struggles to constantly study the complex system and renew the idealism required to force political change, especially during periods of nominally acceptable economic stability. Revolutions are motivated by hunger and heavy oppression, generally years after the slow and determined rise of a stratified class system (a pattern which has plagued us since the dawn of civilization).

For thirty years now capitalism’s trickle-down variant has been systematically attempting to recreate an intransigent system of wealth and privilege. Conservative propaganda has assured us that if the rich succeed, everyone benefits. But how long must this ludicrous delusion be perpetuated? Is not the entire history of civil humanity a testament to the popular misery of allowing an upper class minority to rule? This should be especially poignant in a country which was designed to break hereditary dominance and unrepresentative power. Yet here we are again, watching civilization repeat its famous pattern, locking the populace into hard work and distraction without sharing in the full rewards. America chugs along with its bread and circuses, like a late-season Happy Days episode, where the original magic is gone but the characters continue acting out a hollow version of the thing we used to love and cherish. So goes sitcoms… so goes the world wide web… so goes civilization…

The rise of an entirely corporate internet is just one more idealistic casualty of allowing the amoral dollar to inform every aspect of our lives. Market efficiencies, so touted by the right, can generate competition between otherwise possible monopolies, but function best only in fields of limited and uncoordinated resources. They are not suited to everything, and especially not to something as nearly immaterial and gigantic as cyberspace, where supply and demand do not function normally; a place where capitalism has often struggled to find what it can sell. Where demand has to be generated artificially with subtle and disguised viral marketing to trick and deceive us. The newest things you didn’t know you needed but all the cool kids have. Since wealth expands to dominate all emerging cultural forms, it works to control even the nearly limitless virtual environments formed of patterned energy and communal human consciousness.

In the same manner that liberty gets subsumed for security, creativity often dies upon the altar of sales. Advertising’s goal is convincing and deceiving, not compassion. It is the art of propaganda and should constantly be doubted. Excessive needs, worries, and calamities are fostered so that new cures and products can be sold. Just as rulers create fear to limit freedom, so corporations must generate the need for increased consumption.

Cultivating social anxiety can make warrantless wiretapping, indefinite detention, terrorist watchlists, illegal foreign prisons, preemptive perpetual war, pushbutton murder by drone, and being bathed in x-rays at every airport seem incrementally acceptable. If you pile on the impediments slowly, and each seems necessary at the time, they morph into those inevitable and accepted hassles of modern life. Such as how general anxiety generates the sale of status items, snake-oil cures, distracting entertainments, and self-help regimes — it’s the creep of supposed necessity. Just like websites becoming overrun with advertisements, click-bait, registering, tracking, profiling, and endless general noise. In return for which we get increasingly bland and controlled services. With all these small losses, the cultural whole is diminished.

by Nicholas Kerkhoff, Medium | Read more:
Image: uncredited

The Dimming of the Light


With its revolutionary heat and rational cool, French thought once dazzled the world. Where did it all go wrong?

There are many things we have come to regard as quintessentially French: Coco Chanel’s little black dress, the love of fine wines and gastronomy, the paintings of Auguste Renoir, the smell of burnt rubber in the Paris Métro. Equally distinctive is the French mode and style of thinking, which the Irish political philosopher Edmund Burke described in 1790 as ‘the conquering empire of light and reason’. He meant this as a criticism of the French Revolution, but this expression would undoubtedly have been worn as a badge of honour by most French thinkers from the Enlightenment onwards.

Indeed, the notion that rationality is the defining quality of humankind was first celebrated by the 17th-century thinker René Descartes, the father of modern French philosophy. His skeptical method of reasoning led him to conclude that the only certainty was the existence of his own mind: hence his ‘cogito ergo sum’ (‘I think, therefore I am’). This French rationalism was also expressed in a fondness for abstract notions and a preference for deductive reasoning, which starts with a general claim or thesis and eventually works its way towards a specific conclusion – thus the consistent French penchant for grand theories. As the essayist Emile Montégut put it in 1858: ‘There is no people among whom abstract ideas have played such a great role, and whose history is rife with such formidable philosophical tendencies.’

The French way of thinking is a matter of substance, but also style. This is most notably reflected in the emphasis on rhetorical elegance and analytical lucidity, often claimed to stem from the very properties of the French language: ‘What is not clear,’ affirmed the writer Antoine de Rivarol in 1784, somewhat ambitiously, ‘is not French.’ Typically French, too, is a questioning and adversarial tendency, also arising from Descartes’ skeptical method. The historian Jules Michelet summed up this French trait in the 19th century in the following way: ‘We gossip, we quarrel, we expend our energy in words; we use strong language, and fly into great rages over the smallest of subjects.’ A British Army manual issued before the Normandy landings in 1944 sounded this warning about the cultural habits of the natives: ‘By and large, Frenchmen enjoy intellectual argument more than we do. You will often think that two Frenchmen are having a violent quarrel when they are simply arguing about some abstract point.’

Yet even this disputatiousness comes in a very tidy form: the habit of dividing issues into two. It is not fortuitous that the division of political space between Left and Right is a French invention, nor that the distinction between presence and absence lies at the heart of Jacques Derrida’s philosophy of deconstruction. French public debate has been framed around enduring oppositions such as good and evil, opening and closure, unity and diversity, civilisation and barbarity, progress and decadence, and secularism and religion.

Underlying this passion for ideas is a belief in the singularity of France’s mission. This is a feature of all exceptionalist nations, but it is rendered here in a particular trope: that France has a duty to think not just for herself, but for the whole world. In the lofty words of the author Jean d’Ormesson, writing in the magazine Le Point in 2011: ‘There is at the heart of Frenchness something which transcends it. France is not only a matter of contradiction and diversity. She also constantly looks over her shoulder, towards others, and towards the world which surrounds her. More than any nation, France is haunted by a yearning towards universality.’

This specification of a distinct French way of thinking is not rooted in a claim about Gallic ‘national character’. These ideas are not a genetic inheritance, but rather the product of specific social and political factors. The Enlightenment, for example, was a cultural phenomenon which spread rationalist ideas across Europe and the Americas. But in France, from the mid-18th century, this intellectual movement produced a particular type of philosophical radicalism, which was articulated by a remarkable group of thinkers, the philosophes. Thanks to the influence of the likes of Voltaire, Diderot and Rousseau, the French version of rationalism took on a particularly anti-clerical, egalitarian and transformative quality. These subversive precepts also circulated through another French cultural innovation, the salon: this private cultural gathering flourished in high society, contributing to the dissemination of philosophical and artistic ideas among French elites, and the empowerment of women.

This intellectual effervescence challenged the established order of the ancien régime during the second half of the 18th century. It also gave a particularly radical edge to the French Revolution, compared, notably, with its American counterpart. Thus, 1789 was not only a landmark in French thought, but the culmination of the Enlightenment’s philosophical radicalism: it gave rise to a new republican political culture, and enduringly associated the very idea of Frenchness with novelty and resistance to oppression. It also crystallised an entirely original way of thinking about the public sphere, centred around general principles such as the ‘Declaration of the Rights of Man’, the civic conception of the nation (resting on shared values as opposed to blood ties), the ideals of liberty, equality and fraternity, and the notions of the general interest and popular sovereignty.

One might object that, despite this common and lasting revolutionary heritage, the French have remained too diverse and individualistic to be characterised in terms of a general mind-set. Yet there are two decisive reasons why it is possible – and indeed necessary – to speak of a collective French way of thinking. Firstly, since the Enlightenment, France has granted a privileged role to thinkers, recognising them as moral and spiritual guides to society – a phenomenon reflected in the very notion of the ‘intellectual’, which is a late-19th-century (French) invention. Public intellectuals exist elsewhere, of course, but in France they enjoy an unparalleled degree of visibility and social legitimacy.

Secondly, to an extent that is also unique in modern Western culture, France’s major cultural bodies – from the State to the great institutions of secondary and higher education, the major academies, the principal publishing houses, and the leading press organs – are all concentrated in Paris. This cultural centralisation extends to the school curriculum (all high-school students have to study philosophy up to the baccalauréat), and this explains how and why French ways of thought have exhibited such a striking degree of stylistic consistency.

by Sudhir Hazareesingh, Aeon | Read more:
Image: Jean-Paul Sartre and Simone de Beauvoir having lunch at the "La Coupole" Brasserie, December 1973. Photo by Guy Le Querrec/Magnum