Thursday, May 15, 2014

Love Me Tinder

That fall, his relationship of two and a half years finally ended, and Eli found himself single again. He was 27 years old, losing the vestigial greenness of his youth. He wanted to have sex with some women, and he wanted some stories to tell. He updated his dating profiles. He compiled his photos. He experimented with taglines. He downloaded all the apps. He knew the downsides—the perfidy of the deceptive head shot, the seductress with the intellect of a fence post—but he played anyway. He joined every free dating service demographically available to him.

Around the same time, somewhere across town, a woman named Katherine shut down her OkCupid account. She had approached Internet dating assertively, had checked the box that read “Short-term dating” and the one that read “Casual sex.” Then a casual encounter had turned menacing, and Katherine decided she no longer wanted to pursue sex with total strangers. But she had a problem: She liked the adventure, she had the usual human need for other humans, and she needed the convenience of meeting people online. Katherine was 37, newly single, with family obligations and a full-time job. Most of her friends were married. She needed something new.

When Katherine and Eli downloaded Tinder in October 2013, they joined millions of Americans interested in trying the fastest-growing mobile dating service in the country. Tinder does not give out statistics about the number of its users, but the app has grown from being the plaything of a few hundred Los Angeles party kids to a multinational phenomenon in less than a year. Unlike the robot yentas of yore (Match.com, OkCupid, eHarmony), which out-competed one another with claims of compatibility algorithms and secret love formulas, the only promise Tinder makes is to show you the other users in your immediate vicinity. Depending on your feelings for these people, you swipe them to the left (meaning “no thanks”) or to the right (“yes, please”). Two people who swipe each other to the right will “match.” Your matches accrue in a folder, and often that's the end of the story. Other times you start texting. The swiping phase is as lulling in its eye-glazing repetition as a casino slot machine, the chatting phase ideal for idle, noncommittal flirting. In terms of popularity, Tinder is a massive and undeniable success. Whether it works depends on your idea of “working.”
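
[ed. The matching step described above is just a mutual opt-in check. Here's a minimal sketch of that logic in Python; the class, names, and data structures are illustrative, not Tinder's actual code:]

    # A rough sketch of "double opt-in" matching: a match exists only when
    # both people have swiped each other to the right. Purely illustrative.
    class SwipePool:
        def __init__(self):
            self.right_swipes = set()   # (swiper, target) pairs swiped right
            self.matches = []           # accrued matches, like the app's folder

        def swipe(self, swiper, target, direction):
            if direction != "right":
                return None             # a left swipe is simply discarded
            self.right_swipes.add((swiper, target))
            # Check whether the other person already swiped right, too.
            if (target, swiper) in self.right_swipes:
                self.matches.append((swiper, target))
                return (swiper, target)
            return None

    pool = SwipePool()
    pool.swipe("Katherine", "Eli", "right")
    print(pool.swipe("Eli", "Katherine", "right"))  # -> ('Eli', 'Katherine'), a match

Until both sides opt in, neither ever learns what the other did; rejection stays invisible, and only mutual interest produces a conversation.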

For Katherine, still wary from her bad encounter, Tinder offered another advantage. It uses your pre-existing Facebook network and shows which friends, if any, you have in common with the person in the photo. On October 16, Eli appeared on her phone. He was cute. He could tell a joke. (His tagline made her laugh.) They had one friend in common, and they both liked Louis C.K. (“Who doesn't like Louis C.K.?” Eli says later. “Oh, you also like the most popular comedian in America?”) She swiped him to the right. Eli, who says he would hook up with anybody who isn't morbidly obese or in the middle of a self-destructive drug relapse, swipes everyone to the right. A match!

He messaged first. “Sixty-nine miles away??” he asked.

“I'm at a wedding in New Jersey,” she replied.

So, Eli said to himself, she's lonely at a wedding in New Jersey.

Eli: “So why you on Tinder?”

Katherine: “To date. You?”

Eli said it was an “esteem” thing. It had taught him that “women find me more attractive than I think.” Unfortunately for Katherine, he told her he didn't have a lot of time to date. He worked two jobs. They wanted different things. It therefore read as mock bravado when Eli wrote, “But you ever just want to fuck please please holler at me cool???” He added his number.

Katherine waited an hour to respond. Then: “Ha.” And then, one minute later, “I will.” And: “I kinda do.”

Eli: “Please please do. ;)”

Katherine liked that he was younger. He was funny. He did not, like one guy, start the conversation with “Don't you want to touch my abs?” He said “please.” Eli liked that Katherine was older. Katherine wrote: “You can't be psycho or I will tell [name of mutual friend].” He sympathized with that, too.

The parameters were clear. They arranged to meet.
···
I first signed up for Tinder in May but found it skewed too young. (I'm 32.) When I looked again in mid-October, everything had changed. I swiped through people I knew from college, people I might've recognized from the train. I saw it had gone global when a friend in England posted a Tinder-inspired poem on her Facebook page (“and here are we, He and Me, our flat-screen selves rendered 3D”). I started to check it regularly. The more I used it, the more I considered how much it would have helped me at other times in my life—to make friends in grad school, to meet people after moving to a new city. It seemed possible that one need never be isolated again.

In December, I flew out to Los Angeles, where Tinder is based, to visit the company's offices and meet two of its founders, Sean Rad and Justin Mateen, both 27. (The third is Jonathan Badeen, the engineer who built the app.) Rad is the chief executive officer; Mateen is chief marketing officer. They are also best friends, share a resemblance to David Schwimmer, and have been known to show up for work in the same outfit. I was staying only a mile from Tinder's offices in West Hollywood, and within forty-eight hours both founders showed up on my Tinder feed. Other memorable appearances on my feed in Los Angeles included a guy holding a koala bear, a guy and his Yorkshire terrier, in matching sweaters, and a pipe-smoking dandy with a Rasputin beard, horn-rimmed glasses, and a gold ring the exact shape and size of a cicada.

Rad and Mateen are local boys. They both grew up in Beverly Hills, although they attended different private schools. They first encountered each other at 14, when Sean made a play for Justin's girlfriend. (“We met because we both liked the same girl—but the girl was my girlfriend,” says Justin.) They reconnected at USC, and then both started independent companies. Justin's was a “social network for celebrities.” Sean's was Adly, a platform that allows companies to advertise via celebrities' social networks. He sold the majority of his stake in 2012. “I didn't want to be in the ad business,” he says. He also didn't want to make things for computers. “Computers are going extinct,” he says. “Computers are just work devices.” For people his age, the primary way to interface with the technical world was through a mobile device.

Rad and Mateen have shared business ideas with each other for years, and every idea begins with a problem. The key to solving the problem that interested Tinder: “I noticed that no matter who you are, you feel more comfortable approaching somebody if you know they want you to approach them,” says Sean. They had both experienced the frustration of sending smoke signals through social media. “There are people that want to get to know you who don't know you, so they're resorting to Facebook,” explains Justin. When those advances or friendings or followings are unwanted, they say, the overtures can seem a little “creepy.” (Consider, for example, the long-standing mystery of the Facebook “poke.”) Sean was interested in the idea of the “double opt-in”—some establishment of mutual interest that precedes interaction.

And so Tinder entered a fossilizing industry. Most of the big players (including Match.com, Plenty of Fish, OkCupid, eHarmony, Manhunt, JDate, and Christian Mingle) established themselves before billions of humans carried miniature satellite-connected data processors in their pockets, before most people felt comfortable using their real names to seek companionship online, and before a billion people joined Facebook—before Facebook even existed. Tinder's major advantages come from exploiting each of these recent developments. The company also managed to accrue, in less than a year of existence, the only truly important asset of any dating site: millions and millions of users.

by Emily Witt, GQ |  Read more:
Image: uncredited

The Last King of the American Middlebrow


The set of “Jeopardy!” was given a shiny makeover last year, in preparation for the quiz show’s fiftieth birthday. The new look features that familiar, top-heavy, bubble-letter logo and a gorgeously tacky sunset backdrop that provides an odd companion to the sexless blue-and-white board. In fluorescent spirit, though, the set is unchanged since the 1980s, much like the program itself.

Contestants come and go. Here’s our returning champion. He’s from Maryland, she’s from Chicago. Some, like this year’s star, Arthur Chu, even briefly become big deals. But it is Alex Trebek who has remained the centerpiece. His extended tenure as America’s senior-most faculty member has made Americans forget that he’s playing a part; a few years ago, Trebek was voted the eighth-most-trusted person in the United States, sandwiched between Bill and Melinda Gates. “He’s like a Ward Cleaver figure,” says Ken Jennings, the most successful “Jeopardy!” contestant ever. “But for the past thirty years.” This month, in fact, the host will mark three decades as the face and voice of “Jeopardy!”; like the show’s theme music, he is almost post-iconic, such a known entity that he’s just there.

He might not be there for much longer, though. “Jeopardy!” continues to draw around 25 million viewers per week, making it second among syndicated game shows (behind only “Wheel of Fortune,” which “Jeopardy!” fans dismiss as little better than a televised jumble puzzle). But Trebek has hinted that he will retire when his contract runs out in 2016. Already he has outlasted Jay Leno and now David Letterman as one of the final one-man TV brands from the time before DVRs. Audience shares and attention spans, however, aren’t the only things that have changed during his run. In the Internet era, knowing a little about a lot provides diminished cachet: You don’t have to retain facts when they can just be Googled. These days there’s a throwback charm to the whole “Jeopardy!” enterprise and the appeal, in Trebek’s late-career performances, of a simple job well done. (...)

Trebek has built his career on an air of erudition. But offscreen he is equally invested in a second persona, one that colleagues say has emerged more in recent years. This other Trebek is “much less of a ‘Masterpiece Theater’ guy,” is how a staffer puts it. “He’s more of a getting-his-hands-dirty guy.”

So even as Trebek might casually draw the solar system during conversation—because of course that’s a thing the host of “Jeopardy!” would do—he also takes pride in noting that he was almost kicked out of boarding school as a boy. He sometimes brags about his breakfast of Snickers and Diet Pepsi and likes to talk about the rec-league hockey games he suited up for with Dave Coulier (the “Full House” star who was not John Stamos, Bob Saget, or an Olsen twin). He has had two heart attacks but sums up his current exercise regimen as “I drink.” Yes, Trebek can describe for you the 1928 Mouton that he once tasted. But, he is quick to joke: “I’m not a true wine connoisseur. I’m just a drinker.”

Fact, according to Trebek: His favorite place in Los Angeles these days is Home Depot. He and his second wife, Jean, whom Trebek married in 1990 when she was 26 and he was a 49-year-old divorcé, live in a nice house in the Valley, and he spends a great deal of time thinking about maintenance and improvements (which he accomplishes with the help of “Manuel and Miguel”). Trebek says that when he gets up in the middle of the night—he has terrible insomnia—he will lie awake for hours plotting how to fix the sliver of light peeking through his window, and all the other home-repair projects he wants to tackle next.

by Noreen Malone, TNR |  Read more:
Image: Ian Allen

Massive Attack

Wednesday, May 14, 2014

The Snowden Saga Begins

[ed. Wow. Want to know what a true patriot looks like these days? (hint: not some phony tea bagger dressed up in guns and slogans)]

Make no mistake: it’s been the year of Edward Snowden. Not since Daniel Ellsberg leaked the Pentagon Papers during the Vietnam War has a trove of documents revealing the inner workings and thinking of the U.S. government so changed the conversation. In Ellsberg’s case, that conversation was transformed only in the United States. Snowden has changed it worldwide. From six-year-olds to Angela Merkel, who hasn’t been thinking about the staggering ambitions of the National Security Agency, about its urge to create the first global security state in history and so step beyond even the most fervid dreams of the totalitarian regimes of the last century? And who hasn’t been struck by how close the agency has actually come to sweeping up the communications of the whole planet? Technologically speaking, what Snowden revealed to the world -- thanks to journalist Glenn Greenwald and filmmaker Laura Poitras -- was a remarkable accomplishment, as well as a nightmare directly out of some dystopian novel.

From exploiting backdoors into the Internet’s critical infrastructure and close relationships with the planet's largest tech companies to performing economic espionage and sending spy avatars into video games, the NSA has been relentless in its search for complete global omniscience, even if that is by no means the same thing as omnipotence. It now has the ability to be a hidden part of just about any conversation just about anywhere. Of course, we don’t yet know the half of it, since no Edward Snowden has yet stepped forward from the inner precincts of the Defense Intelligence Agency, the CIA, the National Geospatial Intelligence Agency, or other such outfits in the "U.S. intelligence community." Still, what we do know should take our collective breath away. And we know it all thanks to one young man, hounded across the planet by the U.S. government in an “international manhunt.”

As an NSA contractor, Snowden found himself inside the blanket of secrecy that has fallen across our national security state since 9/11 and there he absorbed an emerging principle on which this country was never founded: that “they” know what’s best for us, and that, in true Orwellian fashion, our ignorance is our strength. Increasingly, this has become Washington's twenty-first-century mantra, which is not to be challenged. Hence, the extremity of the outrage, as well as the threats and fantasies of harm, expressed by those in power (or their recently retired channelers) toward Snowden.

One brave young man with his head firmly fastened on his shoulders found himself trapped in Moscow and yet never lost his balance, his good sense, or his focus. As Jonathan Schell wrote in September 2013, “What happened to Snowden in Moscow diagramed the new global reality. He wanted to leave Russia, but the State Department, in an act of highly dubious legality, stripped him of his passport, leaving him -- for purposes of travel, at least -- stateless. Suddenly, he was welcome nowhere in the great wide world, which shrank down to a single point: the transit lounge at Sheremetyevo [Airport]. Then, having by its own action trapped him in Russia, the administration mocked and reviled him for remaining in an authoritarian country. Only in unfree countries was Edward Snowden welcome. What we are pleased to call the ‘free world’ had become a giant prison for a hero of freedom.”

And of course, there was also a determined journalist, who proved capable of keeping his focus on what mattered while under fierce attack, who never took his eyes off the prize. I’m talking, of course, about Glenn Greenwald. Without him (and the Guardian, Laura Poitras, and Barton Gellman of the Washington Post), “they” would be observing us, 24/7, but we would not be observing them. This small group has shaken the world.

This is publication day for Greenwald’s new book, No Place to Hide: Edward Snowden, the NSA, and the U.S. Security State, about his last near-year swept away by the Snowden affair. It’s been under wraps until now for obvious reasons. Today, TomDispatch is proud, thanks to the kindness of Greenwald’s publisher, Metropolitan Books, to be releasing an adapted, much shortened version of its first chapter on how this odyssey of our American moment began. (...)

----

On December 1, 2012, I received my first communication from Edward Snowden, although I had no idea at the time that it was from him.

The contact came in the form of an email from someone calling himself Cincinnatus, a reference to Lucius Quinctius Cincinnatus, the Roman farmer who, in the fifth century BC, was appointed dictator of Rome to defend the city against attack. He is most remembered for what he did after vanquishing Rome’s enemies: he immediately and voluntarily gave up political power and returned to farming life. Hailed as a “model of civic virtue,” Cincinnatus has become a symbol of the use of political power in the public interest and the worth of limiting or even relinquishing individual power for the greater good.

Glenn Greenwald, TomDispatch |  Read more:
Image: via:

Duilio Barnabè (1914-1961) Still Life with Fruit and Oranges
via:

Videophony

[ed. I read this story today (Anti-Surveillance Mask Lets You Pass as Someone Else) and it reminded me of this memorable passage from David Foster Wallace's Infinite Jest.]

(1) It turned out that there was something terribly stressful about visual telephone interfaces that hadn't been stressful at all about voice-only interfaces. Videophone consumers seemed suddenly to realize that they'd been subject to an insidious but wholly marvelous delusion about conventional voice-only telephony. They'd never noticed it before, the delusion — it's like it was so emotionally complex that it could be countenanced only in the context of its loss. Good old traditional audio-only phone conversations allowed you to presume that the person on the other end was paying complete attention to you while also permitting you not to have to pay anything even close to complete attention to her. A traditional aural-only conversation — utilizing a hand-held phone whose earpiece contained only 6 little pinholes but whose mouthpiece (rather significantly, it later seemed) contained (6²) or 36 little pinholes — let you enter a kind of highway-hypnotic semi-attentive fugue: while conversing, you could look around the room, doodle, fine-groom, peel tiny bits of dead skin away from your cuticles, compose phone-pad haiku, stir things on the stove; you could even carry on a whole separate additional sign-language-and-exaggerated-facial-expression type of conversation with people right there in the room with you, all while seeming to be right there attending closely to the voice on the phone. And yet — and this was the retrospectively marvelous part — even as you were dividing your attention between the phone call and all sorts of other idle little fuguelike activities, you were somehow never haunted by the suspicion that the person on the other end's attention might be similarly divided. During a traditional call, e.g., as you let's say performed a close tactile blemish-scan of your chin, you were in no way oppressed by the thought that your phonemate was perhaps also devoting a good percentage of her attention to a close tactile blemish-scan. It was an illusion and the illusion was aural and aurally supported: the phone-line's other end's voice was dense, tightly compressed, and vectored right into your ear, enabling you to imagine that the voice's owner's attention was similarly compressed and focused . . . even though your own attention was not, was the thing. This bilateral illusion of unilateral attention was almost infantilely gratifying from an emotional standpoint: you got to believe you were receiving somebody's complete attention without having to return it. Regarded with the objectivity of hindsight, the illusion appears arational, almost literally fantastic: it would be like being able both to lie and to trust other people at the same time.

Video telephony rendered the fantasy insupportable. Callers now found they had to compose the same sort of earnest, slightly overintense listener's expression they had to compose for in-person exchanges. Those callers who out of unconscious habit succumbed to fuguelike doodling or pants-crease-adjustment now came off looking rude, absentminded, or childishly self-absorbed. Callers who even more unconsciously blemish-scanned or nostril-explored looked up to find horrified expressions on the video-faces at the other end. All of which resulted in videophonic stress.

Even worse, of course, was the traumatic expulsion-from-Eden feeling of looking up from tracing your thumb's outline on the Reminder Pad or adjusting the old Unit's angle of repose in your shorts and actually seeing your videophonic interfacee idly strip a shoelace of its gumlet as she talked to you, and suddenly realizing your whole infantile fantasy of commanding your partner's attention while you yourself got to fugue-doodle and make little genital-adjustments was deluded and insupportable and that you were actually commanding not one bit more attention than you were paying, here. The whole attention business was monstrously stressful, video callers found.

(2) And the videophonic stress was even worse if you were at all vain. I.e. if you worried at all about how you looked. As in to other people. Which all kidding aside who doesn't. Good old aural telephone calls could be fielded without makeup, toupee, surgical prostheses, etc. Even without clothes, if that sort of thing rattled your saber. But for the image-conscious, there was of course no such answer-as-you-are informality about visual-video telephone calls, which consumers began to see were less like having the good old phone ring than having the doorbell ring and having to throw on clothes and attach prostheses and do hair-checks in the foyer mirror before answering the door.

But the real coffin-nail for videophony involved the way callers' faces looked on their TP screen, during calls. Not their callers' faces, but their own, when they saw them on video. It was a three-button affair, after all, to use the TP's cartridge-card's Video-Record option to record both pulses in a two-way visual call and play the call back and see how your face had actually looked to the other person during the call. This sort of appearance-check was no more resistible than a mirror. But the experience proved almost universally horrifying. People were horrified at how their own faces appeared on a TP screen. It wasn't just 'Anchorman's Bloat,' that well-known impression of extra weight that video inflicts on the face. It was worse. Even with high-end TPs' high-def viewer-screens, consumers perceived something essentially blurred and moist-looking about their phone-faces, a shiny pallid indefiniteness that struck them as not just unflattering but somehow evasive, furtive, untrustworthy, unlikable. In an early and ominous InterLace/G.T.E. focus-group survey that was all but ignored in a storm of entrepreneurial sci-fi-tech enthusiasm, almost 60% of respondents who received visual access to their own faces during videophonic calls specifically used the terms untrustworthy, unlikable, or hard to like in describing their own visage's appearance, with a phenomenally ominous 71% of senior-citizen respondents specifically comparing their video-faces to that of Richard Nixon during the Nixon-Kennedy debates of B.S. 1960.

The proposed solution to what the telecommunications industry's psychological consultants termed Video-Physiognomic Dysphoria (or VPD) was, of course, the advent of High-Definition Masking; and in fact it was those entrepreneurs who gravitated toward the production of high-definition videophonic imaging and then outright masks who got in and out of the short-lived videophonic era with their shirts plus solid additional nets.

Mask-wise, the initial option of High-Definition Photographic Imaging — i.e. taking the most flattering elements of a variety of flattering multi-angle photos of a given phone-consumer and — thanks to existing image-configuration equipment already pioneered by the cosmetics and law-enforcement industries — combining them into a wildly attractive high-def broadcastable composite of a face wearing an earnest, slightly overintense expression of complete attention — was quickly supplanted by the more inexpensive and byte-economical option of (using the exact same cosmetic-and-FBI software) actually casting the enhanced facial image in a form-fitting polybutylene-resin mask, and consumers soon found that the high up-front cost of a permanent wearable mask was more than worth it, considering the stress- and VFD-reduction benefits, and the convenient Velcro straps for the back of the mask and caller's head cost peanuts; and for a couple fiscal quarters phone/cable companies were able to rally VPD-afflicted consumers' confidence by working out a horizontally integrated deal where free composite-and-masking services came with a videophone hookup. The high-def masks, when not in use, simply hung on a small hook on the side of a TP's phone-console, admittedly looking maybe a bit surreal and discomfiting when detached and hanging there empty and wrinkled, and sometimes there were potentially awkward mistaken-identity snafus involving multi-user family or company phones and the hurried selection and attachment of the wrong mask taken from some long row of empty hanging masks — but all in all the masks seemed initially like a viable industry response to the vanity-, stress-, and Nixonian-facial-image problem.

(2 and maybe also 3) But combine the natural entrepreneurial instinct to satisfy all sufficiently high consumer demand, on the one hand, with what appears to be an almost equally natural distortion in the way persons tend to see themselves, and it becomes possible to account historically for the speed with which the whole high-def-videophonic-mask thing spiralled totally out of control. Not only is it weirdly hard to evaluate what you yourself look like, like whether you're good-looking or not — e.g. try looking in the mirror and determining where you stand in the attractiveness-hierarchy with anything like the objective ease you can determine whether just about anyone else you know is good-looking or not — but it turned out that consumers' instinctively skewed self-perception, plus vanity-related stress, meant that they began preferring and then outright demanding videophone masks that were really quite a lot better-looking than they themselves were in person. High-def mask-entrepreneurs ready and willing to supply not just verisimilitude but aesthetic enhancement — stronger chins, smaller eye-bags, air-brushed scars and wrinkles — soon pushed the original mimetic-mask-entrepreneurs right out of the market. In a gradually unsubtlizing progression, within a couple more sales-quarters most consumers were now using masks so undeniably better-looking on videophones than their real faces were in person, transmitting to one another such horrendously skewed and enhanced masked images of themselves, that enormous psychosocial stress began to result, large numbers of phone-users suddenly reluctant to leave home and interface personally with people who, they feared, were now habituated to seeing their far-better-looking masked selves on the phone and would on seeing them in person suffer (so went the callers' phobia) the same illusion-shattering aesthetic disappointment that, e.g., certain women who always wear makeup give people the first time they ever see them without makeup.

by David Foster Wallace, Infinite Jest via Kickstarter |  Read more:
Image: via:

This Plastic Is Made Of Shrimp Shells

There ain't no reason why we can't replace plastic with something biodegradable. Here's one option: a material called shrilk. It is made from a chemical in shrimp shells called chitosan, a version of chitin, which is the second-most abundant organic material on the planet, found in fungal cells, insect exoskeletons, and butterfly wings.

Researchers at Harvard's Wyss Institute for Biologically Inspired Engineering said the material could be relatively easily manufactured in mass quantities and used to make large 3D objects. The material breaks down within a "few weeks" of being thrown away, and provides nutrients for plants, according to a statement.

Chitosan can be obtained from shrimp shells, which are usually discarded or else used to manufacture makeup and fertilizer. Fortunately, people with shellfish allergies don't seem to react to chitosan, according to a study of chitosan-coated bandages.

by Douglas Main, Popular Mechanics |  Read more:
Image: US Government / Wikimedia commons

The Robot Car of Tomorrow May Just Be Programmed to Hit You

Suppose that an autonomous car is faced with a terrible decision to crash into one of two objects. It could swerve to the left and hit a Volvo sport utility vehicle (SUV), or it could swerve to the right and hit a Mini Cooper. If you were programming the car to minimize harm to others–a sensible goal–which way would you instruct it to go in this scenario?

As a matter of physics, you should choose a collision with a heavier vehicle that can better absorb the impact of a crash, which means programming the car to crash into the Volvo. Further, it makes sense to choose a collision with a vehicle that’s known for passenger safety, which again means crashing into the Volvo.

But physics isn’t the only thing that matters here. Programming a car to collide with any particular kind of object over another seems an awful lot like a targeting algorithm, similar to those for military weapons systems. And this takes the robot-car industry down legally and morally dangerous paths.

Even if the harm is unintended, some crash-optimization algorithms for robot cars would seem to require the deliberate and systematic discrimination of, say, large vehicles to collide into. The owners or operators of these targeted vehicles would bear this burden through no fault of their own, other than that they care about safety or need an SUV to transport a large family. Does that sound fair?

What seemed to be a sensible programming design, then, runs into ethical challenges. Volvo and other SUV owners may have a legitimate grievance against the manufacturer of robot cars that favor crashing into them over smaller cars, even if physics tells us this is for the best. (...)

The problem is starkly highlighted by the next scenario, also discussed by Noah Goodall, a research scientist at the Virginia Center for Transportation Innovation and Research. Again, imagine that an autonomous car is facing an imminent crash. It could select one of two targets to swerve into: either a motorcyclist who is wearing a helmet, or a motorcyclist who is not. What’s the right way to program the car?

In the name of crash-optimization, you should program the car to crash into whatever can best survive the collision. In the last scenario, that meant smashing into the Volvo SUV. Here, it means striking the motorcyclist who’s wearing a helmet. A good algorithm would account for the much-higher statistical odds that the biker without a helmet would die, and surely killing someone is one of the worst things auto manufacturers desperately want to avoid.
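
[ed. A rough sketch in Python of the kind of expected-harm comparison this paragraph implies; the fatality probabilities are invented for illustration and are not taken from Goodall or any manufacturer:]

    # Hypothetical crash-optimization: swerve toward whichever target has the
    # lowest estimated fatality risk. The numbers are made up for illustration.
    FATALITY_RISK = {
        "motorcyclist with helmet": 0.4,     # assumed better odds of surviving impact
        "motorcyclist without helmet": 0.8,  # assumed much worse odds
    }

    def choose_target(candidates):
        # Pick the candidate least likely to be killed by the collision.
        return min(candidates, key=lambda c: FATALITY_RISK[c])

    print(choose_target(["motorcyclist with helmet", "motorcyclist without helmet"]))
    # -> motorcyclist with helmet, under these assumed numbers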

But we can quickly see the injustice of this choice, as reasonable as it may be from a crash-optimization standpoint. By deliberately crashing into that motorcyclist, we are in effect penalizing him or her for being responsible, for wearing a helmet. Meanwhile, we are giving the other motorcyclist a free pass, even though that person is much less responsible for not wearing a helmet, which is illegal in most U.S. states.

Not only does this discrimination seem unethical, but it could also be bad policy. That crash-optimization design may encourage some motorcyclists to not wear helmets, in order to not stand out as favored targets of autonomous cars, especially if those cars become more prevalent on the road. Likewise, in the previous scenario, sales of automotive brands known for safety, such as Volvo and Mercedes-Benz, may suffer if customers want to avoid being the robot car’s target of choice.

by Patrick Lin, Wired |  Read more:
Image: US DOT

Tuesday, May 13, 2014


Bo Bae Kim, South Korea.
via:

[ed. 'Life is all about Dopeness'. Yes, that pretty much sums up my philosophy, too.]
Kanye West
via:

Enough Is Enough: Stop Wasting Money on Vitamin and Mineral Supplements

[ed. First antimicrobial wipes, then aspirin, and now multivitamins. Pretty soon even alcohol and cigarettes will be labeled bad for your health. I've always thought stressing your system a little bit was a good thing - making you stronger in the long run. Up to a point, of course.]

Three articles in this issue address the role of vitamin and mineral supplements for preventing the occurrence or progression of chronic diseases. First, Fortmann and colleagues (1) systematically reviewed trial evidence to update the U.S. Preventive Services Task Force recommendation on the efficacy of vitamin supplements for primary prevention in community-dwelling adults with no nutritional deficiencies. After reviewing 3 trials of multivitamin supplements and 24 trials of single or paired vitamins that randomly assigned more than 400,000 participants, the authors concluded that there was no clear evidence of a beneficial effect of supplements on all-cause mortality, cardiovascular disease, or cancer.

Second, Grodstein and coworkers (2) evaluated the efficacy of a daily multivitamin to prevent cognitive decline among 5947 men aged 65 years or older participating in the Physicians’ Health Study II. After 12 years of follow-up, there were no differences between the multivitamin and placebo groups in overall cognitive performance or verbal memory. Adherence to the intervention was high, and the large sample size resulted in precise estimates showing that use of a multivitamin supplement in a well-nourished elderly population did not prevent cognitive decline. Grodstein and coworkers’ findings are compatible with a recent review (3) of 12 fair- to good-quality trials that evaluated dietary supplements, including multivitamins, B vitamins, vitamins E and C, and omega-3 fatty acids, in persons with mild cognitive impairment or mild to moderate dementia. None of the supplements improved cognitive function.

Third, Lamas and associates (4) assessed the potential benefits of a high-dose, 28-component multivitamin supplement in 1708 men and women with a previous myocardial infarction participating in TACT (Trial to Assess Chelation Therapy). After a median follow-up of 4.6 years, there was no significant difference in recurrent cardiovascular events with multivitamins compared with placebo (hazard ratio, 0.89 [95% CI, 0.75 to 1.07]). The trial was limited by high rates of nonadherence and dropouts.

Other reviews and guidelines that have appraised the role of vitamin and mineral supplements in primary or secondary prevention of chronic disease have consistently found null results or possible harms (5, 6). Evidence involving tens of thousands of people randomly assigned in many clinical trials shows that β-carotene, vitamin E, and possibly high doses of vitamin A supplements increase mortality (6, 7) and that other antioxidants (6), folic acid and B vitamins (8), and multivitamin supplements (1, 5) have no clear benefit.

Despite sobering evidence of no benefit or possible harm, use of multivitamin supplements increased among U.S. adults from 30% between 1988 and 1994 to 39% between 2003 and 2006, while overall use of dietary supplements increased from 42% to 53% (9). Longitudinal and secular trends show a steady increase in multivitamin supplement use and a decline in use of some individual supplements, such as β-carotene and vitamin E. The decline in use of β-carotene and vitamin E supplements followed reports of adverse outcomes in lung cancer and all-cause mortality, respectively. In contrast, sales of multivitamins and other supplements have not been affected by major studies with null results, and the U.S. supplement industry continues to grow, reaching $28 billion in annual sales in 2010. Similar trends have been observed in the United Kingdom and in other European countries.

The large body of accumulated evidence has important public health and clinical implications. Evidence is sufficient to advise against routine supplementation, and we should translate null and negative findings into action. The message is simple: Most supplements do not prevent chronic disease or death, their use is not justified, and they should be avoided. This message is especially true for the general population with no clear evidence of micronutrient deficiencies, who represent most supplement users in the United States and in other countries (9).

by Eliseo Guallar, MD, DrPH; Saverio Stranges, MD, PhD; Cynthia Mulrow, MD, MSc, Senior Deputy Editor; Lawrence J. Appel, MD, MPH; and Edgar R. Miller III, MD, PhD, Annals of Internal Medicine |  Read more: 
Image: via:

Why You Won’t Be the Person You Expect to Be


When we remember our past selves, they seem quite different. We know how much our personalities and tastes have changed over the years. But when we look ahead, somehow we expect ourselves to stay the same, a team of psychologists said Thursday, describing research they conducted of people’s self-perceptions.

They called this phenomenon the “end of history illusion,” in which people tend to “underestimate how much they will change in the future.” According to their research, which involved more than 19,000 people ages 18 to 68, the illusion persists from teenage years into retirement.

“Middle-aged people — like me — often look back on our teenage selves with some mixture of amusement and chagrin,” said one of the authors, Daniel T. Gilbert, a psychologist at Harvard. “What we never seem to realize is that our future selves will look back and think the very same thing about us. At every age we think we’re having the last laugh, and at every age we’re wrong.”

Other psychologists said they were intrigued by the findings, published Thursday in the journal Science, and were impressed with the amount of supporting evidence. Participants were asked about their personality traits and preferences — their favorite foods, vacations, hobbies and bands — in years past and present, and then asked to make predictions for the future. Not surprisingly, the younger people in the study reported more change in the previous decade than did the older respondents.

But when asked to predict what their personalities and tastes would be like in 10 years, people of all ages consistently played down the potential changes ahead.

Thus, the typical 20-year-old woman’s predictions for her next decade were not nearly as radical as the typical 30-year-old woman’s recollection of how much she had changed in her 20s. This sort of discrepancy persisted among respondents all the way into their 60s.

And the discrepancy did not seem to be because of faulty memories, because the personality changes recalled by people jibed quite well with independent research charting how personality traits shift with age. People seemed to be much better at recalling their former selves than at imagining how much they would change in the future.

Why? Dr. Gilbert and his collaborators, Jordi Quoidbach of Harvard and Timothy D. Wilson of the University of Virginia, had a few theories, starting with the well-documented tendency of people to overestimate their own wonderfulness.

by John Tierney, NY Times |  Read more:
Image: via:

The Smooth Path to Pearl Harbor

The heated rhetoric of recent months suggests that interpreting the behavior of both China and Japan during the war years will become increasingly controversial. Meanwhile, the tensions between the two countries could destabilize the American-dominated postwar order in East Asia. We may be about to witness the most important moment of change in the relations among the powers in the region since the events that led to Pearl Harbor in 1941.

In this atmosphere, understanding the reasons for Japan’s decision to go to war in the Pacific has an urgency that goes beyond the purely historical. Fortunately, Japan 1941: Countdown to Infamy, by the Japanese historian Eri Hotta, proves an outstanding guide to that devastating decision. In lucid prose, Hotta meticulously examines a wide range of primary documents in Japanese to answer the question: Why did Japan find itself on the brink of war in December 1941?

The answer begins long before the year of the book’s title. In the 1920s, Japan gave many signs of being integrated into international society. It had taken part, albeit in a limited way, in World War I and had been one of the victorious nations at the Paris Peace Conference of 1919. Its parliamentary democracy was young but appeared promising: in 1925, a new law greatly widened the male franchise. The country had become a part of the global trading system, and Japan’s external policy was defined by the liberal internationalism of Foreign Minister Shidehara Kijuro.

Yet interwar Japan was ambivalent about its status in the world, perceiving itself as an outsider in the Western-dominated global community, and aware that the bonds among different parts of its own society were fraying. The Western victors of 1919 had refused Japanese demands for a racial equality clause as part of the peace settlement, confirming the opinions of many of Tokyo’s policymakers that they would never be treated as the peers of their white allies. At home, labor unrest and an impoverished countryside showed that Japan’s society was unstable under the surface. After the devastating earthquake in Japan’s Kanto region in 1923, riots broke out against members of the local Korean population, who were falsely accused of arson and robbery. In 1927, one of the finest writers of the era, Akutagawa Ryunosuke (whose short story “In a Grove” became the basis of Kurosawa’s film Rashomon), took his own life. In his will, he declared that he was suffering from “a vague insecurity.”

Japan’s sense of insecurity was real but by no means vague, and expressed itself most vividly in the drive toward building an empire. In the early twentieth century, Japan was the only non-Western country to have its own colonies. In 1895, Japan won a war against China and was ceded Taiwan; it gained territorial and railway rights in Manchuria in 1905 at the end of its war with Russia; and in 1910, it fully annexed Korea. The depression devastated Japan’s economy after 1929, and its leaders became obsessed with the idea of expanding further onto the Asian mainland.

Japanese civilian politics also started to fall apart as the military began to make its own policy. In 1931, two officers of the locally garrisoned Japanese Kwantung Army in the south of China set off an explosion on a railway line near the city of Shenyang (then Mukden) in Manchuria, the northeastern region of China. Within days, they prepared the way for the Japanese conquest of the entire region. Protests from a commission sent by the League of Nations had no effect other than causing Japan to quit the League.

By the mid-1930s, much of northern China was essentially under Japanese influence. Then, on July 7, 1937, a small-scale clash between local Chinese and Japanese troops at the Marco Polo Bridge in Wanping, a small village outside Beijing, escalated. The Japanese prime minister, Prince Konoe, used the clash to make further territorial demands on China. Chiang Kai-shek, leader of the Nationalist government, decided that the moment had come to confront Japan rather than appease it, and full-scale war broke out between the two sides.

Within eighteen months, China and Japan were locked in a stalemate. The Japanese quickly overran eastern China, the most prosperous and advanced part of the country. But they were unable to subdue guerrilla activity in the countryside or eliminate the Communists based in the north. Nor did Chiang’s government show any inclination to surrender: by moving to the southwestern city of Chongqing, his Chinese Nationalists dug in for a long war against Japan, desperately hoping to attract allies to their cause, but gaining little response over the long years until 1940. Yet between them the Nationalist and Communist forces had more than half a million troops in China. The United States, increasingly concerned that all Asia might fall into Japan’s hands, began to assist China and impose sanctions on Japan. At that point, desperate to resolve their worsening situation, Japan embarked on the path to the attack on Pearl Harbor on December 7, 1941, and four years of war with the United States and its allies.

Hotta makes it unambiguously clear that the blame for the war lies entirely at Japan’s door. The feeling of inevitability in Tokyo was a product of the Japanese policymakers’ own blinkered perspectives. One of the most alarming revelations in her book is the weak-mindedness of the doves and skeptics, who refused to confront the growing belligerence of most of their colleagues.

by Rana Mitter, NY Review of Books |  Read more:
Image: Heinrich Hoffmann/Ullstein Bild/Granger Collection

Monday, May 12, 2014


[ed. Yikes.]
World Hairdressing Championships - in Pictures.
Image: Arne Dedert/EPA

The Soul-Killing Structure of the Modern Office

Picture Leonardo DiCaprio heading stolidly to work at the start of two of his most alliterative movies. In Revolutionary Road, set in 1955, he’s Frank Wheeler, a fedora’d nobody who takes a train into Manhattan and the elevator to a high floor in an International-style skyscraper. He smokes at his desk, slips out for a two-martini lunch, and gets periodically summoned to the executive den where important company decisions are made. Wheeler is a cog, but he is an enviable cog—by appearances, he has achieved everything a man is supposed to want in postwar America.

In The Wolf of Wall Street, set in the late 1980s, DiCaprio is a failed broker named Jordan Belfort who follows a classified ad to a Long Island strip mall, where a group of scrappy penny-stock traders cold-call their marks and drive home in sedans. His office need not be a status symbol, since prestige for stock traders is about domination, not conformity; if you become a millionaire, who cares if you did it in the Chrysler Building or your garage?

Watch these films back-to-back, and you’ll see DiCaprio traverse the recent history of the American workplace. A white-collar job used to be a signal of ambition and stability far beyond that offered by farm, factory, or retail work. But what was once a reward has become a nonnecessity—a mere company mailing address. Highways are now stuffed with sand-colored, dark-windowed cubicle barns arranged in groups like unopened moving boxes. Barely anyone who works in this kind of place expects to spend a career in that building, but no matter where you go, you can expect variations on the same fluorescent lighting, corporate wall art, and water coolers.

In his new book, Cubed: A Secret History of the Workplace, Nikil Saval claims that 60 percent of Americans still make their money in cubicles, and 93 percent of those are unhappy to do so. But rather than indict these artless workspaces, Saval traces the intellectual history of our customizable pens to find that they’re the twisted end result of utopian thinking. “The story of white-collar work hinges on promises of freedom and uplift that have routinely been betrayed,” he writes. Above all, Cubed is a graveyard of social-engineering campaigns.

Saval, an editor at n+1, traces the modern office’s roots back to the bookkeeping operations of the early industrial revolution, where clerks in starched collars itemized stuff produced by their blue-collar counterparts. Saval describes these cramped spaces as the birthplace of a new ethic of “self-improvement.” A clerkship was a step up from manual labor, and the men lucky enough to pursue it often found themselves detached—from the close-knit worlds of farming or factory work and even from their fellow clerks, who were now just competition. In Saval’s telling, this is where middle-class anxiety began. (...)

My first office job started in the summer of 2007. I’d just graduated from college, and I took the light rail to the outer suburbs of Baltimore and walked half a mile to my desk. The McCormick & Company factory was nearby, so each day smelled like a different spice. In that half-mile (sidewalk-free, of course), I passed three other corporate campuses and rarely saw anyone coming or going. I worked in a cubicle of blue fabric and glass partitions and reported to the manager with the nearest window. For team meetings, we’d head into a room with a laminate-oak table and a whiteboard. If it was warm, I’d take lunch at a wooden picnic table in the parking lot, the only object for miles that looked like weather could affect it. In my sensible shoes and flat-front khakis, I’d listen to the murmur of Interstate 83 from just over a tree-lined highway barrier, the air smelling faintly of cumin or allspice. This was not a sad scene, but it was an empty one, and I was jolted back to it when I read Saval’s assertion that post-skyscraper office design “had to be eminently rentable. … The winners in this new American model weren’t office workers or architects, not even executives or captains of industry, but real estate speculators.”

Freelancers are expected to account for 40 percent to 50 percent of the American workforce by 2020. Saval notes a few responses to this sea change, such as “co-working” offices for multiple small companies or self-employed people to share. But he never asks why the shift is under way or why nearly a quarter of young people in America now expect to work for six or more companies. These are symptoms of the recession, and the result of baby boomers delaying retirement to make up for lost savings. But they’re also responses to businesses’ apparent feelings toward their employees. It’s not so much the blandness of corporate architecture, which can have a kind of antiseptic beauty; it’s the transience of everything in sight, from the computer-bound work to the floor plans designed so that any company can move right in when another ends its lease or bellies up. When everything is so disposable, why would anyone expect or want to stay?

by John Lingan, American Prospect |  Read more:
Image: CubeSpace/Asa Wilson

And once the storm is over you won’t remember how you made it through, how you managed to survive. You won’t even be sure, in fact, whether the storm is really over. But one thing is certain. When you come out of the storm you won’t be the same person who walked in. That’s what this storm’s all about.

Haruki Murakami, Kafka on the Shore

The Rise of Corporate Impunity

On the evening of Jan. 27, Kareem Serageldin walked out of his Times Square apartment with his brother and an old Yale roommate and took off on the four-hour drive to Philipsburg, a small town smack in the middle of Pennsylvania. Despite once earning nearly $7 million a year as an executive at Credit Suisse, Serageldin, who is 41, had always lived fairly modestly. A previous apartment, overlooking Victoria Station in London, struck his friends as a grown-up dorm room; Serageldin lived with bachelor-pad furniture and little of it — his central piece was a night stand overflowing with economics books, prospectuses and earnings reports. In the years since, his apartments served as places where he would log five or six hours of sleep before going back to work, creating and trading complex financial instruments. One friend called him an "investment-banking monk."

Serageldin's life was about to become more ascetic. Two months earlier, he sat in a Lower Manhattan courtroom adjusting and readjusting his tie as he waited for a judge to deliver his prison sentence. During the worst of the financial crisis, according to prosecutors, Serageldin had approved the concealment of hundreds of millions in losses in Credit Suisse's mortgage-backed securities portfolio. But on that November morning, the judge seemed almost torn. Serageldin lied about the value of his bank's securities — that was a crime, of course — but other bankers behaved far worse. Serageldin's former employer, for one, had revised its past financial statements to account for $2.7 billion that should have been reported. Lehman Brothers, AIG, Citigroup, Countrywide and many others had also admitted that they were in much worse shape than they initially allowed. Merrill Lynch, in particular, announced a loss of nearly $8 billion three weeks after claiming it was $4.5 billion. Serageldin's conduct was, in the judge's words, "a small piece of an overall evil climate within the bank and with many other banks." Nevertheless, after a brief pause, he eased down his gavel and sentenced Serageldin, an Egyptian-born trader who grew up in the barren pinelands of Michigan's Upper Peninsula, to 30 months in jail. Serageldin would begin serving his time at Moshannon Valley Correctional Center, in Philipsburg, where he would earn the distinction of being the only Wall Street executive sent to jail for his part in the financial crisis.

American financial history has generally unfolded as a series of booms followed by busts followed by crackdowns. After the crash of 1929, the Pecora Hearings seized upon public outrage, and the head of the New York Stock Exchange landed in prison. After the savings-and-loan scandals of the 1980s, 1,100 people were prosecuted, including top executives at many of the largest failed banks. In the '90s and early aughts, when the bursting of the Nasdaq bubble revealed widespread corporate accounting scandals, top executives from WorldCom, Enron, Qwest and Tyco, among others, went to prison.

The credit crisis of 2008 dwarfed those busts, and it was only to be expected that a similar round of crackdowns would ensue. In 2009, the Obama administration appointed Lanny Breuer to lead the Justice Department's criminal division. Breuer quickly focused on professionalizing the operation, introducing the rigor of a prestigious firm like Covington & Burling, where he had spent much of his career. He recruited elite lawyers from corporate firms and the Breu Crew, as they would later be known, were repeatedly urged by Breuer to "take it to the next level."

But the crackdown never happened. Over the past year, I've interviewed Wall Street traders, bank executives, defense lawyers and dozens of current and former prosecutors to understand why the largest man-made economic catastrophe since the Depression resulted in the jailing of a single investment banker — one who happened to be several rungs from the corporate suite at a second-tier financial institution. Many assume that the federal authorities simply lacked the guts to go after powerful Wall Street bankers, but that obscures a far more complicated dynamic. During the past decade, the Justice Department suffered a series of corporate prosecutorial fiascos, which led to critical changes in how it approached white-collar crime. The department began to focus on reaching settlements rather than seeking prison sentences, which over time unintentionally deprived its ranks of the experience needed to win trials against the most formidable law firms. By the time Serageldin committed his crime, Justice Department leadership, as well as prosecutors in integral United States attorney's offices, were de-emphasizing complicated financial cases — even neglecting clues that suggested that Lehman executives knew more than they were letting on about their bank's liquidity problem. In the mid-'90s, white-collar prosecutions represented an average of 17.6 percent of all federal cases. In the three years ending in 2012, the share was 9.4 percent. (Read the Department of Justice's response to ProPublica's inquiries.)

After the evening drive to Philipsburg, Serageldin checked into a motel. He didn't need to report to Moshannon Valley until 2 p.m. the next day, but he was advised to show up early to get a head start on his processing. Moshannon is a low-security facility, with controlled prisoner movements, a bit tougher than the one portrayed on "Orange Is the New Black." Friends of Serageldin's worried about the violence; he was counseled to keep his head down and never change the channel on the TV no matter who seemed to be watching. Serageldin, who is tall and thin with a regal bearing, was largely preoccupied with how, after a decade of 18-hour trading days, he would pass the time. He was planning on doing math-problem sets and studying economics. He had delayed marrying his longtime girlfriend, a private-equity executive in London, but the plan was for her to visit him frequently.

Other bankers have spoken out about feeling unfairly maligned by the financial crisis, pegged as "banksters" by politicians and commentators. But Serageldin was contrite. "I don't feel angry," he told me in early winter. "I made a mistake. I take responsibility. I'm ready to pay my debt to society." Still, the fact that the only top banker to go to jail for his role in the crisis was neither a mortgage executive (who created toxic products) nor the C.E.O. of a bank (who peddled them) is something of a paradox, but it's one that reflects the many paradoxes that got us here in the first place.

by Jesse Eisinger, Pro Publica |  Read more:
Image: Javier Jaen

Daily Aspirin Regimen Not Safe for Everyone: FDA

Taking an aspirin a day can help prevent heart attack and stroke in people who have suffered such health crises in the past, but not in people who have never had heart problems, according to the U.S. Food and Drug Administration.

"Since the 1990s, clinical data have shown that in people who have experienced a heart attack, stroke or who have a disease of the blood vessels in the heart, a daily low dose of aspirin can help prevent a reoccurrence," Dr. Robert Temple, deputy director for clinical science at the FDA, said in an agency news release.

A low-dose tablet contains 80 milligrams (mg) of aspirin, compared with 325 mg in a regular strength tablet.

However, an analysis of data from major studies does not support the use of aspirin as a preventive medicine in people who have not had a heart attack, stroke or heart problems. In these people, aspirin provides no benefits and puts them at risk for side effects such as dangerous bleeding in the brain or stomach, the FDA said.

by Robert Priedt, WebMD | Read more:
Image: uncredited