Wednesday, August 12, 2015

The Spike: What Lies Behind the New Heroin Epidemic?

[ed. One city. Millions of syringes.]

Heroin use, which had been relatively stable through most of the decade, began to spike in the late 2000s throughout the United States. Cheap and plentiful, the drug is a staple commodity in an underground market that is as big as the globe and as intimate as your arm. And while heroin has enjoyed widespread popularity since the end of World War II, demand has soared in the past five years as jonesing prescription opioid addicts like Lance have migrated to the street. (...)

Researchers trace the rise in heroin use, in part, to the doctor’s office. In the late 1990s, there was a shift in health-care philosophy that emphasized treating patients’ pain rather than just the underlying ailments causing it. Opioids that had previously been restricted to conditions like cancer or physical trauma suddenly became widely available for more broadly defined problems like chronic pain. At the same time, Purdue Pharma introduced and aggressively marketed OxyContin (a brand-name formulation of oxycodone), a painkiller designed to gradually release opioids into the body.

As the sheer amount of opioids prescribed to Americans suddenly jumped, the drugs naturally found their way onto the street.

Users quickly figured out how to circumvent the drug’s time-delay feature, making oxy the vehicle of choice for people who wanted to get high on prescription drugs. “The Gucci, the drug that people wanted,”—people like Lance—“was OxyContin,” says Banta-Green. (...)

Withdrawal from opioids isn’t lethal, as it can be with alcohol or benzodiazepines such as Valium, but it is deeply unpleasant, particularly for people with the kind of trauma or poverty that might drive them to drug abuse in the first place. Medication-assisted detox can ease the withdrawal by manipulating the brain receptors that trigger cravings. But without meds, a seasoned opioid addict can expect perhaps a week of snot, sweating, vomiting, nausea, and hot and cold flashes, plus—and often more importantly—the resurfacing of painful emotions that had previously been repressed by their drug use. Some addicts do manage to white-knuckle their way out of opioid addiction, but many—separated from friends and resources—are overwhelmed by the painful emptiness of their sober lives. Others, recognizing themselves as “addicts” who are a scourge on their friends and family, fall into a cycle of despair that heroin is particularly good at feeding.

For those on the frontlines of the new heroin epidemic, it’s that loss of hope that is nearly as dangerous as the drug itself. (...)

The holy grail of Murphy’s work, he says, is to reverse that exclusion—to welcome drug users back into the human community. “Our job ... is to convince them that they’re worth something,” he says, because “then you will make different choices” than someone who revels in self-destruction. So the Alliance tries to meet users where they’re at instead of telling them where they should be. Sometimes this looks like the abstinence that Lance tried and failed to achieve; other times, it’s finding a way to stabilize their drug use.

Murphy’s motto, he says, is “Be the best damn drug user that you can be.”

He shows me the Alliance’s supply room, where boxes of syringes are stacked in towering brown columns, piled to the ceiling and packed so deep there’s barely room for us to shimmy between them.

The Alliance gives out a lot of syringes—about 3.2 million per year to King County residents, says Murphy, and collects back as many as 5 million used ones. About a million of the former go to suburban users, he says—a demographic that he saw rapidly grow starting around 2010. That would have been around the same time that Lance, and thousands of others, began their migration from Big Pharma to black tar.

“For me, it was a really sad and stressful time,” says Murphy. For a couple of months, the phone at the Alliance was ringing off the hook from prescription users asking for help. “We were getting multiple calls every week,” says Murphy, from frightened suburbanites trying to figure out how to buy heroin. Callers would say “I’m so scared” and “You gotta help me.” But Murphy couldn’t: The Alliance doesn’t hook people up with drugs. “It was hard to hear all these young folks in this really chaotic and traumatic experience,” he says. “We saw these folks quickly change into injection drug users, sometimes on the streets, sometimes in the suburbs.” Stable drug use, says Murphy, was transforming into unstable drug use, and quality-controlled drugs were being replaced by heroin off the street. “Our delivery service really skyrocketed, to where in the Eastside and North King County, we do over a million syringes a year just delivering to the suburbs. The suburbs have just as much injection drug use as the city.

“The average drug user,” he says, “was much younger, and much more, let’s say, lack of city smarts or street smarts. It was really sad, that whole story and that generation. There wasn’t really a lot of older drug users to help teach them. They were left on their own.”

None of this seems fair to Murphy. “We give people OxyContin,” he says, referring to society at large, “which is essentially legal heroin, and then we tell them that they can’t have it anymore and the only way they can get it is street heroin. We also let drug cartels be our FDA on what’s quality control. We allow people to ingest horrible cuts of drugs, with people getting horrible allergic reactions to stuff it’s [mixed] with.” Criminalization, he says, only drives people further into addiction, cutting them off from the social bonds that can help addicts to cope with undiluted reality.

“It’s not that hard to figure out that beating a human being up isn’t helpful,” says Murphy. “It’s not that hard to figure out that stripping someone of their rights and dignities by taking them to jail is a detriment to society.”

by Casey Jaywork, Seattle Weekly | Read more:
Image: Barry Blankenship

Tuesday, August 11, 2015

A World Without Work

For centuries, experts have predicted that machines would make workers obsolete. That moment may finally be arriving. Could that be a good thing?

In the past few years, even as the United States has pulled itself partway out of the jobs hole created by the Great Recession, some economists and technologists have warned that the economy is near a tipping point. When they peer deeply into labor-market data, they see troubling signs, masked for now by a cyclical recovery. And when they look up from their spreadsheets, they see automation high and low—robots in the operating room and behind the fast-food counter. They imagine self-driving cars snaking through the streets and Amazon drones dotting the sky, replacing millions of drivers, warehouse stockers, and retail workers. They observe that the capabilities of machines—already formidable—continue to expand exponentially, while our own remain the same. And they wonder: Is any job truly safe?

Futurists and science-fiction writers have at times looked forward to machines’ workplace takeover with a kind of giddy excitement, imagining the banishment of drudgery and its replacement by expansive leisure and almost limitless personal freedom. And make no mistake: if the capabilities of computers continue to multiply while the price of computing continues to decline, that will mean a great many of life’s necessities and luxuries will become ever cheaper, and it will mean great wealth—at least when aggregated up to the level of the national economy.

But even leaving aside questions of how to distribute that wealth, the widespread disappearance of work would usher in a social transformation unlike any we’ve seen. If the labor scholar John Russo is right, then saving work is more important than saving any particular job. Industriousness has served as America’s unofficial religion since its founding. The sanctity and preeminence of work lie at the heart of the country’s politics, economics, and social interactions. What might happen if work goes away? (...)

What does the “end of work” mean, exactly? It does not mean the imminence of total unemployment, nor is the United States remotely likely to face, say, 30 or 50 percent unemployment within the next decade. Rather, technology could exert a slow but continual downward pressure on the value and availability of work—that is, on wages and on the share of prime-age workers with full-time jobs. Eventually, by degrees, that could create a new normal, where the expectation that work will be a central feature of adult life dissipates for a significant portion of society.

After 300 years of people crying wolf, there are now three broad reasons to take seriously the argument that the beast is at the door: the ongoing triumph of capital over labor, the quiet demise of the working man, and the impressive dexterity of information technology.

Labor’s losses. One of the first things we might expect to see in a period of technological displacement is the diminishment of human labor as a driver of economic growth. In fact, signs that this is happening have been present for quite some time. The share of U.S. economic output that’s paid out in wages fell steadily in the 1980s, reversed some of its losses in the ’90s, and then continued falling after 2000, accelerating during the Great Recession. It now stands at its lowest level since the government started keeping track in the mid‑20th century.

A number of theories have been advanced to explain this phenomenon, including globalization and its accompanying loss of bargaining power for some workers. But Loukas Karabarbounis and Brent Neiman, economists at the University of Chicago, have estimated that almost half of the decline is the result of businesses’ replacing workers with computers and software. In 1964, the nation’s most valuable company, AT&T, was worth $267 billion in today’s dollars and employed 758,611 people. Today’s telecommunications giant, Google, is worth $370 billion but has only about 55,000 employees—less than a tenth the size of AT&T’s workforce in its heyday.
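A back-of-the-envelope check, using only the figures above, shows how lopsided that comparison is:

```latex
\frac{55{,}000}{758{,}611} \approx 0.07
\quad \text{(Google's workforce is roughly 7\% the size of AT\&T's at its peak)}

\frac{\$267\text{B}}{758{,}611} \approx \$350\text{k of market value per AT\&T employee},
\qquad
\frac{\$370\text{B}}{55{,}000} \approx \$6.7\text{M per Google employee}
```

Nearly twenty times the market value per worker is exactly the pattern you would expect if capital were displacing labor.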

The spread of nonworking men and underemployed youth. The share of prime-age Americans (25 to 54 years old) who are working has been trending down since 2000. Among men, the decline began even earlier: the share of prime-age men who are neither working nor looking for work has doubled since the late 1970s, and has increased as much throughout the recovery as it did during the Great Recession itself. All in all, about one in six prime-age men today is either unemployed or out of the workforce altogether. This is what the economist Tyler Cowen calls “the key statistic” for understanding the spreading rot in the American workforce. Conventional wisdom has long held that under normal economic conditions, men in this age group—at the peak of their abilities and less likely than women to be primary caregivers for children—should almost all be working. Yet fewer and fewer are.

Economists cannot say for certain why men are turning away from work, but one explanation is that technological change has helped eliminate the jobs for which many are best suited. Since 2000, the number of manufacturing jobs has fallen by almost 5 million, or about 30 percent.

Young people just coming onto the job market are also struggling—and by many measures have been for years. Six years into the recovery, the share of recent college grads who are “underemployed” (in jobs that historically haven’t required a degree) is still higher than it was in 2007—or, for that matter, 2000. And the supply of these “non-college jobs” is shifting away from high-paying occupations, such as electrician, toward low-wage service jobs, such as waiter. More people are pursuing higher education, but the real wages of recent college graduates have fallen by 7.7 percent since 2000. In the biggest picture, the job market appears to be requiring more and more preparation for a lower and lower starting wage. The distorting effect of the Great Recession should make us cautious about overinterpreting these trends, but most began before the recession, and they do not seem to speak encouragingly about the future of work.

The shrewdness of software. One common objection to the idea that technology will permanently displace huge numbers of workers is that new gadgets, like self-checkout kiosks at drugstores, have failed to fully displace their human counterparts, like cashiers. But employers typically take years to embrace new machines at the expense of workers. The robotics revolution began in factories in the 1960s and ’70s, but manufacturing employment kept rising until 1980, and then collapsed during the subsequent recessions. Likewise, “the personal computer existed in the ’80s,” says Henry Siu, an economist at the University of British Columbia, “but you don’t see any effect on office and administrative-support jobs until the 1990s, and then suddenly, in the last recession, it’s huge. So today you’ve got checkout screens and the promise of driverless cars, flying drones, and little warehouse robots. We know that these tasks can be done by machines rather than people. But we may not see the effect until the next recession, or the recession after that.”

Some observers say our humanity is a moat that machines cannot cross. They believe people’s capacity for compassion, deep understanding, and creativity is inimitable. But as Erik Brynjolfsson and Andrew McAfee have argued in their book The Second Machine Age, computers are so dexterous that predicting their application 10 years from now is almost impossible. Who could have guessed in 2005, two years before the iPhone was released, that smartphones would threaten hotel jobs within the decade, by helping homeowners rent out their apartments and houses to strangers on Airbnb? Or that the company behind the most popular search engine would design a self-driving car that could soon threaten driving, the most common occupation among American men?

In 2013, Oxford University researchers forecast that machines might be able to perform half of all U.S. jobs in the next two decades. The projection was audacious, but in at least a few cases, it probably didn’t go far enough. For example, the authors named psychologist as one of the occupations least likely to be “computerisable.” But some research suggests that people are more honest in therapy sessions when they believe they are confessing their troubles to a computer, because a machine can’t pass moral judgment. Google and WebMD already may be answering questions once reserved for one’s therapist. This doesn’t prove that psychologists are going the way of the textile worker. Rather, it shows how easily computers can encroach on areas previously considered “for humans only.”

by Derek Thompson, The Atlantic |  Read more:
Image: Adam Levey

Design Thinking Comes of Age

There’s a shift under way in large organizations, one that puts design much closer to the center of the enterprise. But the shift isn’t about aesthetics. It’s about applying the principles of design to the way people work.

This new approach is in large part a response to the increasing complexity of modern technology and modern business. That complexity takes many forms. Sometimes software is at the center of a product and needs to be integrated with hardware (itself a complex task) and made intuitive and simple from the user’s point of view (another difficult challenge). Sometimes the problem being tackled is itself multi-faceted: Think about how much tougher it is to reinvent a health care delivery system than to design a shoe. And sometimes the business environment is so volatile that a company must experiment with multiple paths in order to survive.

I could list a dozen other types of complexity that businesses grapple with every day. But here’s what they all have in common: People need help making sense of them. Specifically, people need their interactions with technologies and other complex systems to be simple, intuitive, and pleasurable.

A set of principles collectively known as design thinking—empathy with users, a discipline of prototyping, and tolerance for failure chief among them—is the best tool we have for creating those kinds of interactions and developing a responsive, flexible organizational culture.

What Is a Design-Centric Culture?

If you were around during the late-1990s dot-com craze, you may think of designers as 20-somethings shooting Nerf darts across an office that looks more like a bar. Because design has historically been equated with aesthetics and craft, designers have been celebrated as artistic savants. But a design-centric culture transcends design as a role, imparting a set of principles to all people who help bring ideas to life. Let’s consider those principles.

Focus on users’ experiences, especially their emotional ones.

To build empathy with users, a design-centric organization empowers employees to observe behavior and draw conclusions about what people want and need. Those conclusions are tremendously hard to express in quantitative language. Instead, organizations that “get” design use emotional language (words that concern desires, aspirations, engagement, and experience) to describe products and users. Team members discuss the emotional resonance of a value proposition as much as they discuss utility and product requirements.

A traditional value proposition is a promise of utility: If you buy a Lexus, the automaker promises that you will receive safe and comfortable transportation in a well-designed high-performance vehicle. An emotional value proposition is a promise of feeling: If you buy a Lexus, the automaker promises that you will feel pampered, luxurious, and affluent. In design-centric organizations, emotionally charged language isn’t denigrated as thin, silly, or biased. Strategic conversations in those companies frequently address how a business decision or a market trajectory will positively influence users’ experiences and often acknowledge only implicitly that well-designed offerings contribute to financial success.

The focus on great experiences isn’t limited to product designers, marketers, and strategists—it infuses every customer-facing function. Take finance. Typically, its only contact with users is through invoices and payment systems, which are designed for internal business optimization or predetermined “customer requirements.” But those systems are touch points that shape a customer’s impression of the company. In a culture focused on customer experience, financial touch points are designed around users’ needs rather than internal operational efficiencies.

by Jon Kolko, HBR |  Read more:
Image: via:

The Happiness Machine

How Google became such a great place to work.

A few years ago, Google’s human resources department noticed a problem: A lot of women were leaving the company. Like the majority of Silicon Valley software firms, Google is staffed mostly by men, and executives have long made it a priority to increase the number of female employees. But the fact that women were leaving Google wasn’t just a gender equity problem—it was affecting the bottom line. Unlike in most sectors of the economy, the market for top-notch tech employees is stretched incredibly thin. Google fights for potential workers with Apple, Facebook, Amazon, Microsoft, and hordes of startups, so every employee’s departure triggers a costly, time-consuming recruiting process.

Then there was the happiness problem. Google monitors its employees’ well-being to a degree that can seem absurd to those who work outside Mountain View. The attrition rate among women suggested there might be something amiss in the company’s happiness machine. And if there’s any sign that joy among Googlers is on the wane, it’s the Google HR department’s mission to figure out why and how to fix it.

Google calls its HR department People Operations, though most people in the firm shorten it to POPS. The group is headed by Laszlo Bock, a trim, soft-spoken 40-year-old who came to Google six years ago. Bock says that when POPS looked into Google’s woman problem, it found it was really a new mother problem: Women who had recently given birth were leaving at twice Google’s average departure rate. At the time, Google offered an industry-standard maternity leave plan. After a woman gave birth, she got 12 weeks of paid time off. For all other new parents in its California offices, but not for its workers outside the state, the company offered seven paid weeks of leave.

So in 2007, Bock changed the plan. New mothers would now get five months off at full pay and full benefits, and they were allowed to split up that time however they wished, including taking some of that time off just before their due date. If she likes, a new mother can take a couple months off after birth, return part time for a while, and then take the balance of her time off when her baby is older. Plus, Google began offering the seven weeks of new-parent leave to all its workers around the world.

Google’s lavish maternity and paternity leave plans probably don’t surprise you. The company’s swank perks—free gourmet food, on-site laundry, Wi-Fi commuting shuttles—are legendary in the corporate world, and they’ve driven a culture of ever-increasing luxuries for tech workers. This week, for the fourth consecutive year, Google was named the best company to work for by Fortune magazine; Microsoft was No. 75, while Apple, Amazon, and Facebook didn’t even make the list.

At times Google’s largesse can sound excessive—noble but wasteful from a bottom-line perspective. In August, for example, Forbes disclosed one previously unannounced Google perk—when an employee dies, the company pays his spouse or domestic partner half of his salary for a decade. Yet it would be a mistake to conclude that Google doles out such perks just to be nice. POPS rigorously monitors a slew of data about how employees respond to benefits, and it rarely throws money away. The five-month maternity leave plan, for instance, was a winner for the company. After it went into place, Google’s attrition rate for new mothers dropped to the average rate for the rest of the firm. “A 50 percent reduction—it was enormous!” Bock says. What’s more, happiness—as measured by Googlegeist, a lengthy annual survey of employees—rose as well. Best of all for the company, the new leave policy was cost-effective. Bock says that if you factor in the savings in recruitment costs, granting mothers five months of leave doesn’t cost Google any more money.
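The arithmetic behind that “50 percent” follows directly from the numbers earlier in the piece: new mothers had been leaving at twice the company-wide rate, and after the change they left at that rate.

```latex
\text{Let } r = \text{the Google-wide attrition rate. Before: } 2r;\ \text{after: } r
\quad\Rightarrow\quad
\frac{2r - r}{2r} = \frac{1}{2} = 50\%\ \text{reduction.}
```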

The change in maternity leave exemplifies how POPS has helped Google become the country’s best employer. Under Bock, Google’s HR department functions more like a rigorous science lab than the pesky hall monitor most of us picture when we think of HR. At the heart of POPS is a sophisticated employee-data tracking program, an effort to gain empirical certainty about every aspect of Google’s workers’ lives—not just the right level of pay and benefits but also such trivial-sounding details as the optimal size and shape of the cafeteria tables and the length of the lunch lines.

In the last couple years, Google has even hired social scientists to study the organization. The scientists—part of a group known as the PiLab, short for People & Innovation Lab—run dozens of experiments on employees in an effort to answer questions about the best way to manage a large firm. How often should you remind people to contribute to their 401(k)s, and what tone should you use? Do successful middle managers have certain skills in common—and can you teach those skills to unsuccessful managers? Or, for that matter, do managers even matter—can you organize a company without them? And say you want to give someone a raise—how should you do it in a way that maximizes his happiness? Should you give him a cash bonus? Stock? A raise? More time off?
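The article doesn’t say how PiLab crunches these experiments, but the natural tool for a question like the 401(k) reminder is a randomized trial followed by a comparison of proportions between the groups. Here is a minimal sketch in Python—the scenario and every number in it are invented for illustration:

```python
from statistics import NormalDist

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = (pooled * (1 - pooled) * (1 / n_a + 1 / n_b)) ** 0.5
    z = (p_a - p_b) / se
    p_value = 2 * (1 - NormalDist().cdf(abs(z)))  # two-sided
    return z, p_value

# Hypothetical experiment: two tones of 401(k) reminder email, outcome =
# how many employees raised their contribution within a month.
z, p = two_proportion_ztest(success_a=340, n_a=2000,   # friendly tone
                            success_b=290, n_b=2000)   # neutral tone
print(f"z = {z:.2f}, p = {p:.4f}")  # small p suggests the tone matters
```

Swap in attrition, survey-measured happiness, or satisfaction with a raise as the outcome, and the same test covers most of the questions above.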

by Farhad Manjoo, Slate |  Read more:
Image: Google

Algorithmic Trading: The Play-at-Home Version

[ed. I wonder how his morning went today.]

After more than 100 hours of coding over three months, Mike Soule was finally ready to switch on his project. He didn’t know what to expect. If things went right, he could be on his way to financial success. If things went wrong, he could lose his savings.

His creation wasn’t a new mobile app or e-commerce store. It was a computer program that would buy and sell currencies 24 hours a day, five days a week.

DIY’s newest frontier is algorithmic trading. Spurred on by their own curiosity and coached by hobbyist groups and online courses, thousands of day-trading tinkerers are writing up their own trading software and turning it loose on the markets.
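The article never shows Mr. Soule’s program, but the kind of strategy hobbyists typically start with is a simple rule-based signal such as a moving-average crossover. A minimal sketch, with synthetic prices and illustrative parameters—a live system would connect signal logic like this to a broker’s API:

```python
import math

def moving_average(prices, window):
    return sum(prices[-window:]) / window

def crossover_signal(prices, fast=10, slow=30):
    """Return 'buy', 'sell', or 'hold' from a fast/slow moving-average crossover."""
    if len(prices) < slow + 1:
        return "hold"  # not enough history for both windows, now and one tick ago
    fast_now, slow_now = moving_average(prices, fast), moving_average(prices, slow)
    fast_prev = moving_average(prices[:-1], fast)
    slow_prev = moving_average(prices[:-1], slow)
    if fast_prev <= slow_prev and fast_now > slow_now:
        return "buy"   # fast average just crossed above the slow one
    if fast_prev >= slow_prev and fast_now < slow_now:
        return "sell"  # fast average just crossed below the slow one
    return "hold"

# Illustrative run on made-up prices: a gentle uptrend plus a cycle.
prices = [100 + 5 * math.sin(i / 8) + 0.05 * i for i in range(200)]
for t in range(30, len(prices)):
    signal = crossover_signal(prices[: t + 1])
    if signal != "hold":
        print(f"t={t:3d}  {signal:4s}  price={prices[t]:.2f}")
```

The strategy itself is deliberately naive; the point is the shape shared by nearly all such systems—a price feed, a signal function, and an order-execution step.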

“It’s definitely one of those things where you are like, ‘Is this going to work?’” said Mr. Soule, who is a student at the University of Nevada, Reno, and a network administrator at Tahoe Forest Health System. “When it finally started trading, wow, wow. I don’t know if that is what I expected, but I did it.”

Interactive Brokers Group Inc. actively solicits at-home algorithmic traders with services to support their transactions. YouTube videos from traders and companies explaining the basics have tens of thousands of views. More than 170,000 people enrolled in a popular online course, “Computational Investing,” taught by Georgia Institute of Technology professor Tucker Balch. Only about 5% completed it, but at an algorithmic trading event in New York in April, three people asked him for his autograph.

“College professors very rarely get asked for their autographs,” Mr. Balch said.

To learn more about the fundamentals of algorithmic trading, Alexander Sommer watched Mr. Balch’s video lectures.

Now, every weekday morning before work, Mr. Sommer wakes up in Vienna to an email summarizing his coming trades for the day. The email is generated by his custom-built trading platform, which automatically places trades throughout the day using the algorithms he and his three trading partners developed. The four jointly trade about $200,000 of their own money on S&P 500 and Nasdaq Composite stocks.

by Austen Hufford, WSJ |  Read more:
Image: Severin Koller

Monday, August 10, 2015


Dalibor Levíček, The world according to assholes with spray paint
via:

How to Flirt Best: The Perceived Effectiveness of Flirtation Techniques

Flirting is considered a universal and essential aspect of human interaction (Eibl-Eibesfeldt & Hass, 1967; Luscombe, 2008). Individuals, both married and single, flirt. Additionally, flirtation can be used for either courtship initiation or quasi-courtship purposes. (...)

Men and women alike use nonverbal signals, such as direct glancing, space-maximization movements, and automanipulations, in relevant mate-selection contexts (Renninger et al., 2004). The nonverbal courtship signaling involved in flirtation serves a useful purpose. Women use subtle indicators of male interest to help them pace the course of any potential relationship while they assess a man’s willingness and ability to donate resources. Therefore, the task for women is to express enough interest to elicit courtship behavior, but not to elicit a level of interest that leads a man to skip courtship behavior, while men attempt to display their status, health, strength, and intelligence in a desired, unintimidating way. From an evolutionary perspective flirting can be thought of as a product of our evolved mate acquisition adaptations. (...)

Since sexual access is crucial for male mate selection and securing a commitment is most important for women’s mate selection, one might expect a woman’s actions suggestive of sexual accessibility to be the most effective way to flirt with a man. Conversely, since women typically desire a long-term commitment, a man’s actions suggestive of a willingness to commit may be the most effective way for a man to flirt with a woman. Yet recent research has not examined this question, leaving a void in the attraction literature. Ascertaining which flirtatious actions are most effective would strengthen the knowledge base on both flirtation and human attraction more broadly. And since research grounded in evolutionary theory can account for many aspects of mate attraction but has not examined the effectiveness of overt flirtation tactics, it is important to determine whether the theory can also account for the overt tactics that work best with members of the opposite sex.

T. Joel Wade and Jennifer Slemp, Interpersona | Read more:
Image: via:
h/t new shelton wet/dry

Sunday, August 9, 2015


Jonas Wood, Interior with Fireplace
via:

Tinder and the Dawn of the “Dating Apocalypse”

[ed. The feeling I get from reading this is simply 'ick'. Maybe the story is a bit hyperbolic, and maybe there are alternatives, but there's no denying relationships are more technology-driven these days, and not just for dating (Facebook, Twitter, Instagram, Snapchat, all "social media" really) -- all powered by the omnipresent smartphone. Is that good or bad? Does it even matter?]

It’s a balmy night in Manhattan’s financial district, and at a sports bar called Stout, everyone is Tindering. The tables are filled with young women and men who’ve been chasing money and deals on Wall Street all day, and now they’re out looking for hookups. Everyone is drinking, peering into their screens and swiping on the faces of strangers they may have sex with later that evening. Or not. “Ew, this guy has Dad bod,” a young woman says of a potential match, swiping left. Her friends smirk, not looking up.

“Tinder sucks,” they say. But they don’t stop swiping.

At a booth in the back, three handsome twentysomething guys in button-downs are having beers. They are Dan, Alex, and Marty, budding investment bankers at the same financial firm, which recruited Alex and Marty straight from an Ivy League campus. (Names and some identifying details have been changed for this story.) When asked if they’ve been arranging dates on the apps they’ve been swiping at, all say not one date, but two or three: “You can’t be stuck in one lane … There’s always something better.” “If you had a reservation somewhere and then a table at Per Se opened up, you’d want to go there,” Alex offers.

“Guys view everything as a competition,” he elaborates with his deep, reassuring voice. “Who’s slept with the best, hottest girls?” With these dating apps, he says, “you’re always sort of prowling. You could talk to two or three girls at a bar and pick the best one, or you can swipe a couple hundred people a day—the sample size is so much larger. It’s setting up two or three Tinder dates a week and, chances are, sleeping with all of them, so you could rack up 100 girls you’ve slept with in a year.”

He says that he himself has slept with five different women he met on Tinder—“Tinderellas,” the guys call them—in the last eight days. Dan and Marty, also Alex’s roommates in a shiny high-rise apartment building near Wall Street, can vouch for that. In fact, they can remember whom Alex has slept with in the past week more readily than he can.

“Brittany, Morgan, Amber,” Marty says, counting on his fingers. “Oh, and the Russian—Ukrainian?”

“Ukrainian,” Alex confirms. “She works at—” He says the name of a high-end art auction house. Asked what these women are like, he shrugs. “I could offer a résumé, but that’s about it … Works at J. Crew; senior at Parsons; junior at Pace; works in finance … ”

“We don’t know what the girls are like,” Marty says.

“And they don’t know us,” says Alex.

And yet a lack of an intimate knowledge of his potential sex partners never presents him with an obstacle to physical intimacy, Alex says. Alex, his friends agree, is a Tinder King, a young man of such deft “text game”—“That’s the ability to actually convince someone to do something over text,” Marty explains—that he is able to entice young women into his bed on the basis of a few text exchanges, while letting them know up front he is not interested in having a relationship.

“How does he do it?,” Marty asks, blinking. “This guy’s got a talent.”

But Marty, who prefers Hinge to Tinder (“Hinge is my thing”), is no slouch at “racking up girls.” He says he’s slept with 30 to 40 women in the last year: “I sort of play that I could be a boyfriend kind of guy,” in order to win them over, “but then they start wanting me to care more … and I just don’t.”

“Dude, that’s not cool,” Alex chides in his warm way. “I always make a point of disclosing I’m not looking for anything serious. I just wanna hang out, be friends, see what happens … If I were ever in a court of law I could point to the transcript.” But something about the whole scenario seems to bother him, despite all his mild-mannered bravado. “I think to an extent it is, like, sinister,” he says, “ ‘cause I know that the average girl will think that there’s a chance that she can turn the tables. If I were like, Hey, I just wanna bone, very few people would want to meet up with you …

“Do you think this culture is misogynistic?” he asks lightly. (...)

Mobile dating went mainstream about five years ago; by 2012 it was overtaking online dating. In February, one study reported there were nearly 100 million people—perhaps 50 million on Tinder alone—using their phones as a sort of all-day, every-day, handheld singles club, where they might find a sex partner as easily as they’d find a cheap flight to Florida. “It’s like ordering Seamless,” says Dan, the investment banker, referring to the online food-delivery service. “But you’re ordering a person.”

The comparison to online shopping seems an apt one. Dating apps are the free-market economy come to sex. The innovation of Tinder was the swipe—the flick of a finger on a picture, no more elaborate profiles necessary and no more fear of rejection; users only know whether they’ve been approved, never when they’ve been discarded. OkCupid soon adopted the function. Hinge, which allows for more information about a match’s circle of friends through Facebook, and Happn, which enables G.P.S. tracking to show whether matches have recently “crossed paths,” use it too. It’s telling that swiping has been jocularly incorporated into advertisements for various products, a nod to the notion that, online, the act of choosing consumer brands and sex partners has become interchangeable.

“It’s instant gratification,” says Jason, 26, a Brooklyn photographer, “and a validation of your own attractiveness by just, like, swiping your thumb on an app. You see some pretty girl and you swipe and it’s, like, oh, she thinks you’re attractive too, so it’s really addicting, and you just find yourself mindlessly doing it.” “Sex has become so easy,” says John, 26, a marketing executive in New York. “I can go on my phone right now and no doubt I can find someone I can have sex with this evening, probably before midnight.”

by Nancy Jo Sales, Vanity Fair |  Read more:
Image: Justin Bishop

Lindsey Carr, Status Soup
via:

Hanging daicho account book, Meiji period
via:

What is Phenomenology?

Phenomenology is commonly understood in either of two ways: as a disciplinary field in philosophy, or as a movement in the history of philosophy.

The discipline of phenomenology may be defined initially as the study of structures of experience, or consciousness. Literally, phenomenology is the study of “phenomena”: appearances of things, or things as they appear in our experience, or the ways we experience things, thus the meanings things have in our experience. Phenomenology studies conscious experience as experienced from the subjective or first person point of view. This field of philosophy is then to be distinguished from, and related to, the other main fields of philosophy: ontology (the study of being or what is), epistemology (the study of knowledge), logic (the study of valid reasoning), ethics (the study of right and wrong action), etc.

The historical movement of phenomenology is the philosophical tradition launched in the first half of the 20th century by Edmund Husserl, Martin Heidegger, Maurice Merleau-Ponty, Jean-Paul Sartre, et al. In that movement, the discipline of phenomenology was prized as the proper foundation of all philosophy — as opposed, say, to ethics or metaphysics or epistemology. The methods and characterization of the discipline were widely debated by Husserl and his successors, and these debates continue to the present day. (The definition of phenomenology offered above will thus be debatable, for example, by Heideggerians, but it remains the starting point in characterizing the discipline.)

In recent philosophy of mind, the term “phenomenology” is often restricted to the characterization of sensory qualities of seeing, hearing, etc.: what it is like to have sensations of various kinds. However, our experience is normally much richer in content than mere sensation. Accordingly, in the phenomenological tradition, phenomenology is given a much wider range, addressing the meaning things have in our experience, notably, the significance of objects, events, tools, the flow of time, the self, and others, as these things arise and are experienced in our “life-world”.

Phenomenology as a discipline has been central to the tradition of continental European philosophy throughout the 20th century, while philosophy of mind has evolved in the Austro-Anglo-American tradition of analytic philosophy that developed throughout the 20th century. Yet the fundamental character of our mental activity is pursued in overlapping ways within these two traditions. Accordingly, the perspective on phenomenology drawn in this article will accommodate both traditions. The main concern here will be to characterize the discipline of phenomenology, in a contemporary purview, while also highlighting the historical tradition that brought the discipline into its own.

Basically, phenomenology studies the structure of various types of experience ranging from perception, thought, memory, imagination, emotion, desire, and volition to bodily awareness, embodied action, and social activity, including linguistic activity. The structure of these forms of experience typically involves what Husserl called “intentionality”, that is, the directedness of experience toward things in the world, the property of consciousness that it is a consciousness of or about something. According to classical Husserlian phenomenology, our experience is directed toward — represents or “intends” — things only through particular concepts, thoughts, ideas, images, etc. These make up the meaning or content of a given experience, and are distinct from the things they present or mean.

The basic intentional structure of consciousness, we find in reflection or analysis, involves further forms of experience. Thus, phenomenology develops a complex account of temporal awareness (within the stream of consciousness), spatial awareness (notably in perception), attention (distinguishing focal and marginal or “horizonal” awareness), awareness of one's own experience (self-consciousness, in one sense), self-awareness (awareness-of-oneself), the self in different roles (as thinking, acting, etc.), embodied action (including kinesthetic awareness of one's movement), purpose or intention in action (more or less explicit), awareness of other persons (in empathy, intersubjectivity, collectivity), linguistic activity (involving meaning, communication, understanding others), social interaction (including collective action), and everyday activity in our surrounding life-world (in a particular culture).

Furthermore, in a different dimension, we find various grounds or enabling conditions — conditions of the possibility — of intentionality, including embodiment, bodily skills, cultural context, language and other social practices, social background, and contextual aspects of intentional activities. Thus, phenomenology leads from conscious experience into conditions that help to give experience its intentionality. Traditional phenomenology has focused on subjective, practical, and social conditions of experience. Recent philosophy of mind, however, has focused especially on the neural substrate of experience, on how conscious experience and mental representation or intentionality are grounded in brain activity. It remains a difficult question how much of these grounds of experience fall within the province of phenomenology as a discipline. Cultural conditions thus seem closer to our experience and to our familiar self-understanding than do the electrochemical workings of our brain, much less our dependence on quantum-mechanical states of physical systems to which we may belong. The cautious thing to say is that phenomenology leads in some ways into at least some background conditions of our experience.

by Stanford Encyclopedia of Philosophy |  Read more:
Image: via:

Saturday, August 8, 2015

The President Defends His Iran Plan

On Wednesday at American University, Barack Obama made the case for the Iran nuclear agreement, and against its critics, in a long and detailed speech. The official transcript is here; the C-Span video is here. Later that afternoon, the president met in the Roosevelt Room of the White House with nine journalists to talk for another 90 minutes about the thinking behind the plan, and its likely political and strategic effects.

The Atlantic’s Jeffrey Goldberg was one of the people at that session, and he plans to write about some aspects of the discussion. Slate’s Fred Kaplan was another, and his report is here. I was there as well and will try to convey some of the texture and highlights.

Procedural note: The session was on the record, so reporters could quote everything the president said. We were allowed to take notes in real time, including typing them out on computers, but we were not allowed to use audio recorders. Direct quotes here have been checked against an internal transcript the White House made.

Nothing in the substance of Obama’s remarks would come as a surprise to people who heard his speech earlier that day or any of his comments in the weeks since the Iran deal was struck—most notably, his answers at the very long press conference he held last month. Obama made a point of this constancy. Half a dozen times, he began answers with, “As I said in the speech...” When one reporter observed that the American University address “reads like a lot of your other speeches,” Obama cut in to say jauntily, “I’m pretty consistent!,” which got a laugh.

But although the arguments are familiar, it is still different to hear them in a conversational rather than formal-oratorical setting. Here are some of the aspects that struck me.

Intellectual and Strategic Confidence

This is one micron away from the trait that Obama-detractors consider his arrogance and aloofness, so I’ll try to be precise about the way it manifested itself.

On the arguments for and against the deal, Obama rattled them off as he did in his speech and at his all-Iran July 15 press conference: You think this deal is flawed? Give me a better alternative. You think its inspection provisions are weak? Look at the facts and you’ll see that they’re more intrusive and verifiable than any other ever signed. You think because Iran’s government is extremist and anti-Semitic we shouldn’t negotiate with it? It’s because Iran has been an adversary that we need to negotiate limits, just as Richard Nixon and Ronald Reagan did with the evil and threatening Soviet Union. You think that rejecting this deal will somehow lead to a “better” deal? Well, let’s follow the logic and see why you’re wrong.

It’s the follow the logic theme I want to stress. Obama is clearly so familiar with these arguments that he was able to present them rapid-fire and as if each were a discrete paragraph in a legal brief. (At other times he spoke with great, pause-filled deliberation, marking his way through the sentence word by word.) And most paragraphs in that brief seemed to end, their arguments don’t hold up or, follow the logic or, it doesn’t make sense or, I don’t think you’ll find the weakness in my logic. You’ll see something similar if you read through his AU speech.

The context for Obama’s certainty is his knowledge that in the rest of the world, this agreement is not controversial at all. There is practically no other big strategic point on which the U.S., Russia, and China all agree—but they held together on this deal. (“I was surprised that Russia was able to compartmentalize the Iran issue, in light of the severe tensions that we have over Ukraine,” Obama said.) The French, Germans, and British stayed together too, even though they don’t always see eye-to-eye with America on nuclear issues. High-stakes measures don’t often get through the UN Security Council on a 15-0 vote; this deal did.

Some hardliners in Iran don’t like the agreement, as Obama frequently points out, and it has ramifications for many countries in the Middle East. But in Washington, only two blocs are actively urging the U.S. Congress to reject it. One is of course the U.S. Republican Party. The other is the Netanyahu administration in Israel plus a range of Israelis from many political parties—though some military and intelligence officials in Israel have dissented from Benjamin Netanyahu’s condemnation of the deal.

Obama has taken heat for pointing out in his speech that “every nation in the world that has commented publicly, with the exception of the Israeli government, has expressed support.” But that’s the plain truth. As delivered, this line of his speech was very noticeably stressed in the way I show:
I recognize that Prime Minister Netanyahu disagrees—disagrees strongly. I do not doubt his sincerity. But I believe he is wrong. … And as president of the United States, it would be an abrogation of my constitutional duty to act against my best judgment simply because it causes temporary friction with a dear friend and ally.
To bring this back to the theme of confidence: In this conversation, as in the speech, Obama gave Netanyahu and other Israeli critics credit for being sincere but misinformed. As for the GOP? Misinformed at best. “The fact that there is a robust debate in Congress is good,” he said in our session. “The fact that the debate sometimes seems unanchored to facts is not so good. ... [We need] to return to some semblance of bipartisanship and soberness when we approach these problems.” (I finished this post while watching the Fox News GOP debate, which gave “semblance of bipartisanship and soberness” new meaning.)

Obama’s intellectual confidence showed through in his certainty that if people looked at the facts and logic, they would come down on his side. His strategic confidence came through in his asserting that as a matter of U.S. national interest, “this to me is not a close call—and I say that based on having made a lot of tough calls.” Most foreign-policy judgments, he said, ended up being “judgments based on percentages,” and most of them “had hair,” the in-house term for complications. Not this one, in his view:

“When I see a situation like this one, where we can achieve an objective with a unified world behind us, and we preserve our hedge against it not working out, I think it would be foolish—even tragic—for us to pass up on that opportunity.”

If you agree with the way Obama follows these facts to these conclusions, as I do, you’re impressed by his determination to fight this out on the facts (rather than saying, in 2009 fashion, “We’ll listen to good ideas from all sides”). If you disagree, I can see how his Q.E.D./brainiac certainty could grate.

by James Fallows, The Atlantic |  Read more:
Image: Jonathan Ernst / Reuters