Monday, August 10, 2015
How to Flirt Best: The Perceived Effectiveness of Flirtation Techniques
Flirting is considered a universal and essential aspect of human interaction (Eibl-Eibesfeldt & Hass, 1967; Luscombe, 2008). Individuals, both married and single, flirt. Additionally, flirtation can be used for either courtship initiation or quasi-courtship purposes. (...)
Men and women alike use nonverbal signals, such as direct glancing, space-maximization movements, and automanipulations, in relevant mate-selection contexts (Renninger et al., 2004). The nonverbal courtship signaling involved in flirtation serves a useful purpose. Women use subtle indicators of male interest to help them pace the course of any potential relationship while they assess a man’s willingness and ability to donate resources. Therefore, the task for women is to express enough interest to elicit courtship behavior, but not to elicit a level of interest that leads a man to skip courtship behavior, while men attempt to display their status, health, strength, and intelligence in a desired, unintimidating way. From an evolutionary perspective flirting can be thought of as a product of our evolved mate acquisition adaptations. (...)
Since sexual access is crucial for male mate selection and securing a commitment is most important for women’s mate selection, one might expect a woman’s actions that are suggestive of sexual accessibility to be the most effective way to flirt with a man. Conversely, since women typically desire a long-term commitment, a man’s actions that are suggestive of a willingness to commit may be the most effective way for a man to flirt with a woman. Yet there is a void in the attraction literature: recent research has not examined this question. It is important to ascertain which flirtatious actions are most effective, as this knowledge will strengthen the knowledge base regarding both flirtation and human attraction more broadly. Since evolutionary-theory-based research can account for many aspects of mate attraction, yet has not examined the effectiveness of overt flirtation tactics, it is important to determine whether evolutionary theory can also account for the overt tactics that are most effective for flirting with members of the opposite sex.
T. Joel Wade and Jennifer Slemp, Interpersona | Read more:
Image: via:
h/t new shelton wet/dry
Sunday, August 9, 2015
Tinder and the Dawn of the “Dating Apocalypse”
[ed. The feeling I get from reading this is simply 'ick'. Maybe the story is a bit hyperbolic, and maybe there are alternatives, but there's no denying relationships are more technology-driven these days, and not just for dating (Facebook, Twitter, Instagram, Snapchat, all "social media" really) -- all powered by the omnipresent smartphone. Is that good or bad? Does it even matter?]

“Tinder sucks,” they say. But they don’t stop swiping.
At a booth in the back, three handsome twentysomething guys in button-downs are having beers. They are Dan, Alex, and Marty, budding investment bankers at the same financial firm, which recruited Alex and Marty straight from an Ivy League campus. (Names and some identifying details have been changed for this story.) When asked if they’ve been arranging dates on the apps they’ve been swiping at, all say not one date, but two or three: “You can’t be stuck in one lane … There’s always something better.” “If you had a reservation somewhere and then a table at Per Se opened up, you’d want to go there,” Alex offers.
“Guys view everything as a competition,” he elaborates with his deep, reassuring voice. “Who’s slept with the best, hottest girls?” With these dating apps, he says, “you’re always sort of prowling. You could talk to two or three girls at a bar and pick the best one, or you can swipe a couple hundred people a day—the sample size is so much larger. It’s setting up two or three Tinder dates a week and, chances are, sleeping with all of them, so you could rack up 100 girls you’ve slept with in a year.”
He says that he himself has slept with five different women he met on Tinder—“Tinderellas,” the guys call them—in the last eight days. Dan and Marty, also Alex’s roommates in a shiny high-rise apartment building near Wall Street, can vouch for that. In fact, they can remember whom Alex has slept with in the past week more readily than he can.
“Brittany, Morgan, Amber,” Marty says, counting on his fingers. “Oh, and the Russian—Ukrainian?”
“Ukrainian,” Alex confirms. “She works at—” He says the name of a high-end art auction house. Asked what these women are like, he shrugs. “I could offer a résumé, but that’s about it … Works at J. Crew; senior at Parsons; junior at Pace; works in finance … ”
“We don’t know what the girls are like,” Marty says.
“And they don’t know us,” says Alex.
And yet a lack of an intimate knowledge of his potential sex partners never presents him with an obstacle to physical intimacy, Alex says. Alex, his friends agree, is a Tinder King, a young man of such deft “text game”—“That’s the ability to actually convince someone to do something over text,” Marty explains—that he is able to entice young women into his bed on the basis of a few text exchanges, while letting them know up front he is not interested in having a relationship.
“How does he do it?,” Marty asks, blinking. “This guy’s got a talent.”
But Marty, who prefers Hinge to Tinder (“Hinge is my thing”), is no slouch at “racking up girls.” He says he’s slept with 30 to 40 women in the last year: “I sort of play that I could be a boyfriend kind of guy,” in order to win them over, “but then they start wanting me to care more … and I just don’t.”
“Dude, that’s not cool,” Alex chides in his warm way. “I always make a point of disclosing I’m not looking for anything serious. I just wanna hang out, be friends, see what happens … If I were ever in a court of law I could point to the transcript.” But something about the whole scenario seems to bother him, despite all his mild-mannered bravado. “I think to an extent it is, like, sinister,” he says, “ ‘cause I know that the average girl will think that there’s a chance that she can turn the tables. If I were like, Hey, I just wanna bone, very few people would want to meet up with you …
“Do you think this culture is misogynistic?” he asks lightly. (...)
Mobile dating went mainstream about five years ago; by 2012 it was overtaking online dating. In February, one study reported there were nearly 100 million people—perhaps 50 million on Tinder alone—using their phones as a sort of all-day, every-day, handheld singles club, where they might find a sex partner as easily as they’d find a cheap flight to Florida. “It’s like ordering Seamless,” says Dan, the investment banker, referring to the online food-delivery service. “But you’re ordering a person.”
The comparison to online shopping seems an apt one. Dating apps are the free-market economy come to sex. The innovation of Tinder was the swipe—the flick of a finger on a picture, no more elaborate profiles necessary and no more fear of rejection; users only know whether they’ve been approved, never when they’ve been discarded. OkCupid soon adopted the function. Hinge, which allows for more information about a match’s circle of friends through Facebook, and Happn, which enables G.P.S. tracking to show whether matches have recently “crossed paths,” use it too. It’s telling that swiping has been jocularly incorporated into advertisements for various products, a nod to the notion that, online, the act of choosing consumer brands and sex partners has become interchangeable.
“It’s instant gratification,” says Jason, 26, a Brooklyn photographer, “and a validation of your own attractiveness by just, like, swiping your thumb on an app. You see some pretty girl and you swipe and it’s, like, oh, she thinks you’re attractive too, so it’s really addicting, and you just find yourself mindlessly doing it.” “Sex has become so easy,” says John, 26, a marketing executive in New York. “I can go on my phone right now and no doubt I can find someone I can have sex with this evening, probably before midnight.”
by Nancy Jo Sales, Vanity Fair | Read more:
Image: Justin Bishop
What is Phenomenology?
Phenomenology is commonly understood in either of two ways: as a disciplinary field in philosophy, or as a movement in the history of philosophy.
The discipline of phenomenology may be defined initially as the study of structures of experience, or consciousness. Literally, phenomenology is the study of “phenomena”: appearances of things, or things as they appear in our experience, or the ways we experience things, thus the meanings things have in our experience. Phenomenology studies conscious experience as experienced from the subjective or first person point of view. This field of philosophy is then to be distinguished from, and related to, the other main fields of philosophy: ontology (the study of being or what is), epistemology (the study of knowledge), logic (the study of valid reasoning), ethics (the study of right and wrong action), etc.
The historical movement of phenomenology is the philosophical tradition launched in the first half of the 20th century by Edmund Husserl, Martin Heidegger, Maurice Merleau-Ponty, Jean-Paul Sartre, et al. In that movement, the discipline of phenomenology was prized as the proper foundation of all philosophy — as opposed, say, to ethics or metaphysics or epistemology. The methods and characterization of the discipline were widely debated by Husserl and his successors, and these debates continue to the present day. (The definition of phenomenology offered above will thus be debatable, for example, by Heideggerians, but it remains the starting point in characterizing the discipline.)
In recent philosophy of mind, the term “phenomenology” is often restricted to the characterization of sensory qualities of seeing, hearing, etc.: what it is like to have sensations of various kinds. However, our experience is normally much richer in content than mere sensation. Accordingly, in the phenomenological tradition, phenomenology is given a much wider range, addressing the meaning things have in our experience, notably, the significance of objects, events, tools, the flow of time, the self, and others, as these things arise and are experienced in our “life-world”.
Phenomenology as a discipline has been central to the tradition of continental European philosophy throughout the 20th century, while philosophy of mind has evolved in the Austro-Anglo-American tradition of analytic philosophy that developed throughout the 20th century. Yet the fundamental character of our mental activity is pursued in overlapping ways within these two traditions. Accordingly, the perspective on phenomenology drawn in this article will accommodate both traditions. The main concern here will be to characterize the discipline of phenomenology, in a contemporary purview, while also highlighting the historical tradition that brought the discipline into its own.
Basically, phenomenology studies the structure of various types of experience ranging from perception, thought, memory, imagination, emotion, desire, and volition to bodily awareness, embodied action, and social activity, including linguistic activity. The structure of these forms of experience typically involves what Husserl called “intentionality”, that is, the directedness of experience toward things in the world, the property of consciousness that it is a consciousness of or about something. According to classical Husserlian phenomenology, our experience is directed toward — represents or “intends” — things only through particular concepts, thoughts, ideas, images, etc. These make up the meaning or content of a given experience, and are distinct from the things they present or mean.
The basic intentional structure of consciousness, we find in reflection or analysis, involves further forms of experience. Thus, phenomenology develops a complex account of temporal awareness (within the stream of consciousness), spatial awareness (notably in perception), attention (distinguishing focal and marginal or “horizonal” awareness), awareness of one's own experience (self-consciousness, in one sense), self-awareness (awareness-of-oneself), the self in different roles (as thinking, acting, etc.), embodied action (including kinesthetic awareness of one's movement), purpose or intention in action (more or less explicit), awareness of other persons (in empathy, intersubjectivity, collectivity), linguistic activity (involving meaning, communication, understanding others), social interaction (including collective action), and everyday activity in our surrounding life-world (in a particular culture).
Furthermore, in a different dimension, we find various grounds or enabling conditions — conditions of the possibility — of intentionality, including embodiment, bodily skills, cultural context, language and other social practices, social background, and contextual aspects of intentional activities. Thus, phenomenology leads from conscious experience into conditions that help to give experience its intentionality. Traditional phenomenology has focused on subjective, practical, and social conditions of experience. Recent philosophy of mind, however, has focused especially on the neural substrate of experience, on how conscious experience and mental representation or intentionality are grounded in brain activity. It remains a difficult question how much of these grounds of experience fall within the province of phenomenology as a discipline. Cultural conditions thus seem closer to our experience and to our familiar self-understanding than do the electrochemical workings of our brain, much less our dependence on quantum-mechanical states of physical systems to which we may belong. The cautious thing to say is that phenomenology leads in some ways into at least some background conditions of our experience.
by Stanford Encyclopedia of Philosophy | Read more:
Image: via:
Saturday, August 8, 2015
The President Defends His Iran Plan
On Wednesday at American University, Barack Obama made the case for the Iran nuclear agreement, and against its critics, in a long and detailed speech. The official transcript is here; the C-Span video is here. Later that afternoon, the president met in the Roosevelt Room of the White House with nine journalists to talk for another 90 minutes about the thinking behind the plan, and its likely political and strategic effects.
The Atlantic’s Jeffrey Goldberg was one of the people at that session, and he plans to write about some aspects of the discussion. Slate’s Fred Kaplan was another, and his report is here. I was there as well and will try to convey some of the texture and highlights.
Procedural note: The session was on the record, so reporters could quote everything the president said. We were allowed to take notes in real time, including typing them out on computers, but we were not allowed to use audio recorders. Direct quotes here have been checked against an internal transcript the White House made.
Nothing in the substance of Obama’s remarks would come as a surprise to people who heard his speech earlier that day or any of his comments in the weeks since the Iran deal was struck—most notably, his answers at the very long press conference he held last month. Obama made a point of this constancy. Half a dozen times, he began answers with, “As I said in the speech...” When one reporter observed that the American University address “reads like a lot of your other speeches,” Obama cut in to say jauntily, “I’m pretty consistent!,” which got a laugh.
But although the arguments are familiar, it is still different to hear them in a conversational rather than formal-oratorical setting. Here are some of the aspects that struck me.

Intellectual and Strategic Confidence
This is one micron away from the trait that Obama-detractors consider his arrogance and aloofness, so I’ll try to be precise about the way it manifested itself.
On the arguments for and against the deal, Obama rattled them off as he did in his speech and at his all-Iran July 15 press conference: You think this deal is flawed? Give me a better alternative. You think its inspection provisions are weak? Look at the facts and you’ll see that they’re more intrusive and verifiable than any other ever signed. You think because Iran’s government is extremist and anti-Semitic we shouldn’t negotiate with it? It’s because Iran has been an adversary that we need to negotiate limits, just as Richard Nixon and Ronald Reagan did with the evil and threatening Soviet Union. You think that rejecting this deal will somehow lead to a “better” deal? Well, let’s follow the logic and see why you’re wrong.
It’s the follow the logic theme I want to stress. Obama is clearly so familiar with these arguments that he was able to present them rapid-fire and as if each were a discrete paragraph in a legal brief. (At other times he spoke with great, pause-filled deliberation, marking his way through the sentence word by word.) And most paragraphs in that brief seemed to end, their arguments don’t hold up or, follow the logic or, it doesn’t make sense or, I don’t think you’ll find the weakness in my logic. You’ll see something similar if you read through his AU speech.
The context for Obama’s certainty is his knowledge that in the rest of the world, this agreement is not controversial at all. There is practically no other big strategic point on which the U.S., Russia, and China all agree—but they held together on this deal. (“I was surprised that Russia was able to compartmentalize the Iran issue, in light of the severe tensions that we have over Ukraine,” Obama said.) The French, Germans, and British stayed together too, even though they don’t always see eye-to-eye with America on nuclear issues. High-stakes measures don’t often get through the UN Security Council on a 15-0 vote; this deal did.
Some hardliners in Iran don’t like the agreement, as Obama frequently points out, and it has ramifications for many countries in the Middle East. But in Washington, only two blocs are actively urging the U.S. Congress to reject it. One is of course the U.S. Republican Party. The other is the Netanyahu administration in Israel plus a range of Israelis from many political parties—though some military and intelligence officials in Israel have dissented from Benjamin Netanyahu’s condemnation of the deal.
Obama has taken heat for pointing out in his speech that “every nation in the world that has commented publicly, with the exception of the Israeli government, has expressed support.” But that’s the plain truth. As delivered, this line of his speech was very noticeably stressed in the way I show:
I recognize that Prime Minister Netanyahu disagrees—disagrees strongly. I do not doubt his sincerity. But I believe he is wrong. … And as president of the United States, it would be an abrogation of my constitutional duty to act against my best judgment simply because it causes temporary friction with a dear friend and ally.

To bring this back to the theme of confidence: In this conversation, as in the speech, Obama gave Netanyahu and other Israeli critics credit for being sincere but misinformed. As for the GOP? Misinformed at best. “The fact that there is a robust debate in Congress is good,” he said in our session. “The fact that the debate sometimes seems unanchored to facts is not so good. ... [We need] to return to some semblance of bipartisanship and soberness when we approach these problems.” (I finished this post while watching the Fox News GOP debate, which gave “semblance of bipartisanship and soberness” new meaning.)
Obama’s intellectual confidence showed through in his certainty that if people looked at the facts and logic, they would come down on his side. His strategic confidence came through in his asserting that as a matter of U.S. national interest, “this to me is not a close call—and I say that based on having made a lot of tough calls.” Most foreign-policy judgments, he said, ended up being “judgments based on percentages,” and most of them “had hair,” the in-house term for complications. Not this one, in his view:
“When I see a situation like this one, where we can achieve an objective with a unified world behind us, and we preserve our hedge against it not working out, I think it would be foolish—even tragic—for us to pass up on that opportunity.”
If you agree with the way Obama follows these facts to these conclusions, as I do, you’re impressed by his determination to fight this out on the facts (rather than saying, in 2009 fashion, “We’ll listen to good ideas from all sides”). If you disagree, I can see how his Q.E.D./brainiac certainty could grate.
by James Fallows, The Atlantic | Read more:
Image: Jonathan Ernst / Reuters

The Point of No Return: Climate Change Nightmares Are Already Here
Historians may look to 2015 as the year when shit really started hitting the fan. Some snapshots: In just the past few months, record-setting heat waves in Pakistan and India each killed more than 1,000 people. In Washington state's Olympic National Park, the rainforest caught fire for the first time in living memory. London reached 98 degrees Fahrenheit during the hottest July day ever recorded in the U.K.; The Guardian briefly had to pause its live blog of the heat wave because its computer servers overheated. In California, suffering from its worst drought in a millennium, a 50-acre brush fire swelled seventyfold in a matter of hours, jumping across the I-15 freeway during rush-hour traffic. Then, a few days later, the region was pounded by intense, virtually unheard-of summer rains. Puerto Rico is under its strictest water rationing in history as a monster El Niño forms in the tropical Pacific Ocean, shifting weather patterns worldwide.
On July 20th, James Hansen, the former NASA climatologist who brought climate change to the public's attention in the summer of 1988, issued a bombshell: He and a team of climate scientists had identified a newly important feedback mechanism off the coast of Antarctica that suggests mean sea levels could rise 10 times faster than previously predicted: 10 feet by 2065. The authors included this chilling warning: If emissions aren't cut, "We conclude that multi-meter sea-level rise would become practically unavoidable. Social disruption and economic consequences of such large sea-level rise could be devastating. It is not difficult to imagine that conflicts arising from forced migrations and economic collapse might make the planet ungovernable, threatening the fabric of civilization." (...)
Hansen's new study also shows how complicated and unpredictable climate change can be. Even as global ocean temperatures rise to their highest levels in recorded history, some parts of the ocean, near where ice is melting exceptionally fast, are actually cooling, slowing ocean circulation currents and sending weather patterns into a frenzy. Sure enough, a persistently cold patch of ocean is starting to show up just south of Greenland, exactly where previous experimental predictions of a sudden surge of freshwater from melting ice expected it to be. Michael Mann, another prominent climate scientist, recently said of the unexpectedly sudden Atlantic slowdown, "This is yet another example of where observations suggest that climate model predictions may be too conservative when it comes to the pace at which certain aspects of climate change are proceeding."
Since storm systems and jet streams in the United States and Europe partially draw their energy from the difference in ocean temperatures, the implication of one patch of ocean cooling while the rest of the ocean warms is profound. Storms will get stronger, and sea-level rise will accelerate. Scientists like Hansen only expect extreme weather to get worse in the years to come, though Mann said it was still "unclear" whether recent severe winters on the East Coast are connected to the phenomenon.
And yet, these aren't even the most disturbing changes happening to the Earth's biosphere that climate scientists are discovering this year. For that, you have to look not at the rising sea levels but to what is actually happening within the oceans themselves. (...)
Thanks to the pressure we're putting on the planet's ecosystem — warming, acidification and good old-fashioned pollution — the oceans are set up for several decades of rapid change. Here's what could happen next.
The combination of excessive nutrients from agricultural runoff, abnormal wind patterns and the warming oceans is already creating seasonal dead zones in coastal regions when algae blooms suck up most of the available oxygen. The appearance of low-oxygen regions has doubled in frequency every 10 years since 1960 and should continue to grow over the coming decades at an even greater rate.
So far, dead zones have remained mostly close to the coasts, but in the 21st century, deep-ocean dead zones could become common. These low-oxygen regions could gradually expand in size — potentially thousands of miles across — which would force fish, whales, pretty much everything upward. If this were to occur, large sections of the temperate deep oceans would suffer should the oxygen-free layer grow so pronounced that it stratifies, pushing surface ocean warming into overdrive and hindering upwelling of cooler, nutrient-rich deeper water.
Enhanced evaporation from the warmer oceans will create heavier downpours, perhaps destabilizing the root systems of forests, and accelerated runoff will pour more excess nutrients into coastal areas, further enhancing dead zones. In the past year, downpours have broken records in Long Island, Phoenix, Detroit, Baltimore, Houston and Pensacola, Florida.
Evidence for the above scenario comes in large part from our best understanding of what happened 250 million years ago, during the "Great Dying," when more than 90 percent of all oceanic species perished after a pulse of carbon dioxide and methane from land-based sources began a period of profound climate change. The conditions that triggered the "Great Dying" took hundreds of thousands of years to develop. But humans have been emitting carbon dioxide at a much quicker rate, so the current mass extinction only took 100 years or so to kick-start.
With all these stressors working against it, a hypoxic feedback loop could wind up destroying some of the oceans' most species-rich ecosystems within our lifetime. A recent study by Sarah Moffitt of the University of California-Davis said it could take the ocean thousands of years to recover. "Looking forward for my kid, people in the future are not going to have the same ocean that I have today," Moffitt said.

by Eric Holthaus, Rolling Stone | Read more:
Image: Corey Accardo/NOAA/AP

Beyond the Bird: A Definitive List of the Artworks in ‘The Goldfinch’
The Goldfinch, Carel Fabritius, 1654
“This is just about the first painting I ever really loved,” my mother was saying. “You’ll never believe it, but it was in a book I used to take out of the library when I was a kid. I used to sit on the floor by my bed and stare at it for hours, completely fascinated—that little guy! And, I mean, actually it’s incredible how much you can learn about a painting by spending a lot of time with a reproduction, even not a very good reproduction. I started off loving the bird, the way you’d love a pet or something, and ended up loving the way he was painted.” She laughed. “The Anatomy Lesson was in the same book actually, but it scared the pants off me. I used to slam the book shut when I opened it to that page by mistake.”
The girl and the old man had come up next to us. Self-consciously, I leaned forward and looked at the painting. It was a small picture, the smallest in the exhibition, and the simplest: a yellow finch, against a plain, pale ground, chained to a perch by its twig of an ankle.
“He was Rembrandt’s pupil, Vermeer’s teacher,” my mother said. “And this one little painting is really the missing link between the two of them—that clear pure daylight, you can see where Vermeer got his quality of light from. Of course, I didn’t know or care about any of that when I was a kid, the historical significance. But it’s there.”
I stepped back, to get a better look. It was a direct and matter-of-fact little creature, with nothing sentimental about it; and something about the neat, compact way it tucked down inside itself—its brightness, its alert watchful expression—made me think of pictures I’d seen of my mother when she was small: a dark-capped finch with steady eyes.
…
“Well, Egbert was Fabritius’s neighbor, he sort of lost his mind after the powder explosion, at least that’s how it looks to me, but Fabritius was killed and his studio was destroyed. Along with almost all his paintings, except this one.” She seemed to be waiting for me to say something, but when I didn’t, she continued: “He was one of the greatest painters of his day, in one of the greatest ages of painting. Very very famous in his time. It’s sad though, because maybe only five or six paintings survived, of all his work. All the rest of it is lost—everything he ever did.”
…
“Anyway, if you ask me,” my mother was saying, “this is the most extraordinary picture in the whole show. Fabritius is making clear something that he discovered all on his own, that no painter in the world knew before him—not even Rembrandt.”
Very softly—so softly I could barely hear her—I heard the girl whisper: “It had to live its whole life like that?”
I’d been wondering the same thing; the shackled foot, the chain was terrible; her grandfather murmured some reply but my mother (who seemed totally unaware of them, even though they were right next to us) stepped back and said: “Such a mysterious picture, so simple. Really tender—invites you to stand close, you know? All those dead pheasants back there and then this little living creature.”
…
“People die, sure,” my mother was saying. “But it’s so heartbreaking and unnecessary how we lose things. From pure carelessness. Fires, wars. The Parthenon, used as a munitions storehouse. I guess that anything we manage to save from history is a miracle.”
Excerpt from: The Goldfinch, by Donna Tartt
by Laura Oosterbeek, The Millions | Read more:
Image: Carel Fabritius

Friday, August 7, 2015
Could an Old-School Tube Amp Make the Music You Love Sound Better?
Like a lot of adults who attended too many rock concerts in their reckless youth, my hearing is not what it used to be. On more than one occasion, I remember stumbling out of Winterland in San Francisco after seeing high-watt bands like Hot Tuna or Pink Floyd, putting the key in the car’s ignition, giving it a turn, and then having no idea whatsoever if the engine had roared to life.
That’s what four or five hours of standing in front of a wall of speakers pumping music at more than 100 decibels will do to a person’s hearing—for the following 30 or 40 minutes, the world sounds soft and muffled, as if the air is thick with invisible clouds of cotton balls. At the time, it didn’t occur to me that I was doing lasting damage to the cochlea in my inner ear, and all these years later, I don’t necessarily wince at every sound that happens to be loud. But I do have trouble hearing someone speaking to me in a crowded restaurant, and certain sounds with the wrong acoustic profile (for me, anyway) will make my ears ring for hours.
Most painful—emotionally and literally—is the mediocre fidelity of my home stereo system, which teases the listener with the occasional splash of treble or thump of bass, but mostly delivers cacophonous mush. It’s the opposite of what most people would describe as “warm,” which is a narrow, technical term of art among audiophiles. For the rest of us, warm suggests rich and rounded tones, notes and chords of such depth the listener can almost imagine he’s in the presence of the singer or musician performing without the aid of microphones or amplifiers. Warm is intimate, warm is clean and pure, warm doesn’t make my ears ring.
Uniquely, tube amplifiers, which use vacuum tubes to amplify electrical signals, are said to deliver this sublime auditory experience more reliably than their solid-state counterparts, which use transistors to do the same thing. (Digital devices run on integrated circuits, and use software to achieve their sound, so they are not considered here.) In particular, most tube amps are regarded as being less likely to create harmonic distortions at higher frequencies than all but the best and most expensive solid-state amps. They are generally worse at the lower frequencies, but our ears don’t hear most of those lower frequency distortions very well, which makes them “sonically benign.” Distortion at high frequencies, however, is easily heard and contributes to listening fatigue (i.e., ringing ears), which may be one of the reasons why tube amps are said to sound warm.
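The harmonic-distortion argument can be made concrete with a little arithmetic. The sketch below drives a pure 1 kHz sine through a generic soft-clipping nonlinearity (a tanh curve, a common textbook stand-in for a saturating gain stage, not a model of any particular amplifier) and measures how much energy lands on the harmonics; the drive level and harmonic count are arbitrary choices for illustration.

```python
import math

def harmonic_amplitudes(wave, sample_rate, fundamental, n_harmonics):
    """Amplitude of each harmonic via a single-bin DFT at each harmonic
    frequency. Assumes the buffer holds a whole number of cycles."""
    n = len(wave)
    amps = []
    for h in range(1, n_harmonics + 1):
        f = fundamental * h
        re = sum(wave[i] * math.cos(2 * math.pi * f * i / sample_rate) for i in range(n))
        im = sum(wave[i] * math.sin(2 * math.pi * f * i / sample_rate) for i in range(n))
        amps.append(2 * math.hypot(re, im) / n)
    return amps

# 100 whole cycles of a 1 kHz sine at 48 kHz, then soft-clipped.
sr, f0 = 48000, 1000
clean = [math.sin(2 * math.pi * f0 * i / sr) for i in range((sr // f0) * 100)]
driven = [math.tanh(2.0 * x) for x in clean]

amps = harmonic_amplitudes(driven, sr, f0, 5)
# Total harmonic distortion: harmonic energy relative to the fundamental.
thd = math.sqrt(sum(a * a for a in amps[1:])) / amps[0]
print(f"THD: {100 * thd:.1f}%")
```

Because tanh is a symmetric curve, this toy produces only odd harmonics (3rd, 5th, …); the asymmetric transfer curves of real single-ended tube stages are what generate the even-order harmonics audiophiles tend to describe as pleasant.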
I don’t know much about harmonics, but I can vouch for the sound quality of tube amplifiers. I grew up listening to music played through my parents’ Fisher 500-C stereo receiver, a tube amp from the early 1960s that did wondrous things to albums like “Let It Bleed” by the Rolling Stones when played at very high volumes. My ears never rang after listening to that, no matter how loud. The same could not be said for the Kenwood KR-7200 solid-state receiver I took with me to college—if memory serves, I sold it to a wide-eyed freshman early in my sophomore year.
Lately, the desire to replicate the warm auditory memories of my youth has become a musical preoccupation of mine, since I’m secretly—if only aspirationally—in the market for a new stereo. Sure, a tube amplifier won’t help me hear someone talking to me in a noisy restaurant, but it does promise relief from the worst sonic indignity of all—not being able to listen to the music I love at a respectable volume without destroying what’s left of my hearing. If tubes could do that, it would be nothing short of a miracle.
So, I went shopping. Not for equipment yet, but for knowledge. Is there something about the way in which tubes, or “valves” as they are known in the U.K., amplify sound that changes how we experience it once it finds its way through our ear canals and into our brains? And although I know what the word “warm” means to me, what does it mean to the audiophiles and the people who make tube amplifiers and other types of hi-fi stereo equipment for a living? To get answers to these and other questions, I spoke to some of the leading authorities and manufacturers of tube and solid-state amplifiers in the United States. And, the icing on the cake, I got to listen to the best stereo system I’ve ever heard.
To begin, I consulted the highly regarded “Sounds Like?” audio glossary, written by the late, great J. Gordon Holt, who founded “Stereophile” magazine in 1962. According to Holt, warm describes sound that is “the same as dark, but less tilted.” In case you’re curious, “dark” refers to “the audible effect of a frequency response which is clockwise-tilted across the entire range, so that output diminishes with increasing frequency,” while “tilted” indicates an “across-the-board rotation of an otherwise flat frequency response, so that the device’s output increases or decreases at a uniform rate with increasing frequency.”
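Holt’s “tilted” response, for what it’s worth, is the one entry in that chain of definitions you can actually compute: gain that rises or falls at a constant number of decibels per octave. A minimal sketch (the -1.5 dB/octave slope and 1 kHz pivot are arbitrary numbers of mine, chosen only to show the shape of a “dark” tilt):

```python
import math

def tilt_db(freq_hz, db_per_octave, pivot_hz=1000.0):
    """Gain in dB of a uniformly tilted frequency response: the gain
    changes at a constant rate per octave, hinged at a pivot frequency."""
    return db_per_octave * math.log2(freq_hz / pivot_hz)

# A 'dark' (clockwise-tilted) response: output falls as frequency rises.
for f in (125, 250, 500, 1000, 2000, 4000, 8000):
    print(f"{f:>5} Hz: {tilt_db(f, -1.5):+.1f} dB")
```

Each doubling of frequency loses exactly 1.5 dB, so the treble is shaved and the bass lifted by the same uniform rate, which is all Holt means by the word.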
This is not at all what I was expecting. Turns out, my definition of warm (“intimate,” “clean and pure,” “doesn’t make my ears ring”) is too “euphonic” for Holt, which is a word he dismisses in the “E” section of his glossary as “pleasing to the ear” but having “a connotation of exaggerated richness rather than literal accuracy.”
Audiophiles, it seems, have no use for emotional words like “warm,” but isn’t “pleasing to the ear” what I want? In the world of high-end hi-fi, which is where you need to go if you want to learn anything meaningful about tubes and tube amplifiers, the answer to that seemingly simple question is just about always “no.”

by Ben Marks, Collectors Weekly | Read more:
Image: McIntosh
Farther Away
[ed. See also: Why The End of the Tour isn't really about my friend David Foster Wallace]
On the eve of my departure for Santiago, I visited my friend Karen, the widow of the writer David Foster Wallace. As I was getting ready to leave her house, she asked me, out of the blue, whether I might like to take along some of David’s cremation ashes and scatter them on Masafuera. I said I would, and she found an antique wooden matchbox, a tiny book with a sliding drawer, and put some ashes in it, saying that she liked the thought of part of David coming to rest on a remote and uninhabited island. It was only later, after I’d driven away from her house, that I realized that she’d given me the ashes as much for my sake as for hers or David’s. She knew, because I had told her, that my current state of flight from myself had begun soon after David’s death, two years earlier. At the time, I’d made a decision not to deal with the hideous suicide of someone I’d loved so much but instead to take refuge in anger and work. Now that the work was done, though, it was harder to ignore the circumstance that, arguably, in one interpretation of his suicide, David had died of boredom and in despair about his future novels. The desperate edge to my own recent boredom: might this be related to my having broken a promise to myself? The promise that, after I’d finished my book project, I would allow myself to feel more than fleeting grief and enduring anger at David’s death? (...)
David wrote about weather as well as anyone who ever put words on paper, and he loved his dogs more purely than he loved anything or anyone else, but nature itself didn’t interest him, and he was utterly indifferent to birds. Once, when we were driving near Stinson Beach, in California, I’d stopped to give him a telescope view of a long-billed curlew, a species whose magnificence is to my mind self-evident and revelatory. He looked through the scope for two seconds before turning away with patent boredom. “Yeah,” he said with his particular tone of hollow politeness, “it’s pretty.” In the summer before he died, sitting with him on his patio while he smoked cigarettes, I couldn’t keep my eyes off the hummingbirds around his house and was saddened that he could, and while he was taking his heavily medicated afternoon naps I was studying the birds of Ecuador for an upcoming trip, and I understood the difference between his unmanageable misery and my manageable discontents to be that I could escape myself in the joy of birds and he could not.
He was sick, yes, and in a sense the story of my friendship with him is simply that I loved a person who was mentally ill. The depressed person then killed himself, in a way calculated to inflict maximum pain on those he loved most, and we who loved him were left feeling angry and betrayed. Betrayed not merely by the failure of our investment of love but by the way in which his suicide took the person away from us and made him into a very public legend. People who had never read his fiction, or had never even heard of him, read his Kenyon College commencement address in the Wall Street Journal and mourned the loss of a great and gentle soul. A literary establishment that had never so much as short-listed one of his books for a national prize now united to declare him a lost national treasure. Of course, he was a national treasure, and, being a writer, he didn’t “belong” to his readers any less than to me. But if you happened to know that his actual character was more complex and dubious than he was getting credit for, and if you also knew that he was more lovable—funnier, sillier, needier, more poignantly at war with his demons, more lost, more childishly transparent in his lies and inconsistencies—than the benignant and morally clairvoyant artist/saint that had been made of him, it was still hard not to feel wounded by the part of him that had chosen the adulation of strangers over the love of the people closest to him.
The people who knew David least well are most likely to speak of him in saintly terms. What makes this especially strange is the near-perfect absence, in his fiction, of ordinary love. Close loving relationships, which for most of us are a foundational source of meaning, have no standing in the Wallace fictional universe. What we get, instead, are characters keeping their heartless compulsions secret from those who love them; characters scheming to appear loving or to prove to themselves that what feels like love is really just disguised self-interest; or, at most, characters directing an abstract or spiritual love toward somebody profoundly repellent—the cranial-fluid-dripping wife in “Infinite Jest,” the psychopath in the last of the interviews with hideous men. David’s fiction is populated with dissemblers and manipulators and emotional isolates, and yet the people who had only glancing or formal contact with him took his rather laborious hyper-considerateness and moral wisdom at face value.
On the eve of my departure for Santiago, I visited my friend Karen, the widow of the writer David Foster Wallace. As I was getting ready to leave her house, she asked me, out of the blue, whether I might like to take along some of David’s cremation ashes and scatter them on Masafuera. I said I would, and she found an antique wooden matchbox, a tiny book with a sliding drawer, and put some ashes in it, saying that she liked the thought of part of David coming to rest on a remote and uninhabited island. It was only later, after I’d driven away from her house, that I realized that she’d given me the ashes as much for my sake as for hers or David’s. She knew, because I had told her, that my current state of flight from myself had begun soon after David’s death, two years earlier. At the time, I’d made a decision not to deal with the hideous suicide of someone I’d loved so much but instead to take refuge in anger and work. Now that the work was done, though, it was harder to ignore the circumstance that, arguably, in one interpretation of his suicide, David had died of boredom and in despair about his future novels. The desperate edge to my own recent boredom: might this be related to my having broken a promise to myself? The promise that, after I’d finished my book project, I would allow myself to feel more than fleeting grief and enduring anger at David’s death? (...)

He was sick, yes, and in a sense the story of my friendship with him is simply that I loved a person who was mentally ill. The depressed person then killed himself, in a way calculated to inflict maximum pain on those he loved most, and we who loved him were left feeling angry and betrayed. Betrayed not merely by the failure of our investment of love but by the way in which his suicide took the person away from us and made him into a very public legend. People who had never read his fiction, or had never even heard of him, read his Kenyon College commencement address in the Wall Street Journal and mourned the loss of a great and gentle soul. A literary establishment that had never so much as short-listed one of his books for a national prize now united to declare him a lost national treasure. Of course, he was a national treasure, and, being a writer, he didn’t “belong” to his readers any less than to me. But if you happened to know that his actual character was more complex and dubious than he was getting credit for, and if you also knew that he was more lovable—funnier, sillier, needier, more poignantly at war with his demons, more lost, more childishly transparent in his lies and inconsistencies—than the benignant and morally clairvoyant artist/saint that had been made of him, it was still hard not to feel wounded by the part of him that had chosen the adulation of strangers over the love of the people closest to him.
The people who knew David least well are most likely to speak of him in saintly terms. What makes this especially strange is the near-perfect absence, in his fiction, of ordinary love. Close loving relationships, which for most of us are a foundational source of meaning, have no standing in the Wallace fictional universe. What we get, instead, are characters keeping their heartless compulsions secret from those who love them; characters scheming to appear loving or to prove to themselves that what feels like love is really just disguised self-interest; or, at most, characters directing an abstract or spiritual love toward somebody profoundly repellent—the cranial-fluid-dripping wife in “Infinite Jest,” the psychopath in the last of the interviews with hideous men. David’s fiction is populated with dissemblers and manipulators and emotional isolates, and yet the people who had only glancing or formal contact with him took his rather laborious hyper-considerateness and moral wisdom at face value.
The curious thing about David’s fiction, though, is how recognized and comforted, how loved, his most devoted readers feel when reading it. To the extent that each of us is stranded on his or her own existential island—and I think it’s approximately correct to say that his most susceptible readers are ones familiar with the socially and spiritually isolating effects of addiction or compulsion or depression—we gratefully seized on each new dispatch from that farthest-away island which was David. At the level of content, he gave us the worst of himself: he laid out, with an intensity of self-scrutiny worthy of comparison to Kafka and Kierkegaard and Dostoyevsky, the extremes of his own narcissism, misogyny, compulsiveness, self-deception, dehumanizing moralism and theologizing, doubt in the possibility of love, and entrapment in footnotes-within-footnotes self-consciousness. At the level of form and intention, however, this very cataloguing of despair about his own authentic goodness is received by the reader as a gift of authentic goodness: we feel the love in the fact of his art, and we love him for it.
David and I had a friendship of compare and contrast and (in a brotherly way) compete. A few years before he died, he signed my hardcover copies of his two most recent books. On the title page of one of them, I found the traced outline of his hand; on the title page of the other was an outline of an erection so huge that it ran off the page, annotated with a little arrow and the remark “scale 100%.” I once heard him enthusiastically describe, in the presence of a girl he was dating, someone else’s girlfriend as his “paragon of womanhood.” David’s girl did a wonderfully slow double take and said, “What?” Whereupon David, whose vocabulary was as large as anybody’s in the Western Hemisphere, took a deep breath and, letting it out, said, “I’m suddenly realizing that I’ve never actually known what the word ‘paragon’ means.”
He was lovable the way a child is lovable, and he was capable of returning love with a childlike purity. If love is nevertheless excluded from his work, it’s because he never quite felt that he deserved to receive it. He was a lifelong prisoner on the island of himself. What looked like gentle contours from a distance were in fact sheer cliffs. Sometimes only a little of him was crazy, sometimes nearly all of him, but, as an adult, he was never entirely not crazy. What he’d seen of his id while trying to escape his island prison by way of drugs and alcohol, only to find himself even more imprisoned by addiction, seems never to have ceased to be corrosive of his belief in his lovability. Even after he got clean, even decades after his late-adolescent suicide attempt, even after his slow and heroic construction of a life for himself, he felt undeserving. And this feeling was intertwined, ultimately to the point of indistinguishability, with the thought of suicide, which was the one sure way out of his imprisonment; surer than addiction, surer than fiction, and surer, finally, than love. (...)
Adulatory public narratives of David, which take his suicide as proof that (as Don McLean sang of van Gogh) “this world was never meant for one as beautiful as you,” require that there have been a unitary David, a beautiful and supremely gifted human being who, after quitting the antidepressant Nardil, which he’d been taking for twenty years, succumbed to major depression and was therefore not himself when he committed suicide. I will pass over the question of diagnosis (it’s possible he was not simply depressive) and the question of how such a beautiful human being had come by such vividly intimate knowledge of the thoughts of hideous men. But bearing in mind his fondness for Screwtape and his demonstrable penchant for deceiving himself and others—a penchant that his years in recovery held in check but never eradicated—I can imagine a narrative of ambiguity and ambivalence truer to the spirit of his work. By his own account to me, he had never ceased to live in fear of returning to the psych ward where his early suicide attempt had landed him. The allure of suicide, the last big score, may go underground, but it never entirely disappears. Certainly, David had “good” reasons to go off Nardil—his fear that its long-term physical effects might shorten the good life he’d managed to make for himself; his suspicion that its psychological effects might be interfering with the best things in his life (his work and his relationships)—and he also had less “good” reasons of ego: a perfectionist wish to be less substance-dependent, a narcissistic aversion to seeing himself as permanently mentally ill. What I find hard to believe is that he didn’t have very bad reasons as well. Flickering beneath his beautiful moral intelligence and his lovable human weakness was the old addict’s consciousness, the secret self, which, after decades of suppression by the Nardil, finally glimpsed its chance to break free and have its suicidal way.
This duality played out in the year that followed his quitting Nardil. He made strange and seemingly self-defeating decisions about his care, engaged in a fair amount of bamboozlement of his shrinks (whom one can only pity for having drawn such a brilliantly complicated case), and in the end created an entire secret life devoted to suicide. Throughout that year, the David whom I knew well and loved immoderately was struggling bravely to build a more secure foundation for his work and his life, contending with heartbreaking levels of anxiety and pain, while the David whom I knew less well, but still well enough to have always disliked and distrusted, was methodically plotting his own destruction and his revenge on those who loved him.
That he was blocked with his work when he decided to quit Nardil—was bored with his old tricks and unable to muster enough excitement about his new novel to find a way forward with it—is not inconsequential. He’d loved writing fiction, “Infinite Jest” in particular, and he’d been very explicit, in our many discussions of the purpose of novels, about his belief that fiction is a solution, the best solution, to the problem of existential solitude. Fiction was his way off the island, and as long as it was working for him—as long as he’d been able to pour his love and passion into preparing his lonely dispatches, and as long as these dispatches were coming as urgent and fresh and honest news to the mainland—he’d achieved a measure of happiness and hope for himself. When his hope for fiction died, after years of struggle with the new novel, there was no other way out but death. If boredom is the soil in which the seeds of addiction sprout, and if the phenomenology and the teleology of suicidality are the same as those of addiction, it seems fair to say that David died of boredom. In his early story “Here and There,” the brother of a perfection-seeking young man, Bruce, invites him to consider “how boring it would be to be perfect,” and Bruce tells us:
I defer to Leonard’s extensive and hard-earned knowledge about being boring, but do point out that since being boring is an imperfection, it would by definition be impossible for a perfect person to be boring.

It’s a good joke; and yet the logic is somehow strangulatory. It’s the logic of “everything and more,” to echo yet another of David’s titles, and everything and more is what he wanted from and for his fiction. This had worked for him before, in “Infinite Jest.” But to try to add more to what is already everything is to risk having nothing: to become boring to yourself.
by Jonathan Franzen, New Yorker | Read more:
Image: Zohar Lazar
Strunk and White’s Macho Grammar Club
“Be clear.” “Omit needless words.” “Do not overwrite.” “Avoid fancy words.” “Use the active voice.” Who can argue with such common-sense commandments, especially when they’re delivered with Voice-of-God authority? Certainly not the generations of students, secretaries, working writers, and wannabe Hemingways who’ve feared and revered Strunk and White’s Elements of Style as the Bible of “plain English style,” as E.B. White calls it in his introduction. (Since 1959, when White revised and substantially expanded the brief guide to prose style self-published in 1918 by William Strunk Jr., a professor of English literature at Cornell, Strunk & White, as most of us know it, has sold more than 10 million copies.)
Can it really be coincidence that, smack on the first page, in a note about exceptions to one of his Elementary Rules Of Usage (“Form the possessive singular of nouns by adding ’s, whatever the final consonant”), Strunk gives as an example, “Moses’ laws”? The Elements of Style, more than any other book, has set in stone American ideas about proper usage and, more profoundly, good style. Professor Strunk wrote his little tract as a stout defense of “the rules of usage and principles of composition most commonly violated,” the red-flag word in that sentence being “violated.”
Usage absolutists are the Scalia-esque Originalists of the language-maven set. Their emphasis on “timeless” grammatical truths, in opposition to most linguists’ view of language as a living, changing thing, is at heart conservative; their fulminations about the grammatical violations perpetrated by the masses mask deeper anxieties about moral relativism and social turbulence. (Strunk published Elements in the last year of the Great War, a cataclysm that turned Europe into history’s goriest slaughter bench, fanned the flames of revolution in Russia, and shaped the cynical, disillusioned worldview of Hemingway and his “lost generation,” as Gertrude Stein called them.) For usage purists, the decline of the language portends the fall of the republic. We’re only one misplaced comma away from the barbarians at the gates, my fellow Romans.
“No book is genuinely free from political bias,” George Orwell wrote, in his essay “Why I Write.” “The opinion that art should have nothing to do with politics is itself a political attitude.” The opinion that the canon laws of usage, composition, and style—our unquestioned assumptions about what constitutes “good prose”—have nothing to do with politics is itself a political attitude. Obviously, it’s easier for you to make out my meaning if the pane you’re peering through isn’t some Baroque fantasy in stained glass. But the Anglo-American article of faith that clarity can only be achieved through words of one syllable and sentences fit for a telegram is pure dogma. The Elements of Style is as ideological, in its bow-tied, wire-rimmed way, as any manifesto.
Strunkian style embraces the cultural logic of the Machine Age, which by 1918 was well underway. The head-whipping speedup of the 20th century, its throttle thrown wide open by faster modes of travel and accelerating social change, soon found poetic expression in the aerodynamic aesthetic known as streamlining: toasters with speedlines, teardrop-shaped prototype cars, cocktail shakers that looked like they could break the sound barrier. Anticipating streamlining, Strunk decrees, “A sentence should contain no unnecessary words, a paragraph no unnecessary sentences, for the same reason that a drawing should have no unnecessary lines and a machine no unnecessary parts.” Likewise, his golden rule, “omit needless words,” complements the “less is more” ethos of the Bauhaus school of design, another expression of Machine Age Modernism. Optimized for peak efficiency, Strunk’s is a prose for an age of standardized widgets and standardized workers, when the efficiency gospel of F.W. Taylor, father of “scientific management,” was percolating out of the workplace, into the culture at large. “Mass reproduction is aided especially by the reproduction of the masses,” wrote the Marxist cultural critic Walter Benjamin, in 1936. Why not standardize the mass production of prose? “Prefer the standard to the offbeat,” admonishes White, cautioning against “eccentricities in language” in the “Approach to Style” he appended to his 1959 revision. Strunk & White is a child of its times—the early Machine Age, when the Professor first published it, and the gray-flannel ‘50s, when White revised it—in other ways, too. There’s much talk of vigorous prose, “vigor” being a byword in Strunk’s day for cold-shower masculinity of the strenuous, Teddy Roosevelt sort. 
White juxtaposes the bicep-flexing “toughness” of good writing with the “unwholesome,” sometimes even “nauseating” ickiness of “rich, ornate prose.” “If the sickly sweet word, the overblown phrase are your natural form of expression,” he counsels, “you will have to compensate for it by a show of vigor.” The implication is obvious: if a lean, mean Modernist prose of “plainness, simplicity, orderliness, [and] sincerity” is manly, then a style that rejoices in ornament and complexity and sharpens its wit with the knowing insincerity of irony or camp is unmanly—feminine or, worse yet, sissified. (Pop quiz: Why do we call overwrought language “flowery”? Because flowers recall that unmentionable part of a lady’s anatomy, and the effeminization of language saps it of its potency. Why is purple prose purple? Because purple has been synonymous with foppish unmanliness ever since Oscar Wilde wore mauve gloves to the premiere of Lady Windermere’s Fan.)

by Mark Dery, Daily Beast | Read more:
Image: New York Times Co./Getty Images

Here Is Everything I Learned in New York City
Wear Comfortable Shoes
Yes, there are women who walk around New York in five-inch stilettos. There are also people who like to have sex hanging from a ceiling with a ball gag in their mouth. This world is strange and mysterious. But New York is a walking city, a city of derring-do, and you don’t want to be limping behind.
Don’t Be Afraid to Ask for What You Want
When I first came to New York, I was intimidated by delis, which is a little bit like being frightened of lawn sprinklers. But my heart would pound at the counter as I approached, feeling the impending pressure of a public decision.
“Whaddaya want?” the man would ask me.
“Um, what do you have?” I’d ask, accustomed to a detailed list of signature sandwiches from which to choose.
The man would look at an expansive glass case of cold cuts and cheeses splayed out before me with a gesture that suggested: What do you need, lady, a map? Ordering a sandwich at a deli is, technically, the easiest way to order a sandwich, because they will make it exactly as you want it. But I spent so much of my life suppressing exactly what I wanted in favor of what was available that I had no idea how I liked my sandwiches. I preferred to take other people’s suggestions, and then, when they weren’t looking, pick off the parts I didn’t like—which is an apt metaphor for my life at that time.
Sometimes I panicked. “I’ll take a pastrami on rye,” I said once, because it sounded like something a Woody Allen character would order, and god forbid the old lady buying cat food behind me should think of me as anything less than an authentic New Yorker.
I was embarrassed to ask for what I really wanted: Ham and American cheese on white bread with spicy mustard, which is possibly the least exotic, least adventurous, did-you-order-that-for-your-invisible-seven-year-old-child request you can make at a deli.
But in life, you can either ask for what you want and suffer the possibility of judgment, or you can pretend you want something else and almost certainly get it. It’s remarkable to me how long I chose the latter.
When I finally asked for a sandwich as I really wanted it, the man behind the counter simply nodded. “That all?” he asked.
My face prickled with embarrassment. “Should I get something else?”
He shrugged. “It’s not my sandwich!”
And that was the thing: It was not his sandwich. Why on earth would he care what kind of sandwich I ate, and if he did care what kind of sandwich I ate, what the hell was wrong with him? “I feel self-conscious for such a boring order,” I told him.
He smiled. “You’re an easy order.”
And from then on, we were friends. He knew my order, because few others asked for it. In fact, you could say it was my signature sandwich.

Be Decisive
People complain New Yorkers are rude, which is imprecise. New Yorkers are some of the kindest, most good-hearted people I’ve ever met. But New Yorkers are busy, and they cannot tolerate dawdling. And that’s a challenge, because the city is a choose-your-own-adventure game of constant decisions: Cab or subway? Express or local? Highway or side street? Which do you want? Answer now!
At first, I found this crippling, because I was obsessed with making the right decision and felt like I kept whiffing it. I lived in the hipster Brooklyn neighborhood of handlebar mustaches, when I would have been happier in the bougie neighborhood of spendy trattorias. I went to the dive bar, when all I wanted was a craft cocktail. This kind of thinking will make you miserable, because you will always feel the life you deserve is not only out of reach but being enjoyed by thinner, smarter people down the hall. But eventually, I realized there is only one bad decision, the decision I moved to New York to avoid: Doing nothing at all. That is unforgivable.
by Sarah Hepola, TMN | Read more:
Image: Robert Moses, The Panorama of the City of New York, 1964.

Thursday, August 6, 2015