Saturday, November 26, 2011

Examining the Big Lie

It’s fair to say that our discussion about the big lie touched a nerve.

The big lie of the financial crisis, of course, is that troubling technique used to rewrite the narrative of history and shift blame away from the bad ideas and terrible policies that created it.

Based on the scores of comments, people are clearly interested in understanding the causes of the economic disaster.

I want to move beyond what I call “the squishy narrative” — an imprecise, sloppy way to think about the world — toward a more rigorous form of analysis. Unlike other disciplines, economics looks at actual consequences in terms of real dollars. So let’s follow the money and see what the data reveal about the causes of the collapse.

Rather than attend a college-level seminar on the complex philosophy of causation, we’ll keep it simple. To assess how much blame any factor deserves for a subsequent event, consider whether that element was 1) proximate, 2) statistically valid, and 3) necessary and sufficient.
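[ed. For the technically minded: the three-part test above amounts to a checklist, and a toy Python sketch of applying it follows. The candidate factors and their yes/no answers are invented for illustration; they are not Ritholtz's data or code.]

    # A toy rendering of the three-hurdle blame test described above.
    # Candidates and their answers are hypothetical illustrations.
    from dataclasses import dataclass

    @dataclass
    class Candidate:
        name: str
        proximate: bool                 # close in time/mechanism to the bust?
        statistically_valid: bool       # holds up against the data?
        necessary_and_sufficient: bool  # could it alone have produced the crisis?

        def survives_test(self) -> bool:
            # A factor is blameworthy only if it clears all three hurdles.
            return (self.proximate
                    and self.statistically_valid
                    and self.necessary_and_sufficient)

    candidates = [
        Candidate("CRA of 1977", False, False, False),
        Candidate("Unregulated nonbank lenders", True, True, True),
    ]
    for c in candidates:
        verdict = "clears all three hurdles" if c.survives_test() else "fails the test"
        print(f"{c.name}: {verdict}")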

Consider the causes cited by those who’ve taken up the big lie. Take, for example, New York Mayor Michael Bloomberg’s statement that it was Congress that forced banks to make ill-advised loans to people who could not afford them and who then defaulted in large numbers. He and others claim that this caused the crisis. Others have suggested these were to blame: the home mortgage interest deduction, the Community Reinvestment Act of 1977, the 1994 Housing and Urban Development memo, Fannie Mae and Freddie Mac, Rep. Barney Frank (D-Mass.) and homeownership targets set by both the Clinton and Bush administrations.

When an economy booms or busts, money gets misspent, assets rise in price, fortunes are made. Out of all that comes a set of easy-to-discern facts.

Here are key things we know based on data. Together, they present a series of tough hurdles for the big lie proponents.

by Barry Ritholtz, The Big Picture |  Continue reading:
Illustration: Washington Post

License Plate Readers and Cell Phone Rippers

[ed. Civil liberties?  Oh yeah, those old things...yawn.  I'm sure authorities would never think of using the data inappropriately.]

Scores of cameras across the city capture 1,800 images a minute and download the information into a rapidly expanding archive that can pinpoint people’s movements all over town.  (...)

More than 250 cameras in the District and its suburbs scan license plates in real time, helping police pinpoint stolen cars and fleeing killers. But the program quietly has expanded beyond what anyone had imagined even a few years ago.

With virtually no public debate, police agencies have begun storing the information from the cameras, building databases that document the travels of millions of vehicles.

Nowhere is that more prevalent than in the District, which has more than one plate-reader per square mile, the highest concentration in the nation. Police in the Washington suburbs have dozens of them as well, and local agencies plan to add many more in coming months, creating a comprehensive dragnet that will include all the approaches into the District.  (...)
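[ed. For readers wondering what "storing the information" looks like in practice, here is a minimal, hypothetical Python/SQLite sketch of the kind of plate-read log the article describes. The schema, camera IDs, plates and coordinates are all invented; no actual police system is depicted.]

    # Hypothetical plate-read log: each camera hit is one row, so a
    # single query over one plate reconstructs a vehicle's movements.
    import sqlite3

    db = sqlite3.connect(":memory:")
    db.execute("""CREATE TABLE plate_reads (
        plate     TEXT,
        read_at   TEXT,  -- ISO-8601 timestamp of the camera hit
        camera_id TEXT,  -- which reader fired
        latitude  REAL,
        longitude REAL
    )""")
    db.executemany(
        "INSERT INTO plate_reads VALUES (?, ?, ?, ?, ?)",
        [
            ("ABC1234", "2011-11-20T08:02:11", "cam-017", 38.905, -77.016),
            ("ABC1234", "2011-11-20T17:44:59", "cam-142", 38.921, -77.042),
        ],
    )

    # The query that turns a stolen-car tool into a travel history:
    for read_at, camera_id in db.execute(
        "SELECT read_at, camera_id FROM plate_reads "
        "WHERE plate = ? ORDER BY read_at",
        ("ABC1234",),
    ):
        print(read_at, camera_id)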

“That’s quite a large database of innocent people’s comings and goings,” said Jay Stanley, senior policy analyst for the American Civil Liberties Union’s technology and liberty program. “The government has no business collecting that kind of information on people without a warrant.”

by Allison Klein and Josh White, Washington Post | Continue reading:

A high-tech gadget that can quickly download information from a cellphone is at the center of a controversy that's pitting civil liberties advocates against state police in Michigan.

Since 2008, the ACLU of Michigan has been petitioning the Michigan State Police to turn over information about their use of so-called "data extraction devices" (or DEDs). Manufactured by Cellebrite, a mobile forensics and data services company headquartered in Israel, the devices can connect to cellphones and retrieve phone numbers, text messages, call history, photos and video, even bypassing passwords.

Acting on a "tip" that police had used a DED unlawfully, Moss said, the ACLU filed its first Freedom of Information Act (FOIA) request in 2008 to learn the policies and practices surrounding the extraction device.

The police did not offer answers. Instead, they told the ACLU it would need to pay more than $544,000 to retrieve the records and reports it had asked for. Over the past few years, Moss said, the ACLU has tried to work with the police to narrow the request and lower the cost, but with little success.

by Ki Mae Heussner, ABC News | Continue reading:
Photos: James A. Parcell for The Washington Post; Cellebrite.
 

Zhou Hao, “No.174-f”, Drypoint

via:

Libraries: Where It All Went Wrong

It was my pleasure to address the National and State Librarians of Australasia on the eve of their strategic planning meeting in Auckland at the start of November this year. I have been involved in libraries for a few years now, and am always humbled by the expertise, hard work, and dedication that librarians of all stripes have. Yet it’s no revelation that libraries aren’t the great sources of knowledge and information on the web that they were in the pre-Internet days. I wanted to push on that and challenge the National and State librarians to think better about the Internet.

I prefaced my talk by saying that none of this is original, so it shouldn’t come as a surprise. I merely wanted to bring the different strands together in a way that showed them how to think about the opportunities afforded to libraries for the digital age.  (...)

Bill Gates wrote a bestseller in 1995.  He was on a roll: Microsoft Windows had finally crushed its old foe the Macintosh computer from Apple, Microsoft was minting money hand over fist, and he was hugely respected in the industry he had helped start. He roped in other big brains from Microsoft to write a book to answer the question, “what next?”  The Road Ahead talked about the implications of everyone having a computer and how they would use the great Information Superhighway that was going to happen.

The World Wide Web appears in the index to The Road Ahead precisely four times.  Bill Gates didn’t think the Internet would be big.  The Information Superhighway of Gates’s fantasies would have more structure than the Internet and be better controlled than the Internet; in short, it would be more the sort of thing that a company like Microsoft would make.

Bill Gates and Microsoft were caught flat-footed by the take-up of the Internet. They had built an incredibly profitable and strong company which treated computers as disconnected islands: Microsoft software ran on the computers, but didn’t help connect them.  Gates and Microsoft soon realized the Internet was here to stay and rushed to fix Windows to deal with it, but they never made up for that initial wrong-footing.

At least part of the reason was that they had this fantastic cash cow in Windows, the island software.  They were victims of what Clayton Christensen calls the Innovator’s Dilemma: they couldn’t think past their own successes to build the next big thing, the thing that’d eat their lunch.  They still haven’t got there: Bing, their rival to Google, has eaten $5.5B since 2009 and it isn’t profitable yet.

I’m telling you this because libraries are like Microsoft.

At one point you had a critical role: you were one of the few places to conduct research. When academics and the public needed to do research into the documentary record, they’d come to you. As you now know, that monopoly has been broken.

The Internet, led by Google, is the start and end of most people’s research. It’s good enough to meet their needs, which is great news for the casual researcher but bad news for you.

Now they don’t think of you at all.

by Nat Torkington | Continue reading: 
Photo:  David Lat

What’s in a Name? Ask Google

It’s the rare parent, it seems, who wants a common name for a child. New parents, after all, envision future presidents, Super Bowl winners and cancer curers, not Vatican streakers or college beer-bong guzzlers.

But maybe common names are more prudent. A recent study by the online security firm AVG found that 92 percent of children under 2 in the United States have some kind of online presence, whether a tagged photo, sonogram image or Facebook page. Life, it seems, begins not at birth but with online conception. And a child’s name is the link to that permanent record.

“When you name your baby, it’s a time of dreaming,” Ms. Wattenberg said. “No one stops and thinks, ‘What if one day my child does something embarrassing and wants to hide from it?’ ”

Maybe the wisest approach in our searchable new world is to let computers do the naming.

Lindsey Pollak, a writer on the Upper West Side of Manhattan who specializes in career advice, fancied the name Chloe when she was pregnant with her daughter. Her husband, Evan Gotlib, wanted Zoe.

To settle the feud, they downloaded a 99-cent iPhone app called Kick to Pick. After typing in the two names, they held the phone to Ms. Pollak’s stomach, as the phone alternated between the two. When the fetus kicked, the phone froze on one name, like a coin toss. It came up Chloe for each of the four tries.
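[ed. For the curious, a toy Python simulation of the freeze-on-kick logic as the article describes it. The function below is an assumption for illustration, not Kick to Pick's actual code; a random event stands in for the accelerometer detecting a kick.]

    # Simulates a display alternating between two names until a "kick"
    # (here a random event, standing in for an accelerometer spike)
    # freezes whichever name is currently showing.
    import random

    def kick_to_pick(name_a: str, name_b: str, kick_chance: float = 0.05) -> str:
        names = (name_a, name_b)
        tick = 0
        while True:
            showing = names[tick % 2]          # alternate names each tick
            if random.random() < kick_chance:  # a kick is detected...
                return showing                 # ...freeze on the current name
            tick += 1

    print(kick_to_pick("Chloe", "Zoe"))  # like a coin toss, as described above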

The next thing Ms. Pollak did, of course, was to Google it. “One of the Web sites said Chloe means little green shoots, and we liked that,” Ms. Pollak said. Chloe it was. They even registered their unborn child’s first and last name as a domain name and signed her up on Tumblr, Twitter and Gmail.

The Kaslofskys wish they had had that foresight. When they Googled Kaleya in 2009, there were only a few relevant results. But since then, the parents of another child named Kaleya have started posting videos of that little girl’s adventures on YouTube, with titles like “Kaleya Makes a Snow Angel” and “Kaleya Runs From a Wave.”

Ms. Kaslofsky is miffed. “Things have changed in the last three years,” she said.

by Allen Salkin, NY Times | Continue reading:
Illustration: Allison Seiffer

Friday, November 25, 2011


Jean-François Provost, “Tricycle à Deux Roues”, ink and mixed media on paper
via:

Friday Book Club - NY Times 100 Notable Books for 2011

THE ANGEL ESMERALDA: Nine Stories. By Don DeLillo. (Scribner, $24.) DeLillo’s first collection of short fiction, compiling stories written between 1979 and 2011, serves as a liberating reminder that terror existed long before there was a war on it.
 
THE ART OF FIELDING. By Chad Harbach. (Little, Brown, $25.99.) This allusive, Franzen-like first novel, about a gifted but vulnerable baseball player, proceeds with a handsome stateliness. 
 
THE BARBARIAN NURSERIES. By Héctor Tobar. (Farrar, Straus & Giroux, $27.) A big, insightful novel about social and ethnic conflict in contemporary Los Angeles. 
 
BIG QUESTIONS. Or, Asomatognosia: Whose Hand Is It Anyway? Written and illustrated by Anders Brekhus Nilsen. (Drawn & Quarterly, cloth, $69.95; paper, $44.95.) In this capacious, metaphysically inclined graphic novel, a flock of finches act out Nilsen’s unsettling comic vision about the food chain, fate and death. 
 
THE BUDDHA IN THE ATTIC. By Julie Otsuka. (Knopf, $22.) Through a chorus of narrators, Otsuka unfurls the stories of Japanese women who came to America in the early 1900s to marry men they’d never met. 
 
CANTI. By Giacomo Leopardi. Translated by Jonathan Galassi. (Farrar, Straus & Giroux, $35.) With this English translation, Leopardi may at last become as important to American literature as Rilke or Baudelaire. 
 
THE CAT’S TABLE. By Michael Ondaatje. (Knopf, $26.) Ondaatje grants that this novel, about three daring Ceylonese schoolboys on a sea journey to England, sometimes uses the “coloring and locations of memoir.” 
 
CHANGÓ’S BEADS AND TWO-TONE SHOES. By William Kennedy. (Viking, $26.95.) In Kennedy’s most musical work of fiction, a newspaperman attains a cynical old-pro objectivity as Albany’s political machine pulls out the stops to head off a race riot in 1968. 

Continue reading:
Illustration: R.O. Blechman

Current Events: What a World

[ed.  So, Walmart actually sold these items to the woman AFTER she pepper sprayed her way in?]

Ten minutes before the Porter Ranch Walmart opened in Los Angeles, California, a mother of two used a can of pepper spray to attack other shoppers waiting in line for the early Black Friday door-buster sales, according to the Los Angeles Times. The woman was specifically looking to get the $100 discounted price on an Xbox 360, as well as a few Xbox 360 games. With both children in tow, she began spraying other customers as soon as store employees started pulling the packing plastic off the discounted items. Once the doors opened, she also used the pepper spray to gain a favorable position over other customers competing for items. A witness to the attack posted on Twitter that customers were screaming about stinging eyes.

Los Angeles firefighters and the LAPD responded to the attack quickly and treated 20 injured customers who suffered extreme swelling, coughing and redness of the face and eyes. One customer required further treatment at a local hospital. The woman fled the store soon after it opened, and the LAPD is reviewing Walmart security footage to get a clear picture of her, as well as video evidence of the attacks. A Los Angeles police lieutenant called the attack “customer-versus-customer shopping rage” and said the LAPD plans to release a photo of the woman to the press shortly. Police are also looking into her method of payment, hoping to track her down quickly through a credit card or check record.

by Mike Flacy, Digital Trends via Yahoo |  Continue reading:

You Say You Want a Revolution

[ed.  The man who is probably responsible for more revolutions than anyone else in history.]

Halfway around the world from Tahrir Square in Cairo, an aging American intellectual shuffles about his cluttered brick row house in a working-class neighborhood here. His name is Gene Sharp. Stoop-shouldered and white-haired at 83, he grows orchids, has yet to master the Internet and hardly seems like a dangerous man.

But for the world’s despots, his ideas can be fatal.

Few Americans have heard of Mr. Sharp. But for decades, his practical writings on nonviolent revolution — most notably “From Dictatorship to Democracy,” a 93-page guide to toppling autocrats, available for download in 24 languages — have inspired dissidents around the world, including in Burma, Bosnia, Estonia and Zimbabwe, and now Tunisia and Egypt.  (...)

Mr. Sharp, hard-nosed yet exceedingly shy, is careful not to take credit. He is more thinker than revolutionary, though as a young man he participated in lunch-counter sit-ins and spent nine months in a federal prison in Danbury, Conn., as a conscientious objector during the Korean War. He has had no contact with the Egyptian protesters, he said, although he recently learned that the Muslim Brotherhood had “From Dictatorship to Democracy” posted on its Web site.

While seeing the revolution that ousted Hosni Mubarak as a sign of “encouragement,” Mr. Sharp said, “The people of Egypt did that — not me.”

He has been watching events in Cairo unfold on CNN from his modest house in East Boston, which he bought in 1968 for $150 plus back taxes.

It doubles as the headquarters of the Albert Einstein Institution, an organization Mr. Sharp founded in 1983 while running seminars at Harvard and teaching political science at what is now the University of Massachusetts at Dartmouth. It consists of him; his assistant, Jamila Raqib, whose family fled Soviet oppression in Afghanistan when she was 5; a part-time office manager; and a Golden Retriever mix named Sally. Their office wall sports a bumper sticker that reads “Gotov Je!” — Serbian for “He is finished!”

In this era of Twitter revolutionaries, the Internet holds little allure for Mr. Sharp. He is not on Facebook and does not venture onto the Einstein Web site. (“I should,” he said apologetically.) If he must send e-mail, he consults a handwritten note Ms. Raqib has taped to the doorjamb near his state-of-the-art Macintosh computer in a study overflowing with books and papers. “To open a blank e-mail,” it reads, “click once on icon that says ‘new’ at top of window.”

Some people suspect Mr. Sharp of being a closet peacenik and a lefty — in the 1950s, he wrote for a publication called “Peace News” and he once worked as personal secretary to A. J. Muste, a noted labor union activist and pacifist — but he insists that he outgrew his own early pacifism and describes himself as “trans-partisan.”

Based on studies of revolutionaries like Gandhi, nonviolent uprisings, civil rights struggles, economic boycotts and the like, he has concluded that advancing freedom takes careful strategy and meticulous planning, advice that Ms. Ziada said resonated among youth leaders in Egypt. Peaceful protest is best, he says — not for any moral reason, but because violence provokes autocrats to crack down. “If you fight with violence,” Mr. Sharp said, “you are fighting with your enemy’s best weapon, and you may be a brave but dead hero.”  (...)
 
“He is generally considered the father of the whole field of the study of strategic nonviolent action,” said Stephen Zunes, an expert in that field at the University of San Francisco. “Some of these exaggerated stories of him going around the world and starting revolutions and leading mobs, what a joke. He’s much more into doing the research and the theoretical work than he is in disseminating it.”

by Sheryl Gay Stolberg, NY Times |  Continue reading:
Photo: Evan McGlinn for The New York Times

Tetsuo Aoki
via:

The Sketchbook of Susan Kare, the Artist Who Gave Computing a Human Face

Point, click.

The gestures and metaphors of icon-driven computing feel so natural and effortless to us now, it seems strange to recall navigating in the digital world any other way. Until Apple’s debut of the Macintosh in 1984, however, most of our interactions with computers looked more like this:

[Image: a command-line interface screenshot]

How did we get from there to here?

iPad photo by Ben Atkin, under Creative Commons license

The Mac wasn’t the first computer to present the user with a virtual desktop of files and folders instead of a command line and a blinking cursor. As every amateur geek historian knows, the core concepts behind the graphical user interface, or GUI (including icons, the mouse, and bitmapped graphics), made their debut in 1968 in a presentation by Stanford Research Institute’s Doug Engelbart that is now celebrated as the “mother of all demos.”

The revolutionary ideas in Engelbart’s demo were further developed at Xerox PARC, where a 24-year-old Steve Jobs took a legendary tour in 1979 that convinced him that the GUI represented the democratic future of computing. (“I thought it was the best thing I’d ever seen in my life,” he said later. “Within ten minutes, it was obvious to me that all computers would work like this someday.”) He promptly licensed the GUI technology he saw at work in a non-commercial product called the Xerox Alto for a modest amount of Apple stock, and the rest is Silicon Valley history. (...)

The challenge of designing a personal computer that “the rest of us” would not only buy, but fall crazy in love with, however, required input from the kind of people who might some day be convinced to try using a Mac. Fittingly, one of the team’s most auspicious early hires was a young artist herself: Susan Kare.

After taking painting lessons as a young girl and graduating from New York University with a Ph.D. in fine arts, Kare moved to the Bay Area, where she took a curatorial job at the Fine Arts Museums of San Francisco. But she quickly felt like she was on the wrong side of the creative equation. “I’d go talk to artists in their studios for exhibitions,” she recalls, “but I really wanted to be working in my studio.”

Eventually Kare earned a commission from an Arkansas museum to sculpt a razorback hog out of steel. That was the project she was tackling in her garage in Palo Alto when she got a call from a high-school friend named Andy Hertzfeld, who was the lead software architect for the Macintosh operating system, offering her a job.

by Steve Silberman, PLoS Blogs | Continue reading:

Ernst Ludwig Kirchner, “Nudes in a Meadow”, 1929. Oil on canvas.
via:

Competition in an Unregulated Market

How the Plummeting Price of Cocaine Fueled the Nationwide Drop in Violent Crime.

Starting in the mid-1990s, major American cities began a radical transformation. Years of high violent-crime rates, theft, robbery, and inner-city decay suddenly started to turn around. Crime rates didn't just hold steady; they began falling faster than they had risen. This trend appeared in practically every post-industrial American city simultaneously.

“The drop of crime in the 1990s affected all geographic areas and demographic groups,” Steven D. Levitt wrote in his landmark paper on the subject, Understanding Why Crime Fell in the 1990s, and elucidated further in the best-selling book Freakonomics. “It was so unanticipated that it was widely dismissed as temporary or illusory long after it had begun.” He went on to tie the drop to the legalization of abortion 20 years earlier, dismissing police tactics as a cause because they failed to explain the universality and unexpectedness of the change. Alfred Blumstein's The Crime Drop in America pinned the crime wave solely on the crack epidemic but gave the credit for its disappearance to those self-same policing strategies.

Plenty of other theories have been offered to account for the double-digit decrease in violence, from the advent of "broken windows" policies, three strikes laws, changing demographics, gun control laws, and the increasing prevalence of cellphones to an upturn in the economy and cultural shifts in American society. Some of these theories have been disproven outright while others require a healthy dose of assumption to turn correlation into causation. But much less attention has been paid to another likely culprit: the collapse of the U.S. cocaine market.
•       •       •       •       •
Cocaine was the driving force behind the majority of drug-related violence throughout the 1980s and into the early 1990s. It was the main target of the federal War on Drugs and the highest-profit drug trade overall. In 1988, the American cocaine market was valued at almost $140 billion, over 2 percent of U.S. GDP. The violence that surrounded its distribution and sale pushed the murder rate to its highest point in America's history (between 8 and 10 per 100,000 residents from 1981 to 1991), turned economically impoverished cities like Baltimore, Detroit, Trenton and Gary, Indiana, into international murder capitals, and made America the most violent industrialized nation in the world.
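[ed. A quick sanity check on that GDP claim. The 1988 nominal U.S. GDP figure used below (roughly $5.2 trillion, per Bureau of Economic Analysis data) is supplied by this editor, not by the article.]

    # Does "$140 billion" really amount to "over 2 percent" of 1988 GDP?
    cocaine_market = 140e9   # the article's figure
    gdp_1988 = 5.2e12        # approximate 1988 U.S. nominal GDP (editor's input)
    print(f"{cocaine_market / gdp_1988:.1%}")  # ~2.7%, so yes, over 2 percent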

Then in 1994, the crime rate dropped off a cliff. The number of homicides plummeted, falling almost 50 percent in less than ten years. The same went for every garden variety of violent crime, on down to petty theft. The same year as the sharp decline in crime, cocaine prices hit an all-time low. According to the DEA's System to Retrieve Information on Drug Evidence (STRIDE) data, the price per gram of cocaine bottomed out in 1994 at around $147 (calculated in 2003 dollars), the lowest it had been since statistics became available.

Something was wrong. If anything, cocaine prices should have been skyrocketing. One of the DEA's stated objectives for the War on Drugs was to make drugs more expensive and therefore harder to access for the individual user. To get there, the DEA pursued a number of strategies: large drug busts, heavier penalties on importers and producers, and limiting access to the materials used in drug production. Even while many of those tactics produced big successes, cocaine prices still went down, not up, and crime plummeted right alongside.

by Llewellyn Hinkes-Jones, The Atlantic |  Continue reading:

The Cloud Appreciation Society

I first learned about cloud lovers in a police report concerning a man who received a blowjob from a young woman and went mad. The man—let's call him Carl (police reports have the names of suspects and victims redacted)—was in his 40s, and the woman, let's call her Lisa, was almost 18. The two first met in the fall of 2003 at a local TV station that was holding a contest to find the best video footage of Northwest clouds.

According to the report, which was lost when I cleaned my messy desk in 2005 (I'm recalling all of this from an imperfect memory), Carl, who was married and well-to-do, fell in love with Lisa, whose family was not so well-off, upon seeing her for the first time. He had a videocassette in his hand; she had a videocassette in her hand. He showed his tape to the station's weatherman (sun, sky, clouds). She showed hers (clouds, sky, sun). During the contest, his eyes could not escape her beauty. After the contest, the impression she made on his mind intensified. That bewitching coin in the short story by Jorge Luis Borges, "The Zahir," comes to mind. If a person sees this coin only once, the memory of its image begins to more and more dominate his/her thoughts and dreams. Soon the coin becomes the mind's sole reality. Lisa's face was Carl's Zahir.  (...)

"We believe that clouds are unjustly maligned and that life would be immeasurably poorer without them. We think that they are Nature's poetry, and the most egalitarian of her displays, since everyone can have a fantastic view of them. We pledge to fight 'blue-sky thinking' wherever we find it. Life would be dull if we had to look up at cloudless monotony day after day." This is the opening of the Cloud Appreciation Society's manifesto. The organization emerged unexpectedly in 2004 (the year before I lost the remarkable police report) from a lecture delivered by Gavin Pretor-Pinney at a literary festival in Cornwall, England, entitled "The Inaugural Lecture of the Cloud Appreciation Society."

"Lots of people showed up for the talk," explains Pretor-Pinney in an e-mail, "and came up to me afterward to ask how they could join my society. So I put up a website and issued anyone who wanted to join with a badge and a certificate with their name on it... The membership just spread in the viral way that things can on the internet. We now have more than 27,500 members in 94 countries around the world." (Pretor-Pinney also published a book, The Cloud Collector's Handbook, that, when closed, fits snugly in your pocket and, when open, provides information for identifying and scoring clouds.)

The Cloud Appreciation Society website (www.cloudappreciationsociety.org) has several features, the best of which is a gallery of cloud photographs by members and nonmembers, professionals and amateurs, the young and old. Indeed, if Lisa and Carl are still lovers of clouds (she by now is in her late 20s and he in his late 40s), they are probably familiar with the pictures on this website. Some clouds are caught at dusk, others at dawn, others in the dead middle of the day. Some are reflected by a glassy sea, others cling to the tops of green trees, others rise over the glittering ice of Antarctica. One photo captures a god-mad cloud that threatens to smite some rural road in a god-fearing country. Another shows dusky clouds that are massively stacked in the sky above Singapore's port. One stunning photo, which was taken by Nick Lippert (a resident of Tumwater, Washington) at 7:40 a.m. on October 28, 2011, transports us to the place we expect to see when it is time to pay for sins: a hellish Mount Rainier casting a demon shadow on a soaring continent of blood-red clouds. (...)

The Cloud Appreciation Society's website also has poems ("Cloud Verse"), love letters to clouds, and short essays. Much of it is bad, and much of it is wonderfully bizarre. For example, one essay, "The Advantages of Watching the Cloud Channel," which was composed by one Andrea de Majewski, a Seattleite who currently lives in the Big Apple, loftily compares watching clouds to watching TV. In a million years of dreaming and thinking, I would never have seen this connection, never found this invisible thread that links the sky to the TV screen.

"The cloud channel has several advantages over regular TV," writes Majewski. "First off, you don't have to choose between rabbit ears or taking out a mortgage to fund a dish or cable package or whatever. It's free, and whether it's on or not is completely beyond your control. Here in Seattle, it's broadcast more often than many places. Move here if you want to watch a lot. If it's not on, you must do other things. The laundry, grocery shop, whatever. But if it's on, you can postpone chores and lie down and watch it.

"It's very relaxing. One reason for this is that there are no ads. Not even the things on public television that are just like ads except shorter and more boring. No one tries to sell you anything at all on the cloud channel." The impression one gets from the Cloud Appreciation Society's website is that cloud collectors are very dreamy people, utopians to the core, and extremely sensitive to the transience of life.

by Charles Mudede, The Stranger |  Continue reading:
Photo: OeilDeNuit

Askers and Guessers

The advice of etiquette experts on dealing with unwanted invitations, or overly demanding requests for favours, has always been the same: just say no. That may have been a useless mantra in the war on drugs, but in the war on relatives who want to stay for a fortnight, or colleagues trying to get you to do their work, the manners guru Emily Post's formulation – "I'm afraid that won't be possible" – remains the gold standard. Excuses merely invite negotiation. The comic retort has its place (Peter Cook: "Oh dear, I find I'm watching television that night"), and I'm fond of the tautological non-explanation ("I can't, because I'm unable to"). But these are variations on a theme: the best way to say no is to say no. Then shut up.  (...)

There are certainly profound issues here, of self-esteem, guilt, etcetera. But it's also worth considering whether part of the problem doesn't originate in a simple misunderstanding between two types of people: Askers and Guessers.

This terminology comes from a brilliant web posting by Andrea Donderi that’s achieved minor cult status online. We are raised, the theory runs, in one of two cultures. In Ask culture, people grow up believing they can ask for anything – a favour, a pay rise – fully realising the answer may be no. In Guess culture, by contrast, you avoid "putting a request into words unless you're pretty sure the answer will be yes… A key skill is putting out delicate feelers. If you do this with enough subtlety, you won't have to make the request directly; you'll get an offer. Even then, the offer may be genuine or pro forma; it takes yet more skill and delicacy to discern whether you should accept."

Neither's "wrong", but when an Asker meets a Guesser, unpleasantness results. An Asker won't think it's rude to request two weeks in your spare room, but a Guess culture person will hear it as presumptuous and resent the agony involved in saying no. Your boss, asking for a project to be finished early, may be an overdemanding boor – or just an Asker, who's assuming you might decline. If you're a Guesser, you'll hear it as an expectation. This is a spectrum, not a dichotomy, and it explains cross-cultural awkwardnesses, too: Brits and Americans get discombobulated doing business in Japan, because it's a Guess culture, yet experience Russians as rude, because they're diehard Askers.

by Oliver Burkeman, Guardian |  Continue reading:
image via:

Monique
via: