Tuesday, October 22, 2013

Healthcare.gov: It Could Be Worse

On October 1st, the first day of the government shutdown, the U.S. Centers for Medicare & Medicaid Services launched Healthcare.gov, a four-hundred-million-dollar online marketplace designed to help Americans research and purchase health insurance. In its first days, only a small fraction of users could create an account or log in. The problems were initially attributed to high demand. But as days turned into weeks, Healthcare.gov’s troubles only seemed to multiply. Reports appeared of applications freezing half-completed and of the system “putting users in inescapable loops, and miscalculating healthcare subsidies.” Politico reported that “Web brokers … have been unable to connect to the federal system.” Healthcare.gov is the public face of the Obama Administration’s signature policy achievement, and its launch has been widely derided as a disaster. But it could have been worse.

On September 11, 2001, the F.B.I. was still using a computer system that couldn’t store or display pictures; entering data was time-consuming and awkward, and retrieving it even more so. A 9/11 Commission staff report concluded that “the FBI’s primary information management system, designed using 1980s technology already obsolete when installed in 1995, limited the Bureau’s ability to share its information internally and externally.” But an overhaul of that system had already begun in the months leading up to 9/11. In June, 2001, the F.B.I. awarded the contractor Science Applications International Corp. (S.A.I.C.) a fourteen-million-dollar contract to upgrade its computer systems. The project was called Virtual Case File, or V.C.F., and it would ultimately cost over six hundred million dollars before finally being abandoned, in early 2005, unfinished and never deployed. V.C.F. was then replaced with a project called Sentinel, expected to launch in 2009, which was “designed to be everything V.C.F. was not, with specific requirements, regular milestones and aggressive oversight,” according to F.B.I. officials who spoke to the Washington Post in 2006. But by 2010, Sentinel was also being described as “troubled,” and only two out of a planned four phases had been completed. Sentinel was finally deployed on July 1, 2012, after the F.B.I. took over the project from the contractor Lockheed Martin in 2010, bringing it in-house for completion—at an ultimate cost of at least four hundred and fifty-one million dollars. In the end, the upgrade took the F.B.I. more than a decade and over a billion dollars.

Healthcare.gov is not so much a Web site as an interface for accessing a collection of databases and information systems. Behind the nicely designed Web forms are systems to create accounts, manage user logins, and collect insurance-application data. There’s a part that determines subsidy eligibility, a part that sends applications to the right insurance company, and other parts that glue these things together. Picture the dashboard of your car, which has a few knobs and buttons, some switches, and a big wheel—simple controls for a lot of complex machinery under the hood. All of these systems, whether in your car or on Healthcare.gov, have to communicate the right information at the right time for any of it to work properly. In the case of Healthcare.gov, we don’t know what precisely has gone wrong, because the system isn’t open-source—meaning the code used to build it isn’t available for anyone to see—and nobody involved has released technical information. But the multiple databases and subsystems are probably distributed all over the country, written in a variety of computer languages, and handle data in very different ways. Some are brand new, others are old.
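
To make that description concrete, here is a minimal sketch, in Python, of the kind of glue layer the article is talking about: a front end that can only finish an application if every back-end subsystem answers in turn. The service names and functions below are invented for illustration; nothing here reflects Healthcare.gov's actual code or architecture.

```python
import random

# Hypothetical stand-ins for the subsystems described above: account
# creation, a subsidy-eligibility check, and the hand-off to an insurer.
# Each can stall, the way an overloaded real service might.

def create_account(user):
    if random.random() < 0.2:  # simulate an overloaded account service
        raise TimeoutError("account service timed out")
    return {"user": user, "account_id": 42}

def check_subsidy_eligibility(account):
    if random.random() < 0.2:
        raise TimeoutError("eligibility service timed out")
    return {**account, "eligible": True}

def send_to_insurer(application):
    if random.random() < 0.2:
        raise TimeoutError("insurer gateway timed out")
    return "application submitted"

def apply_for_coverage(user):
    # The "glue": each step depends on the one before it, so a failure
    # anywhere leaves the user stuck partway through the application.
    try:
        account = create_account(user)
        application = check_subsidy_eligibility(account)
        return send_to_insurer(application)
    except TimeoutError as err:
        return f"application stalled: {err}"

if __name__ == "__main__":
    print(apply_for_coverage("example user"))
```

Even in this toy version, three subsystems that each fail one time in five leave roughly half of all applications unfinished, which is the compounding effect the dashboard analogy is pointing at.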

For large software projects, failure is generally determined early in the process, because failures almost exclusively have to do with planning: the failure to create a workable plan, to stick to it, or both. Healthcare.gov reportedly involved over fifty-five contractors, managed by a human-services agency that lacked deep experience in software engineering or project management. The final product had to be powerful enough to navigate any American through a complex array of different insurance offerings, secure enough to hold sensitive private data, and robust enough to withstand peak traffic in the hundreds of thousands, if not millions, of concurrent users. It also had to be simple enough that anyone who could open a Web browser could use it. In complexity, this is a project on par with the F.B.I.’s V.C.F. or Sentinel. The number and variety of systems to be connected may not be quite as large, but the interface had to be usable by anyone, without special training. And, unlike V.C.F., Healthcare.gov was given only twenty-two months from contract award to launch—less than two years for a project similar to one that took the F.B.I. more than ten years and over twice the budget.

by Rusty Foster, New Yorker |  Read more:
Image: Michael Kupperman

Reflections on a Paris Left Behind


Even Hemingway struggled with this city, working on a memoir of his poor early days, “A Moveable Feast,” off and on for years, before it was finally published after his death. Christopher Hitchens once called it “an ur-text of the American enthrallment with Paris,” identifying an unthinking nostalgia “as we contemplate a Left Bank that has since become a banal tourist enclave in a Paris where the tough and plebeian districts are gone, to be replaced by seething Muslim banlieues all around the periphery.”

Sometimes, reading about Paris in newspapers, magazines and on Web sites devoted to tourism, I feel the clichés piling high enough to touch the Eiffel Tower — or even the still-hideous Tour Montparnasse, which for decades has given skyscrapers a bad name here.

All the clichés are still there, if that’s as far as you’re willing to look, from the supposedly haughty waiters to the baguettes and croissants and the nighttime lights on the Notre-Dame de Paris, shimmering with a faith now largely abandoned. (...)

There are parts of Paris that are “cool,” to be sure, but not the way London is, or Berlin, or even Amsterdam. Paris is a city of the well-to-do, mostly white, and their careful pleasures: museums, restaurants, opera, ballet and bicycle lanes. Bertrand Delanoë, the Paris mayor since 2001, is a Socialist Michael Bloomberg — into bobo virtues like health and the environment and very much down on cars.

Adam Gopnik, a New Yorker writer, finds “the Parisian achievement” to have created, in the 19th century, two concepts of society: “the Haussmannian idea of bourgeois order and comfort, and the avant-garde of ‘la vie de bohème.’ ” While these two societies seemed to be at war, he suggests, in fact they were “deeply dependent on each other.”

Today, however, the balance is gone, and Paris is too ordered, too antiseptic and too tightly policed to have much of a louche life beyond bourgeois adulteries. In that sense, something important has been lost. (...)

Paris is the most beautiful city in the world; to me, only Prague comes close. But Paris is also filthy. While tourists regard Paris with awe and respect, many Parisians treat it with studied indifference, a high virtue here, or with contempt.

It is the Parisians who leave dog excrement on the sidewalks, who ignore the trash containers. With smoking now supposedly banned inside restaurants, the terraces of cafes become more crowded. But the streets have become ashtrays, and the rubbish defeats the traditional sluicing of the gutters with city water by men with long green nylon brushes. Large parts of Paris remind me of how, in the never quite-so-bad old days, Times Square used to look at 8 a.m. on a Sunday.

France still gets more foreign tourists than any other country: 83 million in 2012, and 83 percent of them from Europe, compared with only 29.3 million who visited Britain. Paris alone gets 33 million tourists a year, half of them foreigners, many in search of that mythical place where Charles Aznavour meets Catherine Deneuve meets Zidane meets Dior, all drinking Champagne and nibbling foie gras, truffles, oysters and langouste.

While tourists to Israel sometimes suffer from the Jerusalem syndrome, imagining themselves in direct contact with God, some Japanese tourists suffer from what is called the “Paris Syndrome,” distraught at the difference between what they imagine and what they find. Of course, as Walt Whitman wrote about himself, Paris contains multitudes, and most visitors go away having found just enough of what they craved to develop a lifelong yearning to return.

by Steven Erlanger, NY Times |  Read more:
Image: Kosuke Okahara

New Technique Holds Promise for Hair Growth

Scientists have found a new way to grow hair, one that they say may lead to better treatments for baldness.

So far, the technique has been tested only in mice, but it has managed to grow hairs on human skin grafted onto the animals. If the research pans out, the scientists say, it could produce a treatment for hair loss that would be more effective and useful to more people than current remedies like drugs or hair transplants.

Present methods are not much help to women, but a treatment based on the new technique could be, the researchers reported Monday in Proceedings of the National Academy of Sciences.

Currently, transplants move hair follicles from the back of the head to the front, relocating hair but not increasing the amount. The procedure can take eight hours, and leave a large scar on the back of the head. The new technique would remove a smaller patch of cells involved in hair formation from the scalp, culture them in the laboratory to increase their numbers, and then inject them back into the person’s head to fill in bald or thinning spots. Instead of just shifting hair from one spot to another, the new approach would actually add hair. (...)

In the current study, Dr. Christiano worked with researchers from Durham University in Britain. They focused on dermal papillae, groups of cells at the base of hair follicles that give rise to the follicles. Researchers have known for more than 40 years that papilla cells from rodents could be transplanted and would lead to new hair growth. The cells from the papillae have the ability to reprogram the surrounding skin cells to form hair follicles.

But human papilla cells, grown in culture, mysteriously lose the ability to make hair follicles form. A breakthrough came when the researchers realized they might be growing the cells the wrong way.

One of Dr. Christiano’s partners from Durham University, Dr. Colin Jahoda, noticed that the rodent papilla cells formed clumps in culture, but the human cells did not. Maybe the clumps were important, he reasoned. So, instead of trying to grow the cells the usual way, in a flat, one-cell layer on a petri dish, he turned to an older method called the “hanging drop culture.”

That method involves putting about 3,000 papilla cells — the number in a typical papilla — into a drop of culture medium on the lid of a dish, and then flipping the lid over so that the drops are hanging upside down.

“The droplets aren’t so heavy that they drip off,” Dr. Christiano said. “The force of gravity just takes the 3,000 cells and draws them into an aggregate at the bottom of the drop.”

The technique made all the difference. The cells seem to need to touch one another in three dimensions rather than two to send and receive the signals they need to induce hair formation.

by Denise Grady, NY Times |  Read more:
Image: Ruth Fremson

Monday, October 21, 2013


Andy Warhol, Kimiko
via:

Why Have Young People in Japan Stopped Having Sex?

Ai Aoyama is a sex and relationship counsellor who works out of her narrow three-storey home on a Tokyo back street. Her first name means "love" in Japanese, and is a keepsake from her earlier days as a professional dominatrix. Back then, about 15 years ago, she was Queen Ai, or Queen Love, and she did "all the usual things" like tying people up and dripping hot wax on their nipples. Her work today, she says, is far more challenging. Aoyama, 52, is trying to cure what Japan's media calls sekkusu shinai shokogun, or "celibacy syndrome".

Japan's under-40s appear to be losing interest in conventional relationships. Millions aren't even dating, and increasing numbers can't be bothered with sex. For their government, "celibacy syndrome" is part of a looming national catastrophe. Japan already has one of the world's lowest birth rates. Its population of 126 million, which has been shrinking for the past decade, is projected to plunge a further one-third by 2060. Aoyama believes the country is experiencing "a flight from human intimacy" – and it's partly the government's fault. (...)

The number of single people has reached a record high. A survey in 2011 found that 61% of unmarried men and 49% of women aged 18-34 were not in any kind of romantic relationship, a rise of almost 10% from five years earlier. Another study found that a third of people under 30 had never dated at all. (There are no figures for same-sex relationships.) Although there has long been a pragmatic separation of love and sex in Japan – a country mostly free of religious morals – sex fares no better. A survey earlier this year by the Japan Family Planning Association (JFPA) found that 45% of women aged 16-24 "were not interested in or despised sexual contact". More than a quarter of men felt the same way.

Many people who seek her out, says Aoyama, are deeply confused. "Some want a partner, some prefer being single, but few relate to normal love and marriage." However, the pressure to conform to Japan's anachronistic family model of salaryman husband and stay-at-home wife remains. "People don't know where to turn. They're coming to me because they think that, by wanting something different, there's something wrong with them." (...)

Marriage has become a minefield of unattractive choices. Japanese men have become less career-driven, and less solvent, as lifetime job security has waned. Japanese women have become more independent and ambitious. Yet conservative attitudes in the home and workplace persist. Japan's punishing corporate world makes it almost impossible for women to combine a career and family, while children are unaffordable unless both parents work. Cohabiting or unmarried parenthood is still unusual, dogged by bureaucratic disapproval.

Aoyama says the sexes, especially in Japan's giant cities, are "spiralling away from each other". Lacking long-term shared goals, many are turning to what she terms "Pot Noodle love" – easy or instant gratification, in the form of casual sex, short-term trysts and the usual technological suspects: online porn, virtual-reality "girlfriends", anime cartoons. Or else they're opting out altogether and replacing love and sex with other urban pastimes. (...)

Aversion to marriage and intimacy in modern life is not unique to Japan. Nor is growing preoccupation with digital technology. But what endless Japanese committees have failed to grasp when they stew over the country's procreation-shy youth is that, thanks to official shortsightedness, the decision to stay single often makes perfect sense. This is true for both sexes, but it's especially true for women. "Marriage is a woman's grave," goes an old Japanese saying that refers to wives being ignored in favour of mistresses. For Japanese women today, marriage is the grave of their hard-won careers.

by Abigail Haworth, Guardian |  Read more:
Image: Eric Rechsteiner

Free Thinkers

José Urbina López Primary School sits next to a dump just across the US border in Mexico. The school serves residents of Matamoros, a dusty, sunbaked city of 489,000 that is a flash point in the war on drugs. There are regular shoot-outs, and it’s not uncommon for locals to find bodies scattered in the street in the morning. To get to the school, students walk along a white dirt road that parallels a fetid canal. On a recent morning there was a 1940s-era tractor, a decaying boat in a ditch, and a herd of goats nibbling gray strands of grass. A cinder-block barrier separates the school from a wasteland—the far end of which is a mound of trash that grew so big, it was finally closed down. On most days, a rotten smell drifts through the cement-walled classrooms. Some people here call the school un lugar de castigo—“a place of punishment.”

For 12-year-old Paloma Noyola Bueno, it was a bright spot. More than 25 years ago, her family moved to the border from central Mexico in search of a better life. Instead, they got stuck living beside the dump. Her father spent all day scavenging for scrap, digging for pieces of aluminum, glass, and plastic in the muck. Recently, he had developed nosebleeds, but he didn’t want Paloma to worry. She was his little angel—the youngest of eight children.

After school, Paloma would come home and sit with her father in the main room of their cement-and-wood home. Her father was a weather-beaten, gaunt man who always wore a cowboy hat. Paloma would recite the day’s lessons for him in her crisp uniform—gray polo, blue-and-white skirt—and try to cheer him up. She had long black hair, a high forehead, and a thoughtful, measured way of talking. School had never been challenging for her. She sat in rows with the other students while teachers told the kids what they needed to know. It wasn’t hard to repeat it back, and she got good grades without thinking too much. As she headed into fifth grade, she assumed she was in for more of the same—lectures, memorization, and busy work.

Sergio Juárez Correa was used to teaching that kind of class. For five years, he had stood in front of students and worked his way through the government-mandated curriculum. It was mind-numbingly boring for him and the students, and he’d come to the conclusion that it was a waste of time. Test scores were poor, and even the students who did well weren’t truly engaged. Something had to change.

He too had grown up beside a garbage dump in Matamoros, and he had become a teacher to help kids learn enough to make something more of their lives. So in 2011—when Paloma entered his class—Juárez Correa decided to start experimenting. He began reading books and searching for ideas online. Soon he stumbled on a video describing the work of Sugata Mitra, a professor of educational technology at Newcastle University in the UK. In the late 1990s and throughout the 2000s, Mitra conducted experiments in which he gave children in India access to computers. Without any instruction, they were able to teach themselves a surprising variety of things, from DNA replication to English.

Juárez Correa didn’t know it yet, but he had happened on an emerging educational philosophy, one that applies the logic of the digital age to the classroom. That logic is inexorable: Access to a world of infinite information has changed how we communicate, process information, and think. Decentralized systems have proven to be more productive and agile than rigid, top-down ones. Innovation, creativity, and independent thinking are increasingly crucial to the global economy.

And yet the dominant model of public education is still fundamentally rooted in the industrial revolution that spawned it, when workplaces valued punctuality, regularity, attention, and silence above all else. (In 1899, William T. Harris, the US commissioner of education, celebrated the fact that US schools had developed the “appearance of a machine,” one that teaches the student “to behave in an orderly manner, to stay in his own place, and not get in the way of others.”) We don’t openly profess those values nowadays, but our educational system—which routinely tests kids on their ability to recall information and demonstrate mastery of a narrow set of skills—doubles down on the view that students are material to be processed, programmed, and quality-tested. School administrators prepare curriculum standards and “pacing guides” that tell teachers what to teach each day. Legions of managers supervise everything that happens in the classroom; in 2010 only 50 percent of public school staff members in the US were teachers.

The results speak for themselves: Hundreds of thousands of kids drop out of public high school every year. Of those who do graduate from high school, almost a third are “not prepared academically for first-year college courses,” according to a 2013 report from the testing service ACT. The World Economic Forum ranks the US just 49th out of 148 developed and developing nations in quality of math and science instruction. “The fundamental basis of the system is fatally flawed,” says Linda Darling-Hammond, a professor of education at Stanford and founding director of the National Commission on Teaching and America’s Future. “In 1970 the top three skills required by the Fortune 500 were the three Rs: reading, writing, and arithmetic. In 1999 the top three skills in demand were teamwork, problem-solving, and interpersonal skills. We need schools that are developing these skills.”

That’s why a new breed of educators, inspired by everything from the Internet to evolutionary psychology, neuroscience, and AI, are inventing radical new ways for children to learn, grow, and thrive. To them, knowledge isn’t a commodity that’s delivered from teacher to student but something that emerges from the students’ own curiosity-fueled exploration. Teachers provide prompts, not answers, and then they step aside so students can teach themselves and one another. They are creating ways for children to discover their passion—and uncovering a generation of geniuses in the process.

by Joshua Davis, Wired |  Read more:
Image: Peter Yang

Without Copyrights: Piracy, Publishing, and the Public Domain. What Exactly is "Piracy" in the Digital Age?

“PIRACY,” the newly created National Intellectual Property Rights Protection Coordination Center (IPR Center) informs DVD viewers, “is not a victimless crime.” Setting aside the fact that the IPR Center and its partners in the FBI and Department of Homeland Security target this message at precisely the wrong audience — those who’ve chosen to purchase or rent a DVD — the campaign raises a couple of questions. Is this “piracy” actually a “crime”? And more importantly, what exactly is “piracy”?

While content-industry trade groups like the Recording Industry Association of America (RIAA), Motion Picture Association of America (MPAA), and Association of American Publishers (AAP) would doubtless like to take credit for popularizing the term to mean “using creative products without the permission of the creator or rights holder,” “piracy” has meant that for centuries, as Robert Spoo points out in his new book Without Copyrights: Piracy, Publishing, and the Public Domain (Oxford). But it’s never been so simple, particularly in the United States, long a holdout from international copyright norms. “Piracy” is always a term of rhetoric, suggesting a legal force that it frequently does not have; the word was and is a tool to sway the public and lawmakers. And even as their copyright protections were dramatically expanded in the late 20th century, rights holders sought to broaden the definition of “piracy” and concomitantly shrink the public domain, that ocean of content free for all of us to use.

In Without Copyrights, Spoo provides a deeply researched case study of the complicated American copyright situation surrounding the great literary landmark of the 20th century, James Joyce’s 1922 novel Ulysses. He shows that lax and fuzzy copyright laws in the US created a large and fertile public domain that infuriated writers, benefited readers, and provided publishers an opportunity for informal self-governance. But most importantly for the current American debate about intellectual property, Spoo makes clear that “piracy” has never been a clear-cut concept. Rights holders like to define “piracy” as any act of which they disapprove, even when — as with unauthorized publication of Ulysses in the US, or sampling of funk records in 1980s rap recordings, or uploading clips from TV awards shows to YouTube — those acts are expressly or plausibly legal. In part by using loaded terms like “piracy” to influence legislators and law enforcement agencies, rights holders have tried, and recently succeeded, in expanding the legal meaning of those terms and contracting the cultural commons.

The context Spoo ably recreates, though, is the legal environment governing American publishing from the early 19th century through the post-World War II period. In the 19th century, the so-called “reprint industry,” which mined previously published books, largely British, dominated American publishing. And while reprinters bore most of the fixed costs facing any publishing concern (labor, materials, advertising, distribution), they had one great competitive advantage: they didn’t have to pay their authors. Until 1891, US law extended copyright protection only to works by American citizens, so these reprinters made a business model out of selling British books, generally without ever contacting (much less entering into an agreement with) their authors. It’s hard to think of a more obvious example of “piracy” than this, and authors from Dickens to Wilde fumed about their vast lost revenue. A familiar anecdote describes Dickens fans, desperate to find out whether Little Nell was dead, storming the New York wharves as ships laden with the latest issue of Master Humphrey’s Clock docked. Some of those impatient fans, though, were probably publishers’ agents, frantic to grab their copies, get back to their presses, and be the first ones to market with a “pirated,” but entirely legal, American edition of the novel.

Frustrating as it was to aggrieved British authors, the law had some justification. The US was a large but largely under-booked nation in the early 1800s. In keeping with the spirit of the US Constitution’s Copyright Clause, which emphasizes that the real goal of copyright is not first and foremost the protection of an author’s rights but the promotion “of Science and useful Arts,” the law subsidized the production and dissemination of books. A lot of books. A lot of cheap books that would, Congress hoped, spread across (and educate) our widely dispersed and unschooled nation. And while the 1790 Copyright Act assured American citizens of copyright protection, ironically it did little to cultivate a native literary culture: why sign up an American author and pay royalties when one could print a guaranteed seller like Tennyson or George Eliot instead, and pocket the difference? As a result, British literature dominated American reading through the 19th century (with notable exceptions such as Uncle Tom’s Cabin, which was, in a neat turnabout, widely “pirated” in Britain).

If anyone could publish any British author, how, then, did the American publishing industry not consume itself through self-destructive cost-cutting? A professor at the University of Tulsa College of Law, Spoo is sensitive to the important distinctions between common law, legislated law, and informal community norms that carry the force of law, and thus identifies “trade courtesy” as the mechanism that saved publishing houses from bankrupting themselves through competitive discounting. These “pirate” publishers behaved more like a genteel cartel than like bootlegging gangsters, Spoo makes clear. A publisher would make it known among the community of reprinters that he intended to publish a given author or a book. Other publishers, parties to this informal gentlemen’s agreement, respected that publisher’s claim to that title, and renegades were punished through public shaming (manifested in advertisements that questioned the quality or authenticity of their texts) or, in the cases of particularly obstinate transgressors, commercial retaliation. Like Wal-Mart meeting Main Street, colluding reprinters would print their own editions of a violator’s books, pricing them ruinously low or even at a loss in pursuit of the greater good of the stability of the industry. At this time, in fact, while British authors referred to the entire American industry as “pirates,” publishers used the word internally to describe those members of their community who deviated from norms of trade courtesy.

by Greg Barnhisel, LA Review of Books |  Read more:
Image: Oxford University Press

Shovels & Rope

Sunday, October 20, 2013

The Waterboys

Dog Story

A year ago, my wife and I bought a dog for our ten-year-old daughter, Olivia. We had tried to fob her off with fish, which died, and with a singing blue parakeet, which she named Skyler, but a Havanese puppy was what she wanted, and all she wanted. With the diligence of a renegade candidate pushing for a political post, she set about organizing a campaign: quietly mustering pro-dog friends as a pressure group; introducing persuasive literature (John Grogan’s “Marley & Me”); demonstrating reliability with bird care.

I was so ignorant about dogs that I thought what she wanted must be a Javanese, a little Indonesian dog, not a Havanese, named for the city in Cuba. When we discovered, with a pang, the long Google histories that she left on my wife’s computer—havanese puppies/havanese care/how to find a havanese/havanese, convincing your parints—I assumed she was misspelling the name. But in fact it was a Havanese she wanted, a small, sturdy breed that, in the past decade, has become a mainstay of New York apartment life. (It was recognized as a breed by the American Kennel Club only in the mid-nineties.) Shrewd enough to know that she would never get us out of the city to an approved breeder, she quietly decided that she could live with a Manhattan pet-store “puppy mill” dog if she could check its eyes for signs of illness and its temperament for symptoms of sweetness. Finally, she backed us into a nice pet store on Lexington Avenue and showed us a tiny bundle of caramel-colored fur with a comical black mask. “That’s my dog,” she said simply.

My wife and I looked at each other with a wild surmise: the moment parents become parints, creatures beyond convincing who exist to be convinced. When it came to dogs, we shared a distaste that touched the fringe of disgust and flirted with the edge of phobia. I was bitten by a nasty German-shepherd guard dog when I was about eight—not a terrible bite but traumatic all the same—and it led me ever after to cross streets and jump nervously at the sight of any of its kind. My wife’s objections were narrowly aesthetic: the smells, the slobber, the shit. We both disliked dog owners in their dog-owning character: the empty laughter as the dog jumped up on you; the relentless apologies for the dog’s bad behavior, along with the smiling assurance that it was all actually rather cute. Though I could read, and even blurb, friends’ books on dogs, I felt about them as if the same friends had written books on polar exploration: I could grasp it as a subject worthy of extended poetic description, but it was not a thing I had any plans to pursue myself. “Dogs are failed humans,” a witty friend said, and I agreed.

We were, however, doomed, and knew it. The constitution of parents and children may, like the British one, be unwritten, but, as the Brits point out, that doesn’t make it less enforceable or authoritative. The unwritten compact that governs family life says somewhere that children who have waited long enough for a dog and want one badly enough have a right to have one. I felt as the Queen must at meeting an unpleasant Socialist Prime Minister: it isn’t what you wanted, but it’s your constitutional duty to welcome, and pretend.

The pet-store people packed up the dog, a female, in a little crate and Olivia excitedly considered names. Willow? Daisy? Or maybe Honey? “Why not call her Butterscotch?” I suggested, prompted by a dim memory of one of those Dan Jenkins football novels from the seventies, where the running-back hero always uses that word when referring to the hair color of his leggy Texas girlfriends. Olivia nodded violently. Yes! That was her name. Butterscotch.

We took her home and put her in the back storage room to sleep. Tiny thing, we thought. Enormous eyes. My wife and I were terrified that it would be a repeat of the first year with a baby, up all night. But she was good. She slept right through the first night, and all subsequent nights, waiting in the morning for you past the point that a dog could decently be expected to wait, greeting you with a worried look, then racing across the apartment to her “papers”—the pads that you put out for a dog to pee and shit on. Her front legs were shorter than her rear ones, putting a distinctive hop in her stride. (“Breed trait,” Olivia said, knowingly.)

All the creature wanted was to please. Unlike a child, who pleases in spite of herself, Butterscotch wanted to know what she could do to make you happy, if only you kept her fed and let her play. She had none of the imperiousness of a human infant. A child starts walking away as soon as she starts to walk—on the way out, from the very first day. What makes kids so lovable is the tension between their helplessness and their drive to deny it. Butterscotch, though, was a born courtesan. She learned the tricks Olivia taught her with startling ease: sitting and rolling over and lying down and standing and shaking hands (or paws) and jumping over stacks of unsold books. The terms of the tricks were apparent: she did them for treats. But, if it was a basic bargain, she employed it with an avidity that made it the most touching thing I have seen. When a plate of steak appeared at the end of dinner, she would race through her repertory of stunts and then offer a paw to shake. Just tell me what you want, and I’ll do it!

She was a bit like one of Al Capp’s Shmoos, in “Li’l Abner,” designed to please people at any cost. (People who don’t like Havanese find them too eager to please, and lacking in proper doggie dignity and reserve.) The key to dogginess, I saw, is that, though dogs are pure creatures of sensation, they are also capable of shrewd short-term plans. Dogs don’t live, like mystics, in the moment; dogs live in the minute. They live in and for the immediate short-term exchange: tricks for food, kisses for a walk. When Butterscotch saw me come home with bags from the grocery store, she would leap with joy as her memory told her that something good was about to happen, just as she had learned that a cloud-nexus of making phone calls and getting the leash and taking elevators produced a chance to play with Lily and Cuba, the two Havanese who live upstairs. But she couldn’t grasp exactly how these chains of events work: some days when she heard the name “Lily” she rushed to the door, sometimes to her leash, sometimes to the elevator, and sometimes to the door on our floor that corresponds to the door on the eighth floor where Lily lives.

But she had another side, too. At the end of a long walk, or a prance around the block, she would come in with her usual happy hop, and then, let off her leash, she would growl and hiss and make Ewok-like noises that we never otherwise heard from her; it was a little scary at first, like the moment in “Gremlins” when the cute thing becomes a wild, toothy one. Then she would race madly from one end of the hall to the other, bang her head, and turn around and race back, still spitting and snorting and mumbling guttural consonants to herself, like a mad German monarch. Sometimes she would climax this rampage by pulling up hard and showing her canines and directing two sharp angry barks at Olivia, her owner, daring her to do something about it. Then, just as abruptly, Butterscotch would stop, sink to the floor, and once again become a sweet, smiling companion, trotting loyally behind whoever got up first. The wolf was out; and then was tucked away in a heart-drawer of prudence. This behavior, Olivia assured us, is a Havanese breed trait, called “run-like-hell,” though “Call of the Wild” might be a better name. (Olivia spent hours on the Havanese forum, a worldwide chat board composed mostly of older women who call themselves the small dogs’ “mommies,” and share a tone of slightly addled coziness, which Olivia expertly imitated. Being a dog owner pleased her almost more than owning a dog.)

But what could account for that odd double nature, that compelling sweetness and implicit wildness? I began to read as widely as I could about this strange, dear thing that I had so long been frightened of.

by Adam Gopnik, New Yorker |  Read more:
Image: Jules Feiffer

Miss Heather, chicapoquita
via:

Amy Stock
via:

Saturday, October 19, 2013

Dire Straits


Lyrics

well this is my back yard - my back gate
I hate to start my parties late
here's the party cart - ain't that great?
that ain't the best part honey - just wait
that's a genuine weathervane - it moves with the breeze
portable hammock baby - who needs trees
it's casual entertaining - we aim to please
at my parties

check out the shingles - it's brand new
excuse me while I mingle - hi, how are you
hey everybody - let me give you a toast
this one's for me - the host with the most

it's getting a trifle colder - step inside my home
that's a brass toilet tissue holder with its own telephone
that's a musical doorbell - it don't ring, I ain't kiddin'
it plays america the beautiful and tie a yellow ribbon

boy, this punch is a trip - it's o.k. in my book
here, take a sip - maybe a little heavy on the fruit
ah, here comes the dip - you may kiss the cook
let me show you honey - it's easy - look
you take a fork and spike 'em - say, did you try these?
so glad you like 'em - the secret's in the cheese
it's casual entertaining - we aim to please
at my parties

now don't talk to me about the polar bear
don't talk to me about the ozone layer
ain't much of anything these days, even the air
they're running out of rhinos - what do I care?
let's hear it for the dolphin - let's hear it for the trees
ain't running out of nothing in my deep freeze
it's casual entertaining - we aim to please
at my parties

Richard Diebenkorn

Delectable: The Only Wine App Worth a Damn

Ever since the iPhone was first released, I've been trying apps that are aimed at wine lovers. For the relatively niche market that wine represents, there have been a surprising number of apps trying to address it. There are maps, buying guides, ratings databases, food and wine pairing, cellar management, e-commerce, wine tasting tools, regional guidebooks, and social networks.

It sounds ridiculous for me to claim that I've tried every single wine app that is out there on the market, and it's probably not true, but let me tell you, I've tried most of them. And they're all crap.

Ninety-eight percent of them either don't offer to do anything truly useful for wine lovers, or, if they do offer to do something useful, they actually don't deliver on their promise. For instance, what good is a winery tasting room guide and map of Napa that doesn't have the first five wineries that I search for in its database? How helpful is an app that promises to identify bottles that you take a photo of, but can't seem to get the identification right two out of five times?

The other two percent of wine apps on the market might -- and it is impossible to put too much emphasis on that word 'might' -- offer some value, but any hope of doing so is immediately destroyed by completely awful design and usability. Yes, your app might actually have some interesting advice to give about pairing wine and food, but when it takes me five minutes to drill down through a horribly designed menu-tree of foodstuffs to find the thing that I'm looking to cook, no matter how good your content is, I hate your app before I get to it.

In short, my professional opinion (and it is worth reminding you that by day I run an interactive design agency that, among other things, designs iPhone applications) is that there is one, and only one wine app that I've ever seen that is worth using, and it is called Delectable. Credit given where credit is due, I hadn't heard about it until Jon Bonné of the San Francisco Chronicle mentioned it to me last year, and he introduced it with much the same sentiment I am sharing right now.

But let me explain why, and how, Delectable has managed to pass the ultimate test that any wine app must face: "Will I actually use this thing regularly?"

by Alder, Vinography |  Read more:
Image: uncredited