Monday, October 21, 2013
Why Have Young People in Japan Stopped Having Sex?
Ai Aoyama is a sex and relationship counsellor who works out of her narrow three-storey home on a Tokyo back street. Her first name means "love" in Japanese, and is a keepsake from her earlier days as a professional dominatrix. Back then, about 15 years ago, she was Queen Ai, or Queen Love, and she did "all the usual things" like tying people up and dripping hot wax on their nipples. Her work today, she says, is far more challenging. Aoyama, 52, is trying to cure what Japan's media calls sekkusu shinai shokogun, or "celibacy syndrome".
Japan's under-40s appear to be losing interest in conventional relationships. Millions aren't even dating, and increasing numbers can't be bothered with sex. For their government, "celibacy syndrome" is part of a looming national catastrophe. Japan already has one of the world's lowest birth rates. Its population of 126 million, which has been shrinking for the past decade, is projected to plunge a further one-third by 2060. Aoyama believes the country is experiencing "a flight from human intimacy" – and it's partly the government's fault. (...)
The number of single people has reached a record high. A survey in 2011 found that 61% of unmarried men and 49% of women aged 18-34 were not in any kind of romantic relationship, a rise of almost 10% from five years earlier. Another study found that a third of people under 30 had never dated at all. (There are no figures for same-sex relationships.) Although there has long been a pragmatic separation of love and sex in Japan – a country mostly free of religious morals – sex fares no better. A survey earlier this year by the Japan Family Planning Association (JFPA) found that 45% of women aged 16-24 "were not interested in or despised sexual contact". More than a quarter of men felt the same way.
Many people who seek her out, says Aoyama, are deeply confused. "Some want a partner, some prefer being single, but few relate to normal love and marriage." However, the pressure to conform to Japan's anachronistic family model of salaryman husband and stay-at-home wife remains. "People don't know where to turn. They're coming to me because they think that, by wanting something different, there's something wrong with them." (...)
Marriage has become a minefield of unattractive choices. Japanese men have become less career-driven, and less solvent, as lifetime job security has waned. Japanese women have become more independent and ambitious. Yet conservative attitudes in the home and workplace persist. Japan's punishing corporate world makes it almost impossible for women to combine a career and family, while children are unaffordable unless both parents work. Cohabiting or unmarried parenthood is still unusual, dogged by bureaucratic disapproval.
Aoyama says the sexes, especially in Japan's giant cities, are "spiralling away from each other". Lacking long-term shared goals, many are turning to what she terms "Pot Noodle love" – easy or instant gratification, in the form of casual sex, short-term trysts and the usual technological suspects: online porn, virtual-reality "girlfriends", anime cartoons. Or else they're opting out altogether and replacing love and sex with other urban pastimes. (...)
Aversion to marriage and intimacy in modern life is not unique to Japan. Nor is growing preoccupation with digital technology. But what endless Japanese committees have failed to grasp when they stew over the country's procreation-shy youth is that, thanks to official shortsightedness, the decision to stay single often makes perfect sense. This is true for both sexes, but it's especially true for women. "Marriage is a woman's grave," goes an old Japanese saying that refers to wives being ignored in favour of mistresses. For Japanese women today, marriage is the grave of their hard-won careers.
Image: Eric Rechsteiner
by Abigail Haworth, Guardian | Read more:
Free Thinkers
José Urbina López Primary School sits next to a dump just across the US border in Mexico. The school serves residents of Matamoros, a dusty, sunbaked city of 489,000 that is a flash point in the war on drugs. There are regular shoot-outs, and it’s not uncommon for locals to find bodies scattered in the street in the morning. To get to the school, students walk along a white dirt road that parallels a fetid canal. On a recent morning there was a 1940s-era tractor, a decaying boat in a ditch, and a herd of goats nibbling gray strands of grass. A cinder-block barrier separates the school from a wasteland—the far end of which is a mound of trash that grew so big, it was finally closed down. On most days, a rotten smell drifts through the cement-walled classrooms. Some people here call the school un lugar de castigo—“a place of punishment.”
For 12-year-old Paloma Noyola Bueno, it was a bright spot. More than 25 years ago, her family moved to the border from central Mexico in search of a better life. Instead, they got stuck living beside the dump. Her father spent all day scavenging for scrap, digging for pieces of aluminum, glass, and plastic in the muck. Recently, he had developed nosebleeds, but he didn’t want Paloma to worry. She was his little angel—the youngest of eight children.
After school, Paloma would come home and sit with her father in the main room of their cement-and-wood home. Her father was a weather-beaten, gaunt man who always wore a cowboy hat. Paloma would recite the day’s lessons for him in her crisp uniform—gray polo, blue-and-white skirt—and try to cheer him up. She had long black hair, a high forehead, and a thoughtful, measured way of talking. School had never been challenging for her. She sat in rows with the other students while teachers told the kids what they needed to know. It wasn’t hard to repeat it back, and she got good grades without thinking too much. As she headed into fifth grade, she assumed she was in for more of the same—lectures, memorization, and busy work.
Sergio Juárez Correa was used to teaching that kind of class. For five years, he had stood in front of students and worked his way through the government-mandated curriculum. It was mind-numbingly boring for him and the students, and he’d come to the conclusion that it was a waste of time. Test scores were poor, and even the students who did well weren’t truly engaged. Something had to change.
He too had grown up beside a garbage dump in Matamoros, and he had become a teacher to help kids learn enough to make something more of their lives. So in 2011—when Paloma entered his class—Juárez Correa decided to start experimenting. He began reading books and searching for ideas online. Soon he stumbled on a video describing the work of Sugata Mitra, a professor of educational technology at Newcastle University in the UK. In the late 1990s and throughout the 2000s, Mitra conducted experiments in which he gave children in India access to computers. Without any instruction, they were able to teach themselves a surprising variety of things, from DNA replication to English.
Juárez Correa didn’t know it yet, but he had happened on an emerging educational philosophy, one that applies the logic of the digital age to the classroom. That logic is inexorable: Access to a world of infinite information has changed how we communicate, process information, and think. Decentralized systems have proven to be more productive and agile than rigid, top-down ones. Innovation, creativity, and independent thinking are increasingly crucial to the global economy.
And yet the dominant model of public education is still fundamentally rooted in the industrial revolution that spawned it, when workplaces valued punctuality, regularity, attention, and silence above all else. (In 1899, William T. Harris, the US commissioner of education, celebrated the fact that US schools had developed the “appearance of a machine,” one that teaches the student “to behave in an orderly manner, to stay in his own place, and not get in the way of others.”) We don’t openly profess those values nowadays, but our educational system—which routinely tests kids on their ability to recall information and demonstrate mastery of a narrow set of skills—doubles down on the view that students are material to be processed, programmed, and quality-tested. School administrators prepare curriculum standards and “pacing guides” that tell teachers what to teach each day. Legions of managers supervise everything that happens in the classroom; in 2010 only 50 percent of public school staff members in the US were teachers.
The results speak for themselves: Hundreds of thousands of kids drop out of public high school every year. Of those who do graduate from high school, almost a third are “not prepared academically for first-year college courses,” according to a 2013 report from the testing service ACT. The World Economic Forum ranks the US just 49th out of 148 developed and developing nations in quality of math and science instruction. “The fundamental basis of the system is fatally flawed,” says Linda Darling-Hammond, a professor of education at Stanford and founding director of the National Commission on Teaching and America’s Future. “In 1970 the top three skills required by the Fortune 500 were the three Rs: reading, writing, and arithmetic. In 1999 the top three skills in demand were teamwork, problem-solving, and interpersonal skills. We need schools that are developing these skills.”
That’s why a new breed of educators, inspired by everything from the Internet to evolutionary psychology, neuroscience, and AI, are inventing radical new ways for children to learn, grow, and thrive. To them, knowledge isn’t a commodity that’s delivered from teacher to student but something that emerges from the students’ own curiosity-fueled exploration. Teachers provide prompts, not answers, and then they step aside so students can teach themselves and one another. They are creating ways for children to discover their passion—and uncovering a generation of geniuses in the process.
by Joshua Davis, Wired | Read more:
Image: Peter Yang
Without Copyrights: Piracy, Publishing, and the Public Domain. What Exactly is "Piracy" in the Digital Age?
“PIRACY,” the newly created National Intellectual Property Rights Protection Coordination Center (IPR Center) informs DVD viewers, “is not a victimless crime.” Setting aside the fact that the IPR Center and its partners in the FBI and Department of Homeland Security target this message at precisely the wrong audience — those who’ve chosen to purchase or rent a DVD — the campaign begs a couple of questions. Is this “piracy” actually a “crime”? And more importantly, what exactly is “piracy”?
While content-industry trade groups like the Recording Industry Association of America (RIAA), Motion Picture Association of America (MPAA), and Association of American Publishers (AAP) would doubtless like to take credit for popularizing the term to mean “using creative products without the permission of the creator or rights holder,” “piracy” has meant that for centuries, as Robert Spoo points out in his new book Without Copyrights: Piracy, Publishing, and the Public Domain (Oxford). But it’s never been so simple, particularly in the United States, long a holdout from international copyright norms. “Piracy” is always a term of rhetoric, suggesting a legal force that it frequently does not have; the word was and is a tool to sway the public and lawmakers. And even as their copyright protections were dramatically expanded in the late 20th century, rights holders sought to broaden the definition of “piracy” and concomitantly shrink the public domain, that ocean of content free for all of us to use.
In Without Copyrights, Spoo provides a deeply researched case study of the complicated American copyright situation surrounding the great literary landmark of the 20th century, James Joyce’s 1922 novel Ulysses. He shows that lax and fuzzy copyright laws in the US created a large and fertile public domain that infuriated writers, benefited readers, and provided publishers an opportunity for informal self-governance. But most importantly for the current American debate about intellectual property, Spoo makes clear that “piracy” has never been a clear-cut concept. Rights holders like to define “piracy” as any act of which they disapprove, even when — as with unauthorized publication of Ulysses in the US, or sampling of funk records in 1980s rap recordings, or uploading clips from TV awards shows to YouTube — those acts are expressly or plausibly legal. In part by using loaded terms like “piracy” to influence legislators and law enforcement agencies, rights holders try, and recently have succeeded, in then expanding the legal meaning of those terms and contracting the cultural commons.
The context Spoo ably recreates, though, is the legal environment governing American publishing from the early 19th century through the post-World War II period. In the 19th century, the so-called “reprint industry,” which mined previously published books, largely British, dominated American publishing. And while reprinters bore most of the fixed costs facing any publishing concern (labor, materials, advertising, distribution) they had one great competitive advantage: they didn’t have to pay their authors. Until 1891, US law extended copyright protection only to works by American citizens, so these reprinters made a business model out of selling British books, generally without ever contacting (much less entering into an agreement with) their authors. It’s hard to think of a more obvious example of “piracy” than this, and authors from Dickens to Wilde fumed about their vast lost revenue. A familiar anecdote describes Dickens fans, desperate to find out whether Little Nell was dead, storming the New York wharves as ships laden with the latest issue of Master Humphrey’s Clock docked. Some of those impatient fans, though, were probably publishers’ agents, frantic to grab their copies, get back to their presses, and be the first ones to market with a “pirated,” but entirely legal, American edition of the novel.
Frustrating as it was to aggrieved British authors, the law had some justification. The US was a large but largely under-booked nation in the early 1800s. In keeping with the spirit of the US Constitution’s Copyright Clause, which emphasizes that the real goal of copyright is not first and foremost the protection of an author’s rights but the promotion “of Science and useful Arts,” the law subsidized the production and dissemination of books. A lot of books. A lot of cheap books that would, Congress hoped, spread across (and educate) our widely dispersed and unschooled nation. And while the 1790 Copyright Act assured American citizens of copyright protection, ironically it did little to cultivate a native literary culture: why sign up an American author and pay royalties when one could print a guaranteed seller like Tennyson or George Eliot instead, and pocket the difference? As a result, British literature dominated American reading through the 19th century (with notable exceptions such as Uncle Tom’s Cabin, which was, in a neat turnabout, widely “pirated” in Britain).
If anyone could publish any British author, how, then, did the American publishing industry not consume itself through self-destructive cost-cutting? A professor at the University of Tulsa College of Law, Spoo is sensitive to the important distinctions between common law, legislated law, and informal community norms that carry the force of law, and thus identifies “trade courtesy” as the mechanism that saved publishing houses from bankrupting themselves through competitive discounting. These “pirate” publishers behaved more like a genteel cartel than like bootlegging gangsters, Spoo makes clear. A publisher would make it known among the community of reprinters that he intended to publish a given author or a book. Other publishers, parties to this informal gentlemen’s agreement, respected that publisher’s claim to that title, and renegades were punished through public shaming (manifested in advertisements that questioned the quality or authenticity of their texts) or, in the cases of particularly obstinate transgressors, commercial retaliation. Like Wal-Mart meeting Main Street, colluding reprinters would print their own editions of a violator’s books, pricing them ruinously low or even at a loss in pursuit of the greater good of the stability of the industry. At this time, in fact, while British authors referred to the entire American industry as “pirates,” publishers used the word internally to describe those members of their community who deviated from norms of trade courtesy.
by Greg Barnhisel, LA Review of Books | Read more:
Image: Oxford University Press
Sunday, October 20, 2013
Dog Story
A year ago, my wife and I bought a dog for our ten-year-old daughter, Olivia. We had tried to fob her off with fish, which died, and with a singing blue parakeet, which she named Skyler, but a Havanese puppy was what she wanted, and all she wanted. With the diligence of a renegade candidate pushing for a political post, she set about organizing a campaign: quietly mustering pro-dog friends as a pressure group; introducing persuasive literature (John Grogan’s “Marley & Me”); demonstrating reliability with bird care.
I was so ignorant about dogs that I thought what she wanted must be a Javanese, a little Indonesian dog, not a Havanese, named for the city in Cuba. When we discovered, with a pang, the long Google histories that she left on my wife’s computer—havanese puppies/havanese care/how to find a havanese/havanese, convincing your parints—I assumed she was misspelling the name. But in fact it was a Havanese she wanted, a small, sturdy breed that, in the past decade, has become a mainstay of New York apartment life. (It was recognized as a breed by the American Kennel Club only in the mid-nineties.) Shrewd enough to know that she would never get us out of the city to an approved breeder, she quietly decided that she could live with a Manhattan pet-store “puppy mill” dog if she could check its eyes for signs of illness and its temperament for symptoms of sweetness. Finally, she backed us into a nice pet store on Lexington Avenue and showed us a tiny bundle of caramel-colored fur with a comical black mask. “That’s my dog,” she said simply.
My wife and I looked at each other with a wild surmise: the moment parents become parints, creatures beyond convincing who exist to be convinced. When it came to dogs, we shared a distaste that touched the fringe of disgust and flirted with the edge of phobia. I was bitten by a nasty German-shepherd guard dog when I was about eight—not a terrible bite but traumatic all the same—and it led me ever after to cross streets and jump nervously at the sight of any of its kind. My wife’s objections were narrowly aesthetic: the smells, the slobber, the shit. We both disliked dog owners in their dog-owning character: the empty laughter as the dog jumped up on you; the relentless apologies for the dog’s bad behavior, along with the smiling assurance that it was all actually rather cute. Though I could read, and even blurb, friends’ books on dogs, I felt about them as if the same friends had written books on polar exploration: I could grasp it as a subject worthy of extended poetic description, but it was not a thing I had any plans to pursue myself. “Dogs are failed humans,” a witty friend said, and I agreed.
We were, however, doomed, and knew it. The constitution of parents and children may, like the British one, be unwritten, but, as the Brits point out, that doesn’t make it less enforceable or authoritative. The unwritten compact that governs family life says somewhere that children who have waited long enough for a dog and want one badly enough have a right to have one. I felt as the Queen must at meeting an unpleasant Socialist Prime Minister: it isn’t what you wanted, but it’s your constitutional duty to welcome, and pretend.
The pet-store people packed up the dog, a female, in a little crate and Olivia excitedly considered names. Willow? Daisy? Or maybe Honey? “Why not call her Butterscotch?” I suggested, prompted by a dim memory of one of those Dan Jenkins football novels from the seventies, where the running-back hero always uses that word when referring to the hair color of his leggy Texas girlfriends. Olivia nodded violently. Yes! That was her name. Butterscotch.
We took her home and put her in the back storage room to sleep. Tiny thing, we thought. Enormous eyes. My wife and I were terrified that it would be a repeat of the first year with a baby, up all night. But she was good. She slept right through the first night, and all subsequent nights, waiting in the morning for you past the point that a dog could decently be expected to wait, greeting you with a worried look, then racing across the apartment to her “papers”—the pads that you put out for a dog to pee and shit on. Her front legs were shorter than her rear ones, putting a distinctive hop in her stride. (“Breed trait,” Olivia said, knowingly.)
All the creature wanted was to please. Unlike a child, who pleases in spite of herself, Butterscotch wanted to know what she could do to make you happy, if only you kept her fed and let her play. She had none of the imperiousness of a human infant. A child starts walking away as soon as she starts to walk—on the way out, from the very first day. What makes kids so lovable is the tension between their helplessness and their drive to deny it. Butterscotch, though, was a born courtesan. She learned the tricks Olivia taught her with startling ease: sitting and rolling over and lying down and standing and shaking hands (or paws) and jumping over stacks of unsold books. The terms of the tricks were apparent: she did them for treats. But, if it was a basic bargain, she employed it with an avidity that made it the most touching thing I have seen. When a plate of steak appeared at the end of dinner, she would race through her repertory of stunts and then offer a paw to shake. Just tell me what you want, and I’ll do it!
She was a bit like one of Al Capp’s Shmoos, in “Li’l Abner,” designed to please people at any cost. (People who don’t like Havanese find them too eager to please, and lacking in proper doggie dignity and reserve.) The key to dogginess, I saw, is that, though dogs are pure creatures of sensation, they are also capable of shrewd short-term plans. Dogs don’t live, like mystics, in the moment; dogs live in the minute. They live in and for the immediate short-term exchange: tricks for food, kisses for a walk. When Butterscotch saw me come home with bags from the grocery store, she would leap with joy as her memory told her that something good was about to happen, just as she had learned that a cloud-nexus of making phone calls and getting the leash and taking elevators produced a chance to play with Lily and Cuba, the two Havanese who live upstairs. But she couldn’t grasp exactly how these chains of events work: some days when she heard the name “Lily” she rushed to the door, sometimes to her leash, sometimes to the elevator, and sometimes to the door on our floor that corresponds to the door on the eighth floor where Lily lives.
But she had another side, too. At the end of a long walk, or a prance around the block, she would come in with her usual happy hop, and then, let off her leash, she would growl and hiss and make Ewok-like noises that we never otherwise heard from her; it was a little scary at first, like the moment in “Gremlins” when the cute thing becomes a wild, toothy one. Then she would race madly from one end of the hall to the other, bang her head, and turn around and race back, still spitting and snorting and mumbling guttural consonants to herself, like a mad German monarch. Sometimes she would climax this rampage by pulling up hard and showing her canines and directing two sharp angry barks at Olivia, her owner, daring her to do something about it. Then, just as abruptly, Butterscotch would stop, sink to the floor, and once again become a sweet, smiling companion, trotting loyally behind whoever got up first. The wolf was out; and then was tucked away in a heart-drawer of prudence. This behavior, Olivia assured us, is a Havanese breed trait, called “run-like-hell,” though “Call of the Wild” might be a better name. (Olivia spent hours on the Havanese forum, a worldwide chat board composed mostly of older women who call themselves the small dogs’ “mommies,” and share a tone of slightly addled coziness, which Olivia expertly imitated. Being a dog owner pleased her almost more than owning a dog.)
But what could account for that odd double nature, that compelling sweetness and implicit wildness? I began to read as widely as I could about this strange, dear thing that I had so long been frightened of.
by Adam Gopnik, New Yorker | Read more:
Image: Jules Feiffer
Saturday, October 19, 2013
Dire Straits
Lyrics
well this is my back yard - my back gate
I hate to start my parties late
here's the party cart - ain't that great?
that ain't the best part honey - just wait
that's a genuine weathervane - it moves with the breeze
portable hammock baby - who needs trees
it's casual entertaining - we aim to please
at my parties
check out the shingles - it's brand new
excuse me while I mingle - hi, how are you
hey everybody - let me give you a toast
this one's for me - the host with the most
it's getting a trifle colder - step inside my home
that's a brass toilet tissue holder with its own telephone
that's a musical doorbell - it don't ring, I ain't kiddin'
it plays america the beautiful and tie a yellow ribbon
boy, this punch is a trip - it's o.k. in my book
here, take a sip - maybe a little heavy on the fruit
ah, here comes the dip - you may kiss the cook
let me show you honey - it's easy - look
you take a fork and spike 'em - say, did you try these?
so glad you like 'em - the secret's in the cheese
it's casual entertaining - we aim to please
at my parties
now don't talk to me about the polar bear
don't talk to me about the ozone layer
ain't much of anything these days, even the air
they're running out of rhinos - what do I care?
let's hear it for the dolphin - let's hear it for the trees
ain't running out of nothing in my deep freeze
it's casual entertaining - we aim to please
at my parties
Delectable: The Only Wine App Worth a Damn
Ever since the iPhone was first released, I've been trying apps that are aimed at wine lovers. For the relatively niche market that wine represents, there have been a surprising number of apps trying to address it. There are maps, buying guides, ratings databases, food and wine pairing, cellar management, e-commerce, wine tasting tools, regional guidebooks, and social networks.
It sounds ridiculous for me to claim that I've tried every single wine app that is out there on the market, and it's probably not true, but let me tell you, I've tried most of them. And they're all crap.
Ninety-eight percent of them either don't offer to do anything truly useful for wine lovers, or, if they do offer to do something useful, they actually don't deliver on their promise. For instance what good is a winery tasting room guide and map of Napa that doesn't have the first five wineries that I search for in their database? How helpful is an app that promises to identify bottles that you take a photo of, but can't seem to get the identification right two out of five times?
The other two percent of wine apps on the market might -- and it is impossible to put too much emphasis on that word 'might' -- offer some value, but any hope of doing so is immediately destroyed by completely awful design and usability. Yes, your app might actually have some interesting advice to give about pairing wine and food, but when it takes me five minutes to drill down through a horribly designed menu-tree of foodstuffs to find the thing that I'm looking to cook, no matter how good your content is, I hate your app before I get to it.
In short, my professional opinion (and it is worth reminding you that by day I run an interactive design agency that, among other things, designs iPhone applications) is that there is one, and only one wine app that I've ever seen that is worth using, and it is called Delectable. Credit given where credit is due, I hadn't heard about it until Jon Bonné of the San Francisco Chronicle mentioned it to me last year, and he introduced it with much the same sentiment I am sharing right now.
But let me explain why, and how, Delectable has managed to pass the ultimate test that any wine app must face: "Will I actually use this thing regularly?"
by Alder, Vinography | Read more:
Image: uncredited
Everybody Knows You’re a Dog
[ed. Evercookie = malware]
Remember 1993? The World Wide Web had already been invented and nobody knew about it. The NCSA Mosaic browser had just appeared in a limited alpha release, but the text-based Gopher service was the closest thing most people had to an interactive user interface to dive into information on the Internet. The commercial use of the Net was still extremely limited in America, as the National Science Foundation’s NSFNet backbone ostensibly prohibited anything but academic use. In other countries, the Internet existed barely, if at all.
Into that void, New Yorker cartoonist Peter Steiner dropped what is now the most popular and well-known panel ever produced for the magazine: "On the Internet, nobody knows you're a dog."
In 1993, the Internet was still a great mystery to most people, if they’d even heard of it. America Online (AOL) was the way most people got “online,” and email was the primary use of the Internet — and for people who weren’t connected to an academic institution or using one of the early Internet service providers (ISPs), this happened via gateways from AOL and other walled gardens.
Steiner’s single panel manages to convey the Internet’s newness and the personal purpose to which most people then put it, as well as its fundamental anonymity. The tools to track someone down or even demand a “real” name were nonexistent. One could be a dog (cats came later), and no one would be able to tell the difference. It was a time that now seems remarkably innocent. (...)
When the “dog” cartoon first appeared, pseudonymity on the Internet was a given, and anonymity wasn’t difficult. With the onset of commercial uses of the network and the appearance of widely available graphical Web browsers later in 1993, general anonymity became even easier because one could pay an ISP for access, but the ISP didn’t give two figs what you did online — nor, in those days, could it possibly have had the routers or storage to track behavior if it had wanted to.
One could post comments all over with little or no connection to one’s identity or location. The Web is inherently stateless — there’s no idea of a continuous session built in — which made it hard in its early days to allow for the association of a browser with an identity. The later addition of Web cookies allowed servers to push a unique code and other information to browsers, which browsers would send back with each request. This strings connections like pearls on a necklace and allows continuity, which allows an account and a login.
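[ed. To make the mechanism concrete, here is a minimal sketch, in Python's standard library, of what the paragraph above describes: the server mints an opaque ID, pushes it to the browser with a Set-Cookie header, and the browser hands it back on every request, turning stateless HTTP into a trackable session. The handler name, the "sid" cookie, and the in-memory session store are illustrative assumptions, not anything from the article.]

import uuid
from http.cookies import SimpleCookie
from http.server import BaseHTTPRequestHandler, HTTPServer

SESSIONS = {}  # session id -> per-visitor state (a hypothetical in-memory store)

class CookieHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Read back whatever ID the browser is returning, if any.
        cookies = SimpleCookie(self.headers.get("Cookie", ""))
        sid = cookies["sid"].value if "sid" in cookies else None

        self.send_response(200)
        if sid is None or sid not in SESSIONS:
            # First visit: mint an opaque ID and push it to the browser.
            sid = uuid.uuid4().hex
            SESSIONS[sid] = {"visits": 0}
            self.send_header("Set-Cookie", f"sid={sid}; Path=/; HttpOnly")
        SESSIONS[sid]["visits"] += 1

        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        # Each request is independent at the HTTP level, but the cookie
        # strings them together into one continuous visitor.
        self.wfile.write(f"visit #{SESSIONS[sid]['visits']}\n".encode())

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), CookieHandler).serve_forever()

[ed. Clearing that one cookie restores the old anonymity; the rest of the piece is about the additional breadcrumbs, such as Flash cookies and the evercookie, designed to survive exactly that.]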
Such tracking doesn’t necessarily destroy anonymity, but it does reduce it. When a Web site or outside authority has only an IP address and other browser information by which to identify a user uniquely, the task becomes harder, as many networks share a single numeric address on the public-facing Internet and use private addresses internally. And the public addresses can change over time.
Marketers, of course, want to erase anonymity and even pseudonymity, because the less knowable an individual is, the less value that person has to advertisers. The more accurately they track you, the more lucrative it is to sell ads that cater to you or shop your data to other parties that combine online details with real-world purchasing behavior and credit records.
Over the years, Adobe Flash-based cookies and other breadcrumbs that allow tracking over many sites (or that bypass users’ ability to defeat such tracking) have been baked into plug-ins and browsers, often as unintentional byproducts. The “evercookie” proof-of-concept revealed that it was nearly impossible to kill all the stateful tracking elements for a site that wanted to keep you in its sights.
by Glenn Fleishman, Boing Boing | Read more:
Image: Wikipedia
Je Regrette: Why Regret is Essential to the Good Life
There’s a particular disdain for regret in US culture. It’s regarded as self-indulgent and irrational — a ‘useless’ feeling. We prefer utilitarian emotions: those we can use as vehicles for transformation and closure. ‘Dwelling’, we tend to agree, gets you nowhere. It just leads you around in circles.
Regret is so counter to the pioneer spirit — with its belief in blinkered perseverance, and dogged forward motion — it’s practically un-American. In the US, you keep your squint firmly planted on the horizon and put one foot in front of the other. There’s something suspiciously female, possibly French, about any morbid interiority.
Best, then, to treat the past like an overflowing closet: just shut the door and walk away. ‘What’s done is done,’ we say. ‘It is what it is.’ ‘There’s no use crying over spilt milk.’ (...)
In a culture that believes winning is everything, that sees success as a totalising, absolute system, happiness and even basic worth are determined by winning. It’s not surprising, then, that people feel they need to deny regret — deny failure — in order to stay in the game. Though we each have a personal framework for looking at regret, Landman argues, the culture privileges a pragmatic, rationalist attitude toward regret that doesn’t allow for emotion or counterfactual ideation, and then combines with it a heroic framework which equates anything that lands short of the platonic ideal with failure. In such an environment, the denial of failure takes on magical powers. It becomes inoculation against failure itself. To express regret is nothing short of dangerous. It threatens to collapse the whole system.
In starting to lay out the possible uses of regret, Landman quotes William Faulkner. ‘The past,’ he wrote in 1950, ‘is never dead. It’s not even past.’ Great novels, Landman points out, are often about regret: about the life-changing consequences of a single bad decision (say, marrying the wrong person, not marrying the right one, or having let love pass you by altogether) over a long period of time. Sigmund Freud believed that thoughts, feelings, wishes, etc, are never entirely eradicated, but if repressed ‘[ramify] like a fungus in the dark and [take] on extreme forms of expression’. The denial of regret, in other words, will not block the fall of the dominoes. It will just allow you to close your eyes and clap your hands over your ears as they fall, down to the very last one.
Not surprisingly, it turns out that people’s greatest regrets revolve around education, work, and marriage, because the decisions we make around these issues have long-term, ever-expanding repercussions. The point of regret is not to try to change the past, but to shed light on the present. This is traditionally the realm of the humanities. What novels tell us is that regret is instructive. And the first thing regret tells us (much like its physical counterpart — pain) is that something in the present is wrong.
by Carina Chocano, Aeon | Read more:
Image: Anna Pogossova/Gallery Stock
Friday, October 18, 2013
Vapor Trail
The fall day is cloudless, calm and temperate. I’ve just finished a great meal on the patio of a popular café not far from the Plaza. An espresso is en route, and now, there’s only one thing left to do to complete the perfection of my dining experience: scratch a decades-old nicotine itch.
I take a familiar puff, a deep inhale, and there’s a warm collision at the back of my throat. I exhale: a dense but quickly dissipating cloud. It gets better as I repeat the ritual and pair it with the just-delivered caffeine concentrate.
But suddenly I’m aware that this whole thing might not be as satisfying for my waiter and the restaurant’s other patrons as it is for me. No one says a word. The looks I’m catching, though, as I billow out another nebula, range from disgust to curiosity.
That’s not smoke coming out of my mouth; it’s water vapor. Nothing’s on fire. There’s been no combustion. Still, confusion reigns. So I find a quiet patch of shaded grass beyond the confines of the patio to enjoy what’s left of the afternoon — and my electronic cigarette.
The diners and staff at the restaurant are far from alone in their befuddlement, curiosity and inherent distrust of “e-cigarettes.” They have been around for a decade but have only recently become ubiquitous in urban society. Studies and surveys show that millions of people are using e-cigarettes, and the number is steadily climbing. Financial analysts predict that, by year’s end, the e-cigarette industry will have doubled in size since 2012 and be worth more than a billion dollars. Some even say the rise of the e-cig has contributed to a slight decline in cigarette sales.
The market’s explosive growth, its lack of regulation, an increase in use among children, pressing medical questions about health effects and the products’ association with one of America’s true social pariahs have placed e-cigarettes at the center of a vigorous national public health debate. That debate has found footholds at the state and local levels, too.
Essentially, e-cigarettes are battery-powered devices that heat a solution of vegetable glycerin, propylene glycol and artificial flavoring, converting the mixture into vapor the user inhales. The act has added a new verb to the parlance: vaping. The overwhelming majority of vapers, me included, buy e-cig juice that’s infused with the highly addictive drug nicotine at a level chosen by the purchaser.
by Jeff Proctor, SF Reporter | Read more:
Image: Sean Gallup/Getty Images