Saturday, January 19, 2013
Google and the Future of Search

[ed. I know, another article about search. I'm just beginning to get the implications of this technological battleground.]
Thinking about Google over the last week, I have fallen into the typically procrastinatory habit of every so often typing the words "what is" or "what" or "wha" into the Google search box at the top right of my computer screen. Those prompts are all the omnipotent engine needs to inform me of the current instant top 10 of the virtual world's most urgent desires. At the time of typing, this list reads, in descending order:
What is the fiscal cliff
What is my ip
What is obamacare
What is love
What is gluten
What is instagram
What does yolo mean
What is the illuminati
What is a good credit score
What is lupus
It is a list that indicates anxieties, not least the ways in which we are restlessly fixated with our money, our bodies and our technology – and paranoid and confused in just about equal measure. A Prince Charles-like desire for the definition of love, in my repetitive experience of the last few days, always seems to come in at No 4 on this list of priorities, though the preoccupations above it and below it tend to shift slightly with the news.
The list also supports another truism: that we – the billion components of the collective questioning mind – have got used to asking Google pretty much anything and expecting it to point us to some kind of satisfactory answer. It's long since become the place most of us go for knowledge, possibly even, desperately, for wisdom. And it is already almost inconceivable to imagine how we might have gone about finding the answer to some of these questions only 15 years ago without it – a visit to the library? To a doctor? To Citizens Advice? To a shrink?
That was the time, in the prehistory of about 1995, when our ideas of "search" still carried the sense of the word's Latin roots – a search was a kind of "arduous quest" that invariably involved "wandering" and "seeking" and "traversing". Not any longer. For those who are growing up to search in this millennium, it implies nothing more taxing than typing two words into a box – or, increasingly, mumbling them into a phone – and waiting less than an instant for a comprehensive answer, generally involving texts and images and films and books and maps. Search's sense of questing purpose has already gone the way of other pre-Google concepts, such as "getting lost".
That rate of change – of how we gather information, how we make connections and think – has been so rapid that it invites a further urgent Google question. Where will search go next? One answer to that question was provided by the billionaire double act of Sergey Brin and Larry Page, Google's founders, in 2004, when pressed about their vision of the future by the former Newsweek journalist Steven Levy.
"Search will be included in people's brains," said Page of their ambition. "When you think about something and don't really know much about it, you will automatically get information."
"That's true," Brin concurred. "Ultimately I view Google as a way to augment your brain with the knowledge of the world. Right now, you go into your computer and type a phrase, but you can imagine that it could be easier in the future, that you can have just devices you talk into or you can have computers that pay attention to what's going on around them…"
Page, generally the wilder thinker, was adamant, though. "Eventually, you'll have the implant, where if you think about a fact, it will just tell you the answer."
Nine years on, Brin's vision at least is already reality. In the past couple of years, a great advance in voice-recognition technology has allowed you to talk to search apps – notably on iPhone's Siri as well as Google's Jelly Bean – while Google Now, awarded 2012 innovation of the year, will tell you what you want to know – traffic conditions, your team's football scores, the weather – before you ask it, based on your location and search history. Page's brain implants remain some way further off, though both Google founders have lately been wearing "Google Glass" prototypes, headbands that project a permanent screen on the edge of your field of vision, with apps – cameras, search, whatever – answerable to voice-activated command. Searching is ever more intimately related to thinking.
Outside Google HQ in Mountain View, California.
In this sense, the man who is, these days, in charge of the vast majority of the world's questing and wandering and seeking and traversing is called Amit Singhal. Aged 44, head of Google Search, he is a boyishly enthusiastic presence, who inhabits a much-mythologised office in Mountain View, California, somewhat in the way that the Wizard of Oz lived at one end of the Yellow Brick Road. Singhal is the man who pulls the levers that might just help you find a heart, or a brain, or the way back to Kansas. A dozen years ago, he took over responsibility from Brin for writing and refining the closely guarded algorithm – more than 200 separate coded equations – that powers Google's endless trawl for answers through pretty much all of history's recorded knowledge. So far, he has never stopped finding ways to make it ever smarter and quicker.
by Tim Adams, The Guardian | Read more:
Photograph: Google/Rex Features
Deception Is Futile
For thousands of years, attempts to detect deceit have relied on the notion that liars’ bodies betray them. But even after a century of scientific research, this fundamental assumption has never been definitively proven. “We know very little about deception from either a psychological or physiological view at the basic level,” says Charles Honts, a former Department of Defense polygrapher and now a Boise State University psychologist specializing in the study of deception. “If you look at the lie-detection literature, there’s nothing that ties it together, because there’s no basic theory there. It’s all over the place.”
Despite their fixation on the problem of deceit, government agencies aren’t interested in funding anything so abstract as basic research. “They want to buy hardware,” Honts says. But without an understanding of the mechanics of lying, it seems that any attempt to build a lie-detecting device is doomed to fail. “It’s like trying to build an atomic bomb without knowing the theory of the atom,” Honts says.
Take the polygraph. It functions today on the same principles as when it was conceived in 1921: providing a continuous recording of vital signs, including blood pressure, heart rate, and perspiration. But the validity of the polygraph approach has been questioned almost since its inception. It records the signs of arousal, and while these may be indications that a subject is lying—dissembling can be stressful—they might also be signs of anger, fear, even sexual excitement. “It’s not deception, per se,” says Judee Burgoon, Nunamaker’s research partner at the University of Arizona. “But that little caveat gets lost in the shuffle.”
The US Army founded a polygraph school in 1951, and the government later introduced the machine as an employee-screening tool. Indeed, according to some experts, the polygraph can detect deception more than 90 percent of the time—albeit under very strictly defined criteria. “If you’ve got a single issue, and the person knows whether or not they’ve shot John Doe,” Honts says, “the polygraph is pretty good.” Experienced polygraph examiners like Phil Houston, legendary within the CIA for his successful interrogations, are careful to point out that the device relies on the skill of the examiner to produce accurate results—the right kind of questions, the experience to know when to press harder and when the mere presence of the device can intimidate a suspect into telling the truth. Without that, a polygraph machine is no more of a lie-detector than a rubber truncheon or a pair of pliers.
As a result, although some state courts allow them, polygraph examinations have rarely been admitted as evidence in federal court; they’ve been dogged by high false-positive rates, and notorious spies, including CIA mole Aldrich Ames, have beaten the tests. In 2003 the National Academy of Sciences reported that the evidence of polygraph accuracy was “scanty and scientifically weak” and that, while the device might be used effectively in criminal investigations, as a screening tool it was practically useless. By then, other devices and techniques that had been touted as reliable lie detectors—voice stress analysis, pupillometry, brain scanning—had also either been dismissed as junk science or not fully tested.
But spooks and cops remain desperate for technology that could boost their rate of success even a couple of points above chance. That’s why, in 2006, project managers from the Army’s polygraph school—by then renamed the Defense Academy for Credibility Assessment—approached Nunamaker and Burgoon. The government wanted them to build a new machine, a device that could sniff out liars without touching them and that wouldn’t need a trained human examiner: a polygraph for the 21st century.
by Wired Staff, Wired | Read more:
Illustration Joyce P. Chan/The University of Arizona
What Is Middle Class in Manhattan?
Even the landscape is carved up by class. From 15,000 feet up, you can stare down at subdivisions and tract houses, and America’s class lines will stare right back up at you.
Manhattan, however, is not like most places. Its 1.6 million residents hide in a forest of tall buildings, and even the city’s elite take the subway. Sure, there are obvious brand-name buildings and tony ZIP codes where the price of entry clearly demands a certain amount of wealth, but middle-class neighborhoods do not really exist in Manhattan — probably the only place in the United States where a $5.5 million condo with a teak closet and mother-of-pearl wall tile shares a block with a public housing project.
In TriBeCa, Karen Azeez feels squeezed. A fund-raising consultant, Ms. Azeez has lived in the city for more than 20 years. Her husband, a retired police sergeant, bought their one-bedroom apartment in the low $200,000 range in 1997.
“When we got here, I didn’t feel so out of place, I didn’t have this awareness of being middle class,” she said. But in the last 5 or 10 years an array of high-rises brought “uberwealthy” neighbors, she said, the kind of people who discuss winter trips to St. Barts at the dog run, and buy $700 Moncler ski jackets for their children.
Even the local restaurants give Ms. Azeez the sense that she is now living as an economic minority in her own neighborhood.
“There’s McDonald’s, Mexican and Nobu,” she said, and nothing in between.
In a city like New York, where everything is superlative, who exactly is middle class? What kind of salary are we talking about? Where does a middle-class person live? And could the relentless rise in real estate prices push the middle class to extinction?
“A lot of people are hanging on by the skin of their teeth,” said Cheryl King, an acting coach who lives and works in a combined apartment and performance space that she rents out for screenings, video shoots and workshops to help offset her own high rent.
“My niece just bought a home in Atlanta for $85,000,” she said. “I almost spend that on rent and utilities in a year. To them, making $250,000 a year is wealthy. To us, it’s maybe the upper edge of middle class.”
“It’s horrifying,” she added.
Her horror, of course, is Manhattan’s high cost of living, which has for decades shocked transplants from Kansas and elsewhere, and threatened natives with the specter of an economic apocalypse that will empty the city of all but a few hardy plutocrats.
And yet the middle class stubbornly hangs on, trading economic pain for the emotional gain of hot restaurants, the High Line and the feeling of being in the center of everything. The price tag for life’s basic necessities — everything from milk to haircuts to Lipitor to electricity, and especially housing — is more than twice the national average.
“It’s overwhelmingly housing — that’s the big distortion relative to other places,” said Frank Braconi, the chief economist in the New York City comptroller’s office. “Virtually everything costs more, but not to the degree that housing does.”
The average Manhattan apartment, at $3,973 a month, costs almost $2,800 more than the average rental nationwide. The average sale price of a home in Manhattan last year was $1.46 million, according to a recent Douglas Elliman report, while the average sale price for a new home in the United States was just under $230,000. The middle class makes up a smaller proportion of the population in New York than elsewhere in the nation. New Yorkers also live in a notably unequal place. Household incomes in Manhattan are about as evenly distributed as they are in Bolivia or Sierra Leone — the wealthiest fifth of Manhattanites make 40 times more than the lowest fifth, according to 2010 census data.
Ask people around the country, “Are you middle class?” and the answer is likely to be yes. But ask the same question in Manhattan, and people often pause in confusion, unsure exactly what you mean.
There is no single, formal definition of class status in this country. Statisticians and demographers all use slightly different methods to divvy up the great American whole into quintiles and median ranges. Complicating things, most people like to think of themselves as middle class. It feels good, after all, and more egalitarian than proclaiming yourself to be rich or poor. A $70,000 annual income is middle class for a family of four, according to the median response in a recent Pew Research Center survey, and yet people at a wide range of income levels, including those making less than $30,000 and more than $100,000 a year, said they, too, belonged to the middle.
“You could still go into a bar in Manhattan and virtually everyone will tell you they’re middle class,” said Daniel J. Walkowitz, an urban historian at New York University. “Housing has always been one of the ways the middle class has defined itself, by the ability to own your own home. But in New York, you didn’t have to own.”
There is no stigma, he said, to renting a place you can afford only because it is rent-regulated; such a situation is even considered enviable.
Without the clear badge of middle-class membership — a home mortgage — it is hard to say where a person fits on the class continuum. So let’s consider the definition of “middle class” through five different lenses.
by Amy O'Leary, NY Times | Read more:
Photo: Piotr Redlinski
World Rankings
We're No.1 !! Well, actually, more like 12,513,988 (out of 30 million). I have no idea what these statistics mean. But, like in high school, now I can say I at least graduated in the top half of my class (and am apparently worth $652 to somebody). Next year's goal: 12,513,987! Go Team!
WebstatsDomain
~ markk
Friday, January 18, 2013
How Good Does Karaoke Have to Be to Qualify as Art?
But what if a musical revolution wasn’t in grunge, or hip-hop, or rock ’n’ roll? What if it was in karaoke? Is it possible that one of the most exciting music scenes in America is happening right now in Portland, and it doesn’t feature a single person playing an actual instrument?
You may recall when you were younger that many nights achieved, for perhaps an hour or two, a state of euphoria so all-consuming that the next morning you could only describe the nights as “massive” or “epic.” Adventures were had. Astonishing things were seen. Maybe you stole a Coke machine, whatever. You would toss off these words — massive, epic — casually at brunch, annoying the middle-aged people sitting nearby who were grimly aware that even as those nights become few and far between, the price you pay afterward in hangovers and regrets is significantly greater. (If you are younger, you may be in the middle of a massive night right now, in which case you should stop reading this article. Put down your phone and go to it! This might be the last one.)
For me, those few such nights I get anymore revolve around karaoke. Something about the openness required to sing in public — and the vulnerability it makes me feel — allows me to cut loose in an un-self-conscious way. It’s hard, anymore, to lose myself in the moment. Karaoke lets me do that.
But I recently moved to Arlington, Va., with two children, and so I rarely go out at night to sing (or do anything). We have friends in Arlington, but not the kind of friends we had in New York — not yet. I sing whenever I can on business trips, with friends I browbeat into renting rooms at trusty karaoke spots like BINY or Second on Second. But for quite some time, I’d been reading Facebook status updates and tweets from acquaintances in Portland that suggested the city was some kind of karaoke paradise — a place in which you could sing every night in a different bar, and where the song choices were so outlandishly awesome that you might never run out of songs to sing.
My mission in Portland was to see if this could possibly be true. Portland does have dozens of karaoke bars, and over the course of six nights we did our best to visit them all. I sang Lee Ann Womack in a honky-tonk in far southeast Portland, Kanye West in a comedy club and INXS in a Chinese restaurant. I watched Emilie, my seven-months-pregnant sister-in-law, sing Melanie’s “Brand New Key” onstage at Stripparaoke night at the Devils Point, a teensy, low-ceilinged club on a triangular lot well outside Portland’s downtown, while a topless dancer worked the pole next to her. Afterward, the dancer — whose bare stomach featured a tattoo of a vividly horrible shark and the word REDRUM — gave Emilie a sweet hug.
And one night, I went with Emilie, her husband and my wife to the Alibi Tiki Lounge, which advertises itself as Portland’s “Original Tiki Bar.” Inside, the crowd seemed at first to be the familiar karaoke mix of wannabes and birthday celebrators you might find in any bar in any city. Someone sang “Sweet Caroline” almost as soon as we walked in. A drunken birthday girl couldn’t handle the Ting Tings song she’d chosen, so the K.J. switched midtrack to Rebecca Black’s “Friday,” which was more her speed.
But an hour in, a goofily dressed group gave an impressively committed performance of a Tenacious D song, one of them growling and snorting like Satan so enthusiastically that several audience members in the front row became visibly uncomfortable.
When they were done, I walked back to their table, where I sat down next to a guy with long straight hair and a top hat. “We’re all musicians,” the guy, Gregory Mulkern, said. He himself is a professional banjo player. “But we really love karaoke because you don’t actually have to care at all.”
“Karaoke in Portland is just different from other places,” said his friend Bruce Morrison. “There’s a lot of showmanship.”
Mulkern swept his long hair over his shoulders and put his top hat back on. “People in Portland,” he declared, “are sillier than in other places.”
In the corner of the booth, a woman with dark-rimmed eyes and black lipstick leaned forward suddenly and took my pen from my hand. She wrote a phone number in my notepad. “Do you know,” she asked, staring intently into my eyes, “about puppet karaoke?”
by Dan Kois, NY Times | Read more:
Photo: Shawn Records
Obscurity: A Better Way to Think About 'Privacy'
Facebook's announcement of its new Graph search tool on Tuesday set off yet another round of rapid-fire analysis about whether Facebook is properly handling its users' privacy. Unfortunately, most of the rapid-fire analysts haven't framed the story properly. Yes, Zuckerberg appears to be respecting our current privacy settings. And, yes, there just might be more stalking ahead. Neither framing device, however, is adequate. If we rely too much on them, we'll miss the core problem: the more accessible our Facebook information becomes, the less obscurity protects our interests.
While many debates over technology and privacy concern obscurity, the term rarely gets used. This is unfortunate, as "privacy" is an over-extended concept. It grabs our attention easily, but is hard to pin down. Sometimes, people talk about privacy when they are worried about confidentiality. Other times they evoke privacy to discuss issues associated with corporate access to personal information. Fortunately, obscurity has a narrower purview.
Obscurity is the idea that when information is hard to obtain or understand, it is, to some degree, safe. Safety, here, doesn't mean inaccessible. Competent and determined data hunters armed with the right tools can always find a way to get it. Less committed folks, however, experience great effort as a deterrent.
Online, obscurity is created through a combination of factors. Being invisible to search engines increases obscurity. So does using privacy settings and pseudonyms. Disclosing information in coded ways that only a limited audience will grasp enhances obscurity, too. Since few online disclosures are truly confidential or highly publicized, the lion's share of communication on the social web falls along the expansive continuum of obscurity: a range that runs from completely hidden to totally obvious.
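[ed. The search-engine-invisibility factor above is concrete enough to sketch. A site owner can opt out of crawling with a robots.txt file at the site root — the directives below follow the robots exclusion convention, and the paths are purely illustrative:]

```
# robots.txt: ask all well-behaved crawlers to skip the entire site
User-agent: *
Disallow: /

# or, to obscure only one section while leaving the rest indexable:
# User-agent: *
# Disallow: /private/
```

Compliance with robots.txt is voluntary, which is exactly the authors' distinction between obscurity and secrecy: well-behaved crawlers stay out, but a determined data hunter can still fetch the pages directly.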
Legal debates surrounding obscurity can be traced back at least to U.S. Department of Justice v. Reporters Committee for Freedom of the Press (1989). In this decision, the United States Supreme Court recognized a privacy interest in the "practical obscurity" of information that was technically available to the public, but could only be found by spending a burdensome and unrealistic amount of time and effort in obtaining it. Since this decision, discussion of obscurity in the case law remains sparse. Consequently, the concept remains under-theorized as courts continue their seemingly Sisyphean struggle with finding meaning in the concept of privacy.
Many contemporary privacy disputes are probably better classified as concern over losing obscurity. Consider the recent debate over whether a newspaper violated the privacy rights of gun owners by publishing a map comprised of information gleaned from public records. The situation left many scratching their heads. After all, how can public records be considered private? What obscurity draws our attention to is that while the records were accessible to any member of the public prior to the rise of big data, more effort was required to obtain, aggregate, and publish them. In that prior context, technological constraints implicitly protected privacy interests. Now, in an attempt to keep pace with diminishing structural barriers, New York is considering excepting gun owners from "public records laws that normally allow newspapers or private citizens access to certain information the government collects."
The obscurity of public records and other legally available information is at issue in recent disputes over publishing mug shots and homeowner defaults. Likewise, claims for "privacy in public," as occur in discussion over license-plate readers, GPS trackers, and facial recognition technologies, are often pleas for obscurity that get either miscommunicated or misinterpreted as insistence that one's public interactions should remain secret.
by Woodrow Hartzog and Evan Selinger, The Atlantic | Read more:
Photo: tajai/Flickr
Thursday, January 17, 2013
Speaking for My Tribe
I’ve been thinking of writing some version of this post since the days immediately after the Newtown shootings. It overlaps with but is distinct from the division between people who are pro-gun or anti-gun or pro-gun control or anti-gun control. Before you even get to these political positions, you start with a more basic difference of identity and experience: gun people and non-gun people.
So let me introduce myself. I’m a non-gun person. And I think I’m speaking for a lot of people.
It’s customary and very understandable that people often introduce themselves in the gun debate by saying, ‘Let me be clear: I’m a gun owner.’
Well, I want to be part of this debate too. I’m not a gun owner and, as I think is the case for the more than half the people in the country who also aren’t gun owners, that means that for me guns are alien. And I have my own set of rights not to have gun culture run roughshod over me.
I don’t have any problem with people using guns to hunt. And I don’t have any problem with people having guns in their home for protection or because it’s a fun hobby. At least, I recognize that gun ownership is deeply embedded in American culture. That means not only do I not believe there’s any possibility of changing it but that I don’t need or want to change it. This is part of our culture. These folks are Americans as much as I am and as long as we can all live together safely I don’t need to or want to dictate how they live.
I’ve never owned a gun. I’ve never shot a gun. (I’m not including the bb guns I shot a few times as a kid.) Once about ten years ago, my friend John Judis and I were talking and decided it would probably be educational for us as reporters and just fun to go to a firing range and do some shooting. For whatever reason it never happened.
I also have a random and kind of scary experience from childhood. I’m probably 4 or maybe 5 years old. We’re visiting someone’s house in St. Louis, where we lived at the time. I’m off in some part of the house, away from the parents, playing with the little girl my age in the family. And I see a gun. It looks like a rifle or shotgun (I was too young to know which). I pick it up, aim at the little girl and jokingly go ‘pow!’. And when I say ‘go pow!’ I mean I said ‘pow!’
But that’s when things got weird. Basically all the blood ran out of this little girl’s face at once, which was totally weird to me. And she said, in something like shock, ‘That’s a real gun.’
Now, this is just an artefact of my memory. I can’t remember precisely what she said — particularly whether the gun was loaded or whether she thought it was. But her reaction made it very clear that it could have been.
The point, though, is that it was totally outside of my experience that a gun I might find in someone’s house might be a real — possibly loaded — firearm as opposed to a toy. The fact that I didn’t pull the trigger when I said ‘pow!’ was just dumb luck.
Again, I’m not a gun person. My parents weren’t.
Needless to say, this experience made an impression on me. And it sticks out as one of relatively few early childhood memories going on 40 years later. How would my life have been different had I pulled the trigger? I pointed the gun basically point blank at the little girl’s face. (Now, why the hell did I do that? No idea. Little boys are idiots. But it sounds a lot less dramatic if you think it’s a toy.) I’d have been a murderer at age 4 or 5. I’d have been stigmatized and traumatized for the rest of my life. I’d probably have spent at least some time in the juvenile justice system, if only to adjudicate the fact that it was an accident. Of course, the girl’s life would have been snuffed out before first grade.
Now, that’s a pretty heavy story. And you’re probably thinking, wow, no wonder Josh isn’t a gun guy. He was totally traumatized by a near-miss horrific incident as a child.
I don’t think that’s it though. It’s something I only think about once in a blue moon and obviously nothing actually happened, though I can’t deny that it’s part of my life experience.
More than this, I come from a culture where guns are not so much feared as alien, as I said. I don’t own one. I don’t think many people I know have one. It would scare me to have one in my home for a lot of reasons. Not least because I have two wonderful beyond belief little boys, and accidents happen, and I know that firearms in the home are most likely to kill their owners or their families. People have accidents. They get depressed. They get angry.
In the current rhetorical climate people seem not to want to say: I think guns are kind of scary and don’t want to be around them. Yes, plenty of people have them and use them safely. And I have no problem with that. But remember, handguns especially are designed to kill people. You may want to use it to threaten or deter. You may use it to kill people who should be killed (i.e., in self-defense). But handguns are designed to kill people. They’re not designed to hunt. You may use it to shoot at the range. But they’re designed to kill people quickly and efficiently.
That frightens me. I don’t want to have those in my home. I don’t particularly want to be around people who are carrying. Cops, I don’t mind. They’re trained, under an organized system and supposed to use them for a specific purpose. But do I want to have people carrying firearms out and about where I live my life — at the store, the restaurant, at my kid’s playground? No, the whole idea is alien and frankly scary. Because remember, guns are extremely efficient tools for killing people and people get weird and do stupid things.
by Josh Marshall, TPM | Read more:
Photo: Jeb Harris
Edge and the Art Collector
In 1999, he bought Munch’s Madonna for $11 million. In 2004, he bought Hirst’s The Physical Impossibility of Death in the Mind of Someone Living for $8 million. In 2006, he bought a Pollock for $52 million. In 2006, he bought de Kooning’s Woman III for $137 million. In 2007, he bought Warhol’s Turquoise Marilyn for $80 million. In 2010, he bought a Johns Flag for $110 million. There have been works by Bacon and Richter and Picasso and Koons. Probably only he knows how much he has spent. Someone on the internet estimates it at $700 million.
It was 1997, or maybe 1998, when I first heard of the Art Collector. I was working as a research analyst at a large New York investment bank. The broker at our firm whose responsibility was serving the Art Collector told a gathering of the bank’s research analysts that the Art Collector’s hedge fund was now one of the top payers of brokerage commissions to the bank. He may have said the top payer. It was an awakening: a hedge fund, five or six years old at the time, could now pay as much—or more—in commissions than the mutual fund giants that had always been our most important clients. The math, however, was straightforward. The mutual funds had a lot more money. The Art Collector traded many more times.
Even if I had heard of the Art Collector’s hedge fund before then, I still would have been surprised by the salesman’s purpose that day. The Art Collector’s firm, we were told, would happily continue to generate huge revenue for the bank. It was asking for only one thing in return. It was not for us to do better research on companies or their stocks, or to do the research more quickly, or in greater quantities. That would be too literal. Or figurative. The Art Collector wanted something more abstract: not better information, just early information. When we interpreted an event in the life of a company, we distributed a note to all of the bank’s clients. We also called the more important ones to provide context difficult to convey in bullet points. We had to call one client first. Why shouldn’t it be the one that paid us the most?
It was a startling request. The Art Collector wasn’t that interested in what we thought about companies or industries, competitive advantages or long-term growth. No, the Art Collector’s trading strategy was based on the thesis that one could make money trading stocks by anticipating whether Wall Street’s equity research analysts, collectively, were going to increase or decrease their estimates of how much a company was going to make the next quarter. The Art Collector didn’t invent the estimate revisions strategy. But the Art Collector had figured out that even if one worked tirelessly to discover the patterns of analysts’ opinions (or of the companies themselves), one still had no fundamental edge over other smart traders doing the same thing. What one could do—brazenly, unprecedentedly—was to pay the banks as much as—or more than—any other client to get information first. This would potentially allow the Art Collector’s traders to hear some nuance from the analysts or the broker that would move a stock a sixteenth or two before the information was better propagated. This was not a restaurant’s biggest customer demanding a better table. This was a restaurant’s biggest customer demanding that other patrons get worse food.
I still can’t understand why the quid pro quo did not generate more outrage. (I never implemented it, nor was I asked to. I was young, without influence, and soon would leave the firm.) Equity research departments were not as regulated as they later would be. There was no clear understanding that research analysts’ opinions were public information—something that needed to be told to the entire public simultaneously. After all, research analysts did not have access to inside information (except, alas, when they did). They were just citizens with private opinions on stocks. If my dentist wanted to tell his barber to buy more shares of McDonald’s because he really liked Quarter Pounders, that was his private opinion too.
Over the next fifteen years or so, as hedge funds became larger and more tentacular and more important, Wall Street would learn, the hard way, the differences between hedge fund managers like the Art Collector who took 20 percent of the profits and the old-school mutual fund managers who worked for 0.75 percent fixed fees. It would get used to the spectacular velocity of trading like his, which has nothing to do with corporate capital formation or capital allocation or all the reasons we claim we believe in the market. But at the time, in the Art Collector’s strategy, we saw something slightly unseemly rather than illegal, blackjack instead of chess. Maybe that is because it was as interesting as it was new. The Art Collector was trying to corner the market on an edge.
The Art Collector, Steven Cohen, is in the news a lot lately. Prosecutors have accused seven former employees of his firm, SAC Capital, of insider trading. Three have pled guilty. Six others have been accused of insider trading while at other firms. The Times reports that more subpoenas are out. For a lot of people, this is all quite fun. Wall Street’s schadenfreude is as limitless as its greed.
by Gary Sernovitz, N+1 | Read more:
“Stranger Being Eaten by Hirst’s Shark” Copyright (c) 2008 by Anthony Easton
A Modest Proposal
Can we all just finally admit that wine people are in desperate need of a reality check on Bordeaux? The sooner we do, the better off we will all be. Even Bordeaux itself — the entire region and its thousands of wine producers, not just the First Growths — will be better off. By focusing so much on the top end, Bordeaux has become almost entirely irrelevant to two generations of wine drinkers.
The Bordeaux backlash began to gain steam during all the hyperbolic critical attention for the 2009 vintage, and its record-setting prices. New York Times wine critic Eric Asimov wrote that “for a significant segment of the wine-drinking population in the United States, the raves heard around the world were not enough to elicit a response beyond, perhaps, a yawn.”
A few months later, the Wall Street Journal’s wine critic (and occasionally famous novelist) Jay McInerney bluntly asked: “Does Bordeaux still matter?” McInerney recounted boos at a fine wine auction when an offering of Bordeaux was announced. “For wine buffs with an indie sensibility,” he wrote, “Bordeaux is the equivalent of the Hollywood blockbuster, more about money than about art.” As sort of a hedge, he added: “Bordeaux bashing has become a new form of wine snobbery.”
A year later, the Journal’s other wine critic, Lettie Teague, wrote about how wine drinkers “shy away from Bordeaux, dismissing it as too expensive, too old-fashioned, too intimidating or simply too dull.”
Top sommeliers have weighed in, too. At Terroir, Paul Greico’s trend-setting New York wine bar, it is often noted that, despite over 50 wines by the glass, there is not one from Bordeaux. But perhaps the most damning rebuke of Bordeaux came last summer, from Pontus Elofsson, the sommelier at the cutting-edge Copenhagen restaurant Noma, voted “best restaurant in the world” three years running. Elofsson steadfastly refuses to carry Bordeaux on Noma’s wine list.
So it was with all this venom as backdrop that I made my first visit to Bordeaux last spring.
My friend was right. Even for someone who writes about wine, Bordeaux is totally intimidating. It hit me when I found myself sitting uneasily in the tasting parlor of Château La Mission Haut-Brion in the company of Prince Robert de Luxembourg, the chateau’s royal managing director.
Prince Robert told me that the big-time critics like Parker and James Suckling had visited here the week before. During our chit-chat, I mentioned this was my first trip to Bordeaux, and the Prince guffawed, incredulously. “Never been to Bordeaux? And you write about wine?”
“Um, well…yeah?” I said, backpedaling. “I guess I’ve just spent most of my time in places like Italy and Spain and Portugal. And other parts of France? I don’t know. Italy I guess is where most of my wine knowledge has come from.”
“Oh,” said the Prince, in a grand princely fashion, “so you are an expert in Italian wines? Ha. Well, we have an Italian wine expert here!” I haven’t felt so foolish since middle school when I forgot to wear shorts to a basketball game, and pulled down my sweatpants to reveal my tighty-whities to the crowd. The message from Prince Robert seemed to be: How the hell did you get an appointment to taste wines with me?
I looked around at the regal tasting room, with the heavy wood furniture and the bust of someone famous, and the high-seated chairs where the important wine critics swirl and spit and opine and move cases of thousand-dollar wine. And I decided to jump right in with a question that may have been impolite: “A lot of wine writers and sommeliers back in the States say that Bordeaux isn’t really relevant anymore. What do you say to those people?”
“The fact is,” said Prince Robert, “that people need to write about something. And Bordeaux is obviously so relevant that they need to write something about Bordeaux. It’s the tall poppy syndrome.”
Prince Robert clearly had answered this question many times before. “I would ask other winemakers around the world and they will tell you that Bordeaux would be the benchmark by which to judge all other wines,” he said. “There are no wines in the world that receive more excitement.”
“But wait,” I said. “Aren’t you worried that younger people aren’t drinking Bordeaux? That it’s not even on their radar? Aren’t you afraid that when this generation can finally afford your wines, they won’t care about them?”
“Yes, the young wine drinker likes the simplicity of New World wines. Wines that are easy to explain,” he said, and I’m not sure I can properly convey just how much contempt dripped from the Prince’s voice. “Anyway, I am confident that people will come back to the great wines of Bordeaux.”
“There has never been more demand for the top-end wines,” he added. This may be true, but we all know that the market is now being driven, in large part, by newer collectors in Asia. One might reasonably hypothesize that tastes will eventually change in China and India, too, just as they have in the United States in the decades since 1982, when Americans “discovered” Bordeaux (via Robert Parker). Surely by now there is a Chinese Robert Parker? And won’t there be, in the not-so-distant future, a backlash against Bordeaux by young, tattooed, hipster Chinese sommeliers?
I didn’t get to ask these questions because, apparently, our conversation bored the Prince. He rose from his chair, bid me adieu and wished me a good first trip to Bordeaux. “Enjoy those Italian wines,” he said, with a smile and a wink.
I was then left to taste nine wines from the 2011 vintage with the public relations person. How were the wines? Amazing. No doubt about it. The flagship first label wine was more complex and dense and rich than just about anything else I’ve ever tasted. But at what price? Château Haut-Brion 2009 has been listed at $1,000 a bottle. I tasted the only ounce of the 2011 that I will likely ever taste, one ounce more than most of my friends and readers will likely ever taste. Will my description inspire you to drink Bordeaux? I mean, one of my friends drove a Ferrari once and another once had sex with an underwear model, but neither of their descriptions has exactly brought me closer to the same experience.
by Jason Wilson, The Smart Set | Read more:
Photo: uncredited
When Pills Fail, There Are Other Options
The treatment may sound appalling, but it works.
Transplanting feces from a healthy person into the gut of one who is sick can quickly cure severe intestinal infections caused by a dangerous type of bacteria that antibiotics often cannot control.
A new study finds that such transplants cured 15 of 16 people who had recurring infections with Clostridium difficile bacteria, whereas antibiotics cured only 3 of 13 and 4 of 13 patients in two comparison groups. The treatment appears to work by restoring the gut’s normal balance of bacteria, which fight off C. difficile.
The study is the first to compare the transplants with standard antibiotic therapy. The research, conducted in the Netherlands, was published Wednesday in The New England Journal of Medicine.
Fecal transplants have been used sporadically for years as a last resort to fight this stubborn and debilitating infection, which kills 14,000 people a year in the United States. The infection is usually caused by antibiotics, which can predispose people to C. difficile by killing normal gut bacteria. If patients are then exposed to C. difficile, which is common in many hospitals, it can take hold.
The usual treatment involves more antibiotics, but about 20 percent of patients relapse, and many of them suffer repeated attacks, with severe diarrhea, vomiting and fever.
Researchers say that, worldwide, about 500 people with the infection have had fecal transplantation. It involves diluting stool with a liquid, like salt water, and then pumping it into the intestinal tract via an enema, a colonoscope or a tube run through the nose into the stomach or small intestine.
Stool can contain hundreds or even thousands of types of bacteria, and researchers do not yet know which ones have the curative powers. So for now, feces must be used pretty much intact.
Medical journals have reported high success rates and seemingly miraculous cures in patients who have suffered for months. But until now there was room for doubt, because no controlled experiments had compared the outlandish-sounding remedy with other treatments.
The new research is the first to provide the type of evidence that skeptics have demanded, and proponents say they hope the results will help bring fecal transplants into the medical mainstream, because for some patients nothing else works.
“Those of us who do fecal transplant know how effective it is,” said Dr. Colleen R. Kelly, a gastroenterologist with the Women’s Medicine Collaborative in Providence, R.I., who was not part of the Dutch study. “The tricky part has been convincing everybody else.” (...)
Dr. Keller said that patients were so eager to receive transplants that they would not join the study unless the researchers promised that those assigned to antibiotics alone would get transplants later if the drugs failed.
Among the 16 who received transplants, 13 were cured after the first infusion. The other three were given repeat infusions from different donors, and two were also cured. In the two groups of patients who did not receive transplants, only 7 of 26 were cured.
Of the patients who did not receive transplants at first and who relapsed after receiving antibiotics only, 18 were subsequently given transplants, and 15 were cured.
The study was originally meant to include more patients, but it had to be cut short because the antibiotic groups were faring so poorly compared with the transplant patients that it was considered unethical to continue.
by Denise Grady, NY Times | Read more:
Gretchen Ertl for The New York Times