Thursday, May 17, 2012
Doubt Cast on the ‘Good’ in ‘Good Cholesterol’
The name alone sounds so encouraging: HDL, the “good cholesterol.” The more of it in your blood, the lower your risk of heart disease. So bringing up HDL levels has got to be good for health.
Or so the theory went.
Now, a new study that makes use of powerful databases of genetic information has found that raising HDL levels may not make any difference to heart disease risk. People who inherit genes that give them naturally higher HDL levels throughout life have no less heart disease than those who inherit genes that give them slightly lower levels. If HDL were protective, those with genes causing higher levels should have had less heart disease.
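The reasoning here is what epidemiologists call Mendelian randomization: because genotypes are dealt out essentially at random at conception, gene variants that raise HDL act like a natural randomized trial of HDL itself. A minimal simulation sketch of that logic follows — every number in it (variant frequency, HDL shift, baseline risk) is invented for illustration, and it models only the inference, not the Lancet study's actual data:

```python
import random

random.seed(0)

def simulate(hdl_is_causal, n=200_000):
    """Compare heart-disease rates between carriers and non-carriers
    of a hypothetical HDL-raising gene variant. All parameters are
    invented for illustration; only the inference logic is the point."""
    cases = {"carrier": 0, "non-carrier": 0}
    totals = {"carrier": 0, "non-carrier": 0}
    for _ in range(n):
        carrier = random.random() < 0.30             # genotype assigned "at random"
        hdl = 55 + (5 if carrier else 0) + random.gauss(0, 10)
        risk = 0.05                                   # baseline risk
        if hdl_is_causal:
            risk -= 0.002 * (hdl - 55)                # protective effect, if real
        risk = max(risk, 0.0)
        group = "carrier" if carrier else "non-carrier"
        totals[group] += 1
        if random.random() < risk:
            cases[group] += 1
    return {g: round(cases[g] / totals[g], 4) for g in cases}

print(simulate(hdl_is_causal=True))   # carriers show visibly less disease
print(simulate(hdl_is_causal=False))  # rates match: the pattern the study found
```

Under the causal assumption, the carrier group shows measurably less disease; the study's genetic data looked like the second case, where genotype-driven HDL differences leave disease rates untouched.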
Researchers not associated with the study, published online Wednesday in The Lancet, found the results compelling and disturbing. Companies are actively developing and testing drugs that raise HDL, although three recent studies of such treatments have failed. And patients with low HDL levels are often told to try to raise them by exercising or dieting or even by taking niacin, which raised HDL but failed to lower heart disease risk in a recent clinical trial.
“I’d say the HDL hypothesis is on the ropes right now,” said Dr. James A. de Lemos, a professor at the University of Texas Southwestern Medical Center, who was not involved in the study.
Dr. Michael Lauer, director of the division of cardiovascular sciences at the National Heart, Lung and Blood Institute, agreed.
“The current study tells us that when it comes to HDL we should seriously consider going back to the drawing board, in this case meaning back to the laboratory,” said Dr. Lauer, who also was not connected to the research. “We need to encourage basic laboratory scientists to figure out where HDL fits in the puzzle — just what exactly is it a marker for.”
But Dr. Steven Nissen, chairman of cardiovascular medicine at the Cleveland Clinic, who is helping conduct studies of HDL-raising drugs, said he remained hopeful. HDL is complex, he said, and it is possible that some types of HDL molecules might in fact protect against heart disease.
“I am an optimist,” Dr. Nissen said.
by Gina Kolata, NY Times | Read more:
Illustration: via Wikipedia
Wednesday, May 16, 2012
Word for Word
It has become something of a literary cliché to bash the thesaurus, or at the very least, to warn fellow writers that it is a book best left alone. Some admonitions might be blunt, others wistful, as with Billy Collins musing on his rarely opened thesaurus. But beyond the romantic anthropomorphizing of words needing to break free from “the warehouse of Roget,” what of Collins’ more pointed criticism, that “there is no/such thing as a synonym”? That would suggest that the whole enterprise of constructing a thesaurus is predicated on a fiction.
It is only a fiction if one holds fast to the notion that synonyms must be exactly equivalent in their meaning, usage, and connotation. Of course, under this strict view, there will never be any “perfect” synonyms. No word does exactly the job of another. In the words of the linguist Roy Harris, “If we believe there are instances where two expressions cannot be differentiated in respect of meaning, we must be deceiving ourselves.”
But the synonyms that we find gathered together in a thesaurus are typically more like siblings that share a striking resemblance. “Brotherly” and “fraternal,” for instance. Or “sisterly” and “sororal.” They may correspond well enough in meaning, but that should not imply that one can always be substituted for another. Consulting a thesaurus to find these closely related sets of words is only the first step for a writer looking for le mot juste: the peculiar individuality of each would-be synonym must then be carefully judged. Mark Twain knew the perils of relying on the family resemblance of words: “Use the right word,” he wrote, “not its second cousin.”
No matter how tempting the metaphor, though, words are not people. We cannot run genetic tests on them to determine their degrees of kinship, and a thesaurus is not a pedigree chart. We can, nonetheless, look to it as a guidebook to help us travel around the semantic space of our shared lexicon, grasping both the similarities that bond words together and the nuances that differentiate them.
This was, in fact, more or less the mission of Peter Mark Roget when he published the first edition of his Thesaurus of English Words and Phrases in the spring of 1852. He organized sets of synonyms according to one thousand categories, neatly arrayed in a two-column format. Roget was utterly obsessive about making lists, keeping a notebook full of them as early as eight years old, and by age twenty-six he had compiled a hundred-page draft of what would become his greatest work. List making was a welcome relief from his chronic depression and tumultuous family life; it was a way of imposing order on a messy reality. In his autobiography, he would not bring himself to explore his personal troubles; instead he dispassionately noted places he visited, moving days, birthdays, and death days. He called it “List of Principal Events.”
In his biography of Roget, The Man Who Made Lists, Joshua Kendall argues that Roget created a “paracosm,” or alternate universe, in the orderly lists of words he began making in childhood: “both a replica of the real world as well as a private, imaginary world.” The thesaurus that would grow out of the lists was even more hyperorderly. The unruliness of language—and the world of concepts that words denote—could be tamed in his pages. When he discovered that he actually had 1,002 concepts listed instead of his planned 1,000, he simply condensed two entries to achieve his round number: “Absence of Intellect” became 450a and “Indiscrimination,” 465a.
Roget’s thesaurus was crucially a conceptual undertaking, and, according to Roget’s deeply held religious beliefs, a tribute to God’s work. His efforts to create order out of linguistic chaos hark back to the story of Adam in the Garden of Eden, who was charged with naming all that was around him, thereby creating a perfectly transparent language. It was, according to the theology of St. Augustine, a language that would lose its perfection with the Fall of Man, and then irreparably shatter following construction of the Tower of Babel. By Roget’s time, Enlightenment ideals had taken hold, suggesting that scientific pursuits and rational inquiry could discover antidotes to Babel, if not a return to the perfect language of Adam. Though we no longer cling so tightly to these Enlightenment notions about language in our postmodern age, we still carry with us Roget’s legacy, the view that language can somehow be wrangled and rationalized by fitting the lexicon into tidy conceptual categories.
Roget intended for his readers to immerse themselves in the orderly classification system of the thesaurus so that they might better understand the full possibilities for human expression. As Roget first conceived it, the book did not even have an alphabetical index—he included it later as an afterthought. His goal, then, was not to provide a simple method of replacing synonym A with synonym B but instead to encourage a fuller understanding of the world of ideas and the language representing it.
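Roget's design choice maps onto a familiar data-structure distinction: the primary organization is conceptual, and the alphabetical index is a derived, inverted lookup bolted on afterward. Here is a minimal sketch of that arrangement — the category numbers echo the ones mentioned above, but the word lists are invented for illustration:

```python
# Primary structure: numbered conceptual categories mapping to word lists.
# Category numbers echo those cited in the text; entries are invented.
categories = {
    (450, "Intellect"): ["mind", "reason", "understanding"],
    (465, "Discrimination"): ["distinction", "nuance", "discernment"],
    (862, "Fastidiousness"): ["particularity", "nicety", "delicacy"],
}

# The alphabetical index Roget added later is a derived structure:
# an inverted index built entirely from the conceptual scheme.
index = {}
for (number, name), words in categories.items():
    for word in words:
        index.setdefault(word, []).append((number, name))

# Browsing by concept (Roget's intended use) vs. direct lookup by word:
print(categories[(465, "Discrimination")])  # wander within one idea
print(index["nuance"])                      # jump straight to [(465, 'Discrimination')]
```

The design point: the index can always be regenerated from the categories, never the other way around — which is why Roget could treat it as an afterthought.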
In England, the Thesaurus was widely praised upon publication. The Westminster Review lauded the work’s “ideal classification,” which meant that “the whole Thesaurus may be read through, and not prove dry reading either.” An international edition would eventually popularize his work in the United States as well, becoming a household item in the 1920s during the crossword craze. Eventually “Roget” would become synonymous with the thesaurus itself, even if many of the contemporary reference works that carry his name bear little resemblance to his careful classification system.
More than a century and a half later, the impact of Roget’s creation continues to reverberate in the proliferation of thesauruses, both in print and electronic varieties. Yet the thesaurus has also come under fire time and time again—what does it have to offer the modern writer?
by Ben Zimmer, Lapham's Quarterly | Read more:
Image: Wikipedia
Therapy
Doctor: Are you sexually active?
Me: Ha
Me: Hahahaha
Me: HAHAHAHAHAHAHAHAHAHA
Me: HAHA THAT'S A GOOD ONE.
Me: OH MY GOD WHAT IS AIR
Me: JESUS TAKE THE WHEEL OH MY GOD
Me: FORGET THAT, JESUS TAKE THE WHOLE GOD DAMN CAR
Me: Hahaha
Me: Haaa....
Me: Whooooooo, that was a good one.
Me: No, no I am not.
Adopt Me
“JITTERS”
Hi. I’m a young cocker-spaniel mix who’s still quite scared of people. Looking for a patient forever-home to help me come out of my shell. I was formerly owned by a young man with an anger-management disorder that was so serious he needed to be institutionalized. I don’t have a mean bone in my body, but I would be best off as the only pet in a quiet home without children. Until I’m back on anti-anxiety meds, I might need you to disable your doorbell. Please, I’m not a big fan of sudden movements. Kindness and positive reinforcements will go a long ways with me. I’m learning not to chew on electric cords when I get nervous. Here’s hoping you’ll be that special someone kind enough to continue applying gel to my rear-leg stitches twice a day for another month. I’m nearly seventy-five per cent capable of doing my business outside.
“SINBAD”
Talk about restless-leg syndrome! You’re not going to believe what a rambunctious free spirit I am. (They say I’m half Jack Russell terrier, half determined neighbor’s mutt.) See, once my original family started having babies, they had even less time to give me the attention I crave. Now I’m going to need all the patience you can muster to fix my unwanted behaviors. So don’t be too mad at me the first time I destroy a favorite leather shoe or couch. Hey!—I’m working on it! Won’t you commit to attending regular obedience classes with me? I’d really like to learn about these “boundary” things and what the heck “NO” means.
I’m a super-exuberant digger. This time, though, I probably shouldn’t be with any family that has once-loved pets laid to rest in shallow backyard graves. I’m definitely high energy and want nothing more than to chase anything that moves, especially squirrels and skunks. Hold onto my leash tight if a car speeds by! I can be somewhat vocal when playing, so a home and neighbors with some tolerance for fun barking is ideal. I do my share of hand mouthing but rarely break skin. SPOILER ALERT: I can be a bit of an escape artist if left alone for an instant.
by Bill Franzen, New Yorker | Read more:
Illustration by Ralph Steadman.
How Yahoo Killed Flickr and Lost the Internet
Web startups are made out of two things: people and code. The people make the code, and the code makes the people rich. Code is like a poem; it has to follow certain structural requirements, and yet out of that structure can come art. But code is art that does something. It is the assembly of something brand new from nothing but an idea.
This is the story of a wonderful idea. Something that had never been done before, a moment of change that shaped the Internet we know today. This is the story of Flickr. And how Yahoo bought it and murdered it and screwed itself out of relevance along the way.
Do you remember Flickr's tagline? It reads "almost certainly the best online photo management and sharing application in the world." It was an epic humble brag, a momentously tongue-in-cheek understatement.
Because until three years ago, of course Flickr was the best photo sharing service in the world. Nothing else could touch it. If you cared about digital photography, or wanted to share photos with friends, you were on Flickr.
Yet today, that tagline simply sounds like delusional posturing. The photo service that was once poised to take on the world has now become an afterthought. Want to share photos on the Web? That's what Facebook is for. Want to look at the pictures your friends are snapping on the go? Fire up Instagram.
Even the notion of Flickr as an archive—as the place where you store all your photos as a backup—is becoming increasingly quaint as Dropbox, Microsoft, Google, Box.net, Amazon, Apple, and a host of others scramble to serve online gigs to our hungry desktops.
The site that once had the best social tools, the most vibrant userbase, and toppest-notch storage is rapidly passing into the irrelevance of abandonment. Its once bustling community now feels like an exurban neighborhood rocked by a housing crisis. Yards gone to seed. Rusting bikes in the front yard. Tattered flags. At address, after address, after address, no one is home.
It is a case study of what can go wrong when a nimble, innovative startup gets gobbled up by a behemoth that doesn't share its values. What happened to Flickr? The same thing that happened to so many other nimble, innovative startups who sold out for dollars and bandwidth: Yahoo.
Here's how it all went bad.
by Matt Honan, Gizmodo | Read more:
Image: Shutterstock/Vince Clements
Tuesday, May 15, 2012
Why Anonymous ‘might well be the most powerful organization on Earth'
Terrorists to some, heroes to others, the jury is still out on Anonymous’s true nature. Known for its robust defence of Internet freedom – and the right to remain anonymous — Anonymous came in first place in Time Magazine’s 2012 online poll on the most influential person in the world.
Fox News, on the other hand, has branded the hackers “domestic terrorists,” a role Anonymous has been cast to play in the latest Call of Duty Black Ops II, in which Anonymous appears as the enemy who takes control of unmanned drones in the not-too-distant future. (That creative decision may have put Activision, the creator of the video-game series, at the top of the Anonymous hit list.) For its part, much of what Anonymous does and says about itself, in the far reaches of the Internet, cannot be verified. Nor do all Anons agree on who they are as a group, and where they are going.
Q: As strictly an online army of hackers, how powerful is Anonymous?
A: Anonymous is kind of like the big buff kid in school who had really bad self-esteem then all of a sudden one day he punched someone in the face and went, “Holy s— I’m really strong!” Scientology (one of Anonymous’s first targets) was the punch in the face where Anonymous began to realize how incredibly powerful they are. There’s a really good argument at this point that we might well be the most powerful organization on Earth. The entire world right now is run by information. Our entire world is being controlled and operated by tiny invisible 1s and 0s that are flashing through the air and flashing through the wires around us. So if that’s what controls our world, ask yourself who controls the 1s and the 0s? It’s the geeks and computer hackers of the world. (...)
Q: Do you think the general public is not concerned enough with online surveillance or real-life surveillance?
A: I think the general public is beginning to learn the value of information. To give an example, for a very long time nobody in the U.S. or the world was allowed to know the number of civilian casualties in Afghanistan or Iraq. There were wild guesses and they were all over the ballpark figures, until a young army private named Bradley Manning had the courage to steal that information from the U.S. government and release it. Now we know that despite their smart munitions and all their high-technology they have somehow managed to accidentally kill 150,000 civilians in two countries. … As these kinds of startling facts come out, the public will begin to realize the value of the information and they will realize that the activists are risking everything for that information to be public.
Q: What do you say to people who believe Anons are just cyber-terrorists?
A: Basically I decline the semantic argument. If you want to call me a terrorist, I have no problem with that. But I would ask you, “Who is it that’s terrified?” If it’s the bad guys who are terrified, I’m really super OK with that. If it’s the average person, the people out in the world we are trying to help who are scared of us, I’d ask them to educate themselves, to do some research on what it is we do and lose that fear. We’re fighting for the people, we are fighting, as Occupy likes to say, for the 99%. It’s the 1% people who are wrecking our planet who should be quite terrified. If to them we are terrorists, then they probably got that right.
“Information terrorist” – what a funny concept. That you could terrorize someone with information. But who’s terrorized? Is it the common people reading the newspaper and learning what their government is doing in their name? They’re not terrorized – they’re perfectly satisfied with that situation. It’s the people trying to hide these secrets, who are trying to hide these crimes. The funny thing is every email database that I’ve ever been a part of stealing, from Pres. Assad to Stratfor security, every email database, every single one has had crimes in it. Not one time that I’ve broken into a corporation or a government, and found their emails and thought, “Oh my God, these people are perfectly innocent people, I made a mistake.”
by Catherine Solyom, Post Media News | Read more:
Photo: Louisa Gouliamaki/AFP/Getty Images
Is Death Bad for You?
We all believe that death is bad. But why is death bad?
In thinking about this question, I am simply going to assume that the death of my body is the end of my existence as a person. (If you don't believe me, read the first nine chapters of my book.) But if death is my end, how can it be bad for me to die? After all, once I'm dead, I don't exist. If I don't exist, how can being dead be bad for me?
People sometimes respond that death isn't bad for the person who is dead. Death is bad for the survivors. But I don't think that can be central to what's bad about death. Compare two stories.
Story 1. Your friend is about to go on the spaceship that is leaving for 100 Earth years to explore a distant solar system. By the time the spaceship comes back, you will be long dead. Worse still, 20 minutes after the ship takes off, all radio contact between the Earth and the ship will be lost until its return. You're losing all contact with your closest friend.
Story 2. The spaceship takes off, and then 25 minutes into the flight, it explodes and everybody on board is killed instantly.
Story 2 is worse. But why? It can't be the separation, because we had that in Story 1. What's worse is that your friend has died. Admittedly, that is worse for you, too, since you care about your friend. But that upsets you because it is bad for her to have died. But how can it be true that death is bad for the person who dies?
In thinking about this question, it is important to be clear about what we're asking. In particular, we are not asking whether or how the process of dying can be bad. For I take it to be quite uncontroversial—and not at all puzzling—that the process of dying can be a painful one. But it needn't be. I might, after all, die peacefully in my sleep. Similarly, of course, the prospect of dying can be unpleasant. But that makes sense only if we consider death itself to be bad. Yet how can sheer nonexistence be bad?
Maybe nonexistence is bad for me, not in an intrinsic way, like pain, and not in an instrumental way, like unemployment leading to poverty, which in turn leads to pain and suffering, but in a comparative way—what economists call opportunity costs. Death is bad for me in the comparative sense, because when I'm dead I lack life—more particularly, the good things in life. That explanation of death's badness is known as the deprivation account.
Despite the overall plausibility of the deprivation account, though, it's not all smooth sailing. For one thing, if something is true, it seems as though there's got to be a time when it's true. Yet if death is bad for me, when is it bad for me? Not now. I'm not dead now. What about when I'm dead? But then, I won't exist. As the ancient Greek philosopher Epicurus wrote: "So death, the most terrifying of ills, is nothing to us, since so long as we exist, death is not with us; but when death comes, then we do not exist. It does not then concern either the living or the dead, since for the former it is not, and the latter are no more."
If death has no time at which it's bad for me, then maybe it's not bad for me. Or perhaps we should challenge the assumption that all facts are datable. Could there be some facts that aren't?
Suppose that on Monday I shoot John. I wound him with the bullet that comes out of my gun, but he bleeds slowly, and doesn't die until Wednesday. Meanwhile, on Tuesday, I have a heart attack and die. I killed John, but when? No answer seems satisfactory! So maybe there are undatable facts, and death's being bad for me is one of them.
Alternatively, if all facts can be dated, we need to say when death is bad for me. So perhaps we should just insist that death is bad for me when I'm dead. But that, of course, returns us to the earlier puzzle. How could death be bad for me when I don't exist? Isn't it true that something can be bad for you only if you exist? Call this idea the existence requirement.
Should we just reject the existence requirement? Admittedly, in typical cases—involving pain, blindness, losing your job, and so on—things are bad for you while you exist. But maybe sometimes you don't even need to exist for something to be bad for you. Arguably, the comparative bads of deprivation are like that.
by Shelly Kagan, The Chronicle Review | Read more:
Chronicle photo illustration by Scott Seymour; original image from Svensk Filmindustries
Small Cities Are Becoming a New Engine Of Economic Growth
The conventional wisdom is that the world’s largest cities are going to be the primary drivers of economic growth and innovation. Even slums, according to a fawning article in National Geographic, represent “examples of urban vitality, not blight.” In America, it is commonly maintained by pundits that “megaregions” anchored by dense urban cores will dominate the future.
Such conceits are, not surprisingly, popular among big city developers and the media in places like New York, which command the national debate by blaring the biggest horn. However, a less fevered analysis of recent trends suggests a very different reality: When it comes to growth, economic and demographic, opportunity increasingly is to be found in smaller, and often remote, places. (...)
[What] we see is a very different reality than that often promoted by big city boosters. Large, dense urban regions clearly possess some great advantages: hub airports, big labor markets, concentrations of hospitals, schools, cultural amenities and specific industrial expertise. Yet despite these advantages, they still lag in the job creation race to unheralded, smaller communities.
Why are the stronger smaller cities growing faster than most larger ones? The keys may lie in many mundane factors that are often too prosaic for urban theorists. They include things such as strong community institutions like churches and shorter commutes than can be had in New York, L.A., Boston or the Bay Area (except for those willing to pay sky-high prices to live in a box near downtown). Young families might be attracted to better schools in some areas — notably the Great Plains — and the access to natural amenities common in many of these smaller communities.
Perhaps another underappreciated factor is Americans’ overwhelming preference for a single-family home, particularly among young families. A recent survey from the National Association of Realtors found that 80 percent preferred a detached, single-family home; only a small sliver, roughly 7 percent, wanted to live in a dense urban area “close to it all.” Some 87 percent expressed a strong desire for greater privacy, something that generally comes with lower-density housing.
This trend towards smaller communities — unthinkable among big city planners and urban land speculators — is likely to continue for several reasons. For one thing, new telecommunications technology serves to even the playing field for companies in smaller cities. You can now operate a sophisticated global business from Fargo, N.D., or Shreveport, La., in ways inconceivable a decade or two ago.
Another key element is the predilections of two key expanding demographic groups: boomers and their offspring, the millennials. Aging boomers are not, in large part, hankering for dense city life, as is often asserted. If anything, if they choose to move, they tend toward less dense and even rural areas. Young families and many better-educated workers also seem to be moving generally to less dense and affordable places.
by Joel Kotkin, New Geography | Read more:
Photo: Glens Falls, NY
Europe’s Achilles heel
The respite in the euro crisis lasted a few short months. Now, despite a €130 billion ($169 billion) second bail-out for Greece, a fiscal compact agreed on by the euro-zone leaders in December, and €1 trillion of cheap long-term loans from the European Central Bank, the night terrors are back. How dispiriting that Europe is still so ill-prepared for the ordeal to come.
Time is short. In France voters have given their new president, François Hollande, a mandate to alter the “austere” course set by his ousted predecessor, Nicolas Sarkozy, and Angela Merkel, Germany’s chancellor, and to focus on growth. Mrs Merkel says she will not change the fiscal compact, but Mr Hollande needs something to show voters in legislative polls next month. More threatening is the second election looming in Greece, where parties are struggling to form a government. If a majority of Greeks again vote to reject the spending cuts and reforms that go with their country’s bail-out, then euro-zone governments—in particular, Germany’s—will face a drastic choice. Mrs Merkel will either accommodate Greece and swallow the moral hazard of rewarding defiance or, more likely, stand firm and cut the Greeks adrift (see article).
The idea of a chaotic Greek departure from the euro at a time of Franco-German disunion should terrify everyone it touches (the damage it would do the world economy may well be the biggest risk to Barack Obama’s chances of re-election, for instance). With so much at stake, the rest of the euro zone urgently needs to lower the risk that contagion from a Greek exit would infect Portugal, Ireland and even Spain and Italy. The worry is that, just at the moment when hardheaded realpolitik is needed, politics has fallen prey to self-delusion, with leaders in all the main countries peddling seductive half-truths that promise Europe’s citizens an easier way out.
Stories that people tell…
The euro zone needs to do a lot of hard things. Our list would include at the very least: in the short term, slower fiscal adjustment, more investment, looser monetary policy to promote growth and a thicker financial firewall to protect the weaklings on the periphery from contagion (all of which the Germans dislike); in the medium term, structural reforms to Europe’s rigid markets and outsize welfare states (not popular in southern Europe), coupled with a plan to mutualise at least some of the outstanding debt and to set up a Europe-wide bank-resolution mechanism (a tricky idea for everyone). It is an ambitious agenda, but earlier this year, with the Italians, Spanish and Greeks all making some hard choices and ECB money flushing through, the politics seemed possible.
Now they have lurched into dreamland.
by The Economist | Read more:
Illustration: Jon Berkeley
Technology in America
Why are Americans addicted to technology? The question has a distinctly contemporary ring, and we might be tempted to think it could only have been articulated within the last decade or two. Could we, after all, have known anything about technology addiction before the advent of the Blackberry? Well, as it turns out, Americans have a longstanding fascination and facility with technology, and the question of technology addiction was one of the many Alexis de Tocqueville thought to answer in his classic study of antebellum American society, Democracy in America.
To be precise, Tocqueville titled the tenth chapter of volume two, “Why The Americans Are More Addicted To Practical Than To Theoretical Science.” In Tocqueville’s day, the word technology did not yet carry the expansive and inclusive sense it does today. Instead, quaint-sounding phrases like “the mechanical arts,” “the useful arts,” or sometimes merely “invention” together did the semantic work that we assign to the single word technology. “Practical science” was one more such phrase available to writers, and, as in Tocqueville’s case, “practical science” was often opposed to “theoretical science.” The two phrases captured the distinction we have in mind when we speak separately of science and technology.
To answer his question on technology addiction, Tocqueville looked at the political and economic characteristics of American society and what he took to be the attitude toward technology they encouraged. As we’ll see, much of what Tocqueville had to say over 150 years ago resonates still, and it is the compelling nature of his diagnosis that invites us to reverse the direction of the inquiry—to ask what effect the enduring American fascination with technology might have on American political and economic culture. But first, why were Americans, as early as the 1830s, addicted to technology?
We buy our books to give shape to our thinking, but it never occurs to us that the manner in which we make our purchases may have a more lasting influence on our character than the contents of the book. (...)

Tocqueville understood what impressed Americans, and it was not intellectually demanding and gratifying grand theory. It was rather “every new method which leads by a shorter road to wealth, every machine which spares labor, every instrument which diminishes the cost of production, every discovery which facilitates pleasures or augments them.” This was how democratic societies measured the value of science, and America was no exception. Science was prized only insofar as it was immediately applicable to some practical and economic aim. Americans were in this sense good Baconians: they believed knowledge was power and that science was valuable to the degree that it could be usefully applied.
“It is chiefly from these motives that a democratic people addicts itself to scientific pursuits,” Tocqueville concluded. “You may be sure,” he added, “that the more a nation is democratic, enlightened, and free, the greater will be the number of these interested promoters of scientific genius, and the more will discoveries immediately applicable to productive industry confer gain, fame, and even power on their authors.”
Technologies not only allow us to act in certain ways that may or may not be ethical; their use also shapes the user, and this too may have ethical consequences.

We could summarize Tocqueville’s observations by saying that American society was more likely to produce and admire a Thomas Edison than an Albert Einstein. As a generalization, this seems about right still. The inventor-entrepreneur remains the preferred American icon; Steve Jobs and Bill Gates are the objects of our veneration. This was already evident in the 1830s, and Tocqueville eloquently described the distinct blend of technology and economics that we might label America’s techno-start-up culture. But if Tocqueville was right in attributing American attitudes about technology to political and economic circumstances, we should go one step further to ask what might be the political and economic consequences of this enthusiastic embrace of technology.
by Michael Sacasas, The American | Read more:
Image by Rob Green/Bergman Group
Should You Purchase Long-Term-Care Insurance?
Long-term-care insurance. It's a subject most people don't want to think about—but many people know they need to.
At first blush, policies that help pay the costs of extended nursing care make perfect sense. Bills add up quickly when you can no longer take care of yourself and your needs exceed what family and friends can provide. Nursing homes, assisted-living centers and home care all are expensive, and there is no telling for how long you may need the service. Buying a long-term-care insurance policy can be a way of making sure your future physical needs will be met. Policies designed in partnership with state governments also give individuals and their families a way to protect savings in the event of burdensome care costs that stretch on for years.
Critics, however, say insurers are using scare tactics to sell their products, which come with a hefty price. For most people, these critics say, long-term-care policies are either unnecessary or cost more than their benefits are worth. They believe that a great many people would be better off essentially self-insuring or relying on government-funded programs.
Mark Meiners, a professor of health administration and policy at George Mason University, argues in favor of long-term-care insurance. Prescott Cole, a senior staff attorney at California Advocates for Nursing Home Reform, argues against.
by Mark Meiners and Prescott Cole, WSJ | Read more:
Hawaii’s Beaches Are in Retreat
Little by little, Hawaii’s iconic beaches are disappearing.
Most beaches on the state’s three largest islands are eroding, and the erosion is likely to accelerate as sea levels rise, the United States Geological Survey is reporting.
Though average erosion rates are relatively low — perhaps a few inches per year — they range up to several feet per year and are highly variable from island to island and within each island, agency scientists say. The report says that over the last century, about 9 percent of the sandy coast on the islands of Hawaii, Oahu and Maui has vanished. That’s almost 14 miles of beach.
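As a rough back-of-the-envelope check (our arithmetic from the two figures just quoted, not a number stated in the report): if those 14 miles represent about 9 percent of the sandy shoreline studied, the three islands’ surveyed sandy coast comes to roughly 14 ÷ 0.09 ≈ 155 miles.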
The findings have important implications for public safety, the state’s multibillion-dollar tourism economy and the way of life Hawaiians treasure, said Charles H. Fletcher, who led the work for the agency.
“This is a serious problem,” said Dr. Fletcher, a geologist at the University of Hawaii at Manoa. (...)
The new analysis, “National Assessment of Shoreline Change: Historical Shoreline Change in the Hawaiian Islands,” is the latest in a series of reports the geological survey has produced for the Atlantic and Gulf Coasts, California and some of Alaska. Over all, their findings are similar: “They all show net erosion to varying degrees,” said Asbury H. Sallenger Jr., a coastal scientist for the agency who leads the work. (...)
But that is not ordinarily the case in Hawaii, where the typical response to erosion has been to protect buildings with sea walls and other coastal armor. “It’s the default management tool,” Dr. Fletcher said. But in Hawaii, as nearly everywhere else this kind of armor has been tried, it results in the degradation or even loss of the beach, as rising water eventually meets the wall, drowning the beach.

He suggested planners in Hawaii look to American Samoa, where, he said, “it’s hard to find a single beach. It has been one sea wall after another.”
by Cornelia Dean, NY Times | Read more:
Photo: Jewel Samad/Agence France-Presse — Getty Images