Friday, March 23, 2012
Francis Picabia, Udnie, Jeune fille américaine, Danse, 1913, huile sur toile/oil on canvas, 290 x 300 cm, Centre Pompidou, Paris.
Does It Matter Whether God Exists?
Discussions of religion are typically about God. Atheists reject religion because they don’t believe in God; Jews, Christians and Muslims take belief in God as fundamental to their religious commitment. The philosopher John Gray, however, has recently been arguing that belief in God should have little or nothing to do with religion. He points out that in many cases — for instance, “polytheism, Hinduism and Buddhism, Daoism and Shinto, many strands of Judaism and some Christian and Muslim traditions” — belief is of little or no importance. Rather, “practice — ritual, meditation, a way of life — is what counts.” He goes on to say that “it’s only religious fundamentalists and ignorant rationalists who think the myths we live by are literal truths” and that “what we believe doesn’t in the end matter very much. What matters is how we live.”
Even if God is powerful enough to save the souls of the devout, and loving enough to want to, he still might not.
The obvious response to Gray is that it all depends on what you hope to find in a religion. If your hope is simply for guidance and assistance in leading a fulfilling life here on earth, a “way of living” without firm beliefs in any supernatural being may well be all you need. But many religions, including mainline versions of Christianity and Islam, promise much more. They promise ultimate salvation. If we are faithful to their teachings, they say, we will be safe from final annihilation when we die and will be happy eternally in our life after death.
If our hope is for salvation in this sense — and for many that is the main point of religion—then this hope depends on certain religious beliefs’ being true. In particular, for the main theistic religions, it depends on there being a God who is good enough to desire our salvation and powerful enough to achieve it.
But here we come to a point that is generally overlooked in debates about theism, which center on whether there is reason to believe in God, understood as all-good and all-powerful. Suppose that the existence of such a God could be decisively established. Suppose, for example, we were to be entirely convinced that a version of the ontological argument, which claims to show that the very idea of an all-perfect being requires that such a being exist, is sound. We would then be entirely certain that there is a being of supreme power and goodness. But what would this imply about our chances for eternal salvation?
On reflection, very little. Granted, we would know that our salvation was possible: an all-powerful being could bring it about. But would we have any reason to think that God would in fact do this? Well, how could an all-good being not desire our salvation? The problem is that an all-good being needs to take account of the entire universe, not just us.
Here, discussions of the problem of evil become crucial. An all-good being, even with maximal power, may have to allow considerable local evils for the sake of the overall good of the universe; some evils may be necessary for the sake of avoiding even worse evils. We have no way of knowing whether we humans might be the victims of this necessity.
by Gary Gutting, NY Times | Read more:
Photo: Dr. Paul Wolff and Alfred Tritschler, Autumn Mood in Frankfurt, 1930
The Case Against Google
For the last two months, you've seen some version of the same story all over the Internet: Delete your search history before Google's new privacy settings take effect. A straightforward piece outlining a rudimentary technique, but also evidence that the search titan has a serious trust problem on its hands.
Our story on nuking your history was read nearly 200,000 times on this site alone—and it was a reprint of a piece originally put out by the EFF. Many other outlets republished the same piece. The Reddit page linking to the original had more than 1,000 comments. And the topic itself was debated on decidedly non-techie forums like NPR.
It's not surprising that the tracking debate had people up in arms. A Pew Internet study, conducted just before Google combined its privacy policies (and after it rolled out personalized search results in Search Plus Your World), found that three quarters of people don't want their search results tracked, and two thirds don't even want them personalized based on prior history.
The bottom line: People don't trust Google with their data. And that's new.
Google is a fundamentally different company than it used to be. Its culture and direction have changed radically in the past 18 months. It is trying to maneuver into position to operate in a post-PC, post-Web world, reacting to what it perceives as threats, and moving to where it thinks the puck will be.
At some point in the recent past, the Mountain View brass realized that owning the Web is not enough to survive. It makes sense—people are increasingly using non-Web-based avenues to access the Internet, and Google would be remiss not to make a play for that business. The problem is that in branching out, Google has also abandoned its core principles and values.
Many of us have entered into a contract with the ur-search company because its claims to be a good actor inspired our trust. Google has always claimed to put the interests of the user first. It's worth questioning whether that's still the case. Has Google reached a point where it must be evil?
by Mat Honan, Gizmodo | Read more:
John Stuart Mill and the Right to Die
A British man, Tony Nicklinson, wants to die. In 2005, Mr Nicklinson suffered a stroke that has left him with “locked-in syndrome”. This syndrome is, according to the National Institute of Neurological Disorders and Stroke, “a rare neurological disorder characterized by complete paralysis of voluntary muscles in all parts of the body except for those that control eye movement.” Mr Nicklinson is able to communicate only by blinking at a Perspex letter board. He now wishes to end his life “lawfully”, because he considers it “dull, miserable, demeaning, undignified and intolerable”. He is, therefore, seeking protection for any doctor who aids him in suicide. At the moment, the case is proceeding after a ruling from a High Court judge.
Killing, whether oneself or others, is obviously a difficult topic. We cannot so easily dismiss it as merely a private affair of the individual, nor place it within the domain of government to restrict people from doing so. What we can be certain of is that each case demands its own engagement, looking at the facts, the evidence and the arguments. The imposition of outrage, premised on vague notions like dignity or sanctity, is at best unhelpful and at worst harmful.
What Mr Nicklinson’s case demonstrates, though, is the inconsistency of state interventions in individuals’ activities. Furthermore, Mr Nicklinson’s reasons – banality and incapability, as a functioning adult – confirm findings in euthanasia research that these are the most common reasons for wanting euthanasia (or, in Mr Nicklinson's case, doctor-assisted suicide, though I'll use "euthanasia" in this post) – it is not, as many people think, merely physical pain or the inevitability of death.
Destroy your lungs but don’t kill yourself
We’ve noted previously that John Stuart Mill's Harm Principle seems to be tacitly in place in Western societies, when we allow others to harm themselves through personally chosen activities: from smoking to rock-climbing. As Mill noted: “the only purpose for which power can be rightfully exercised over any member of a civilised community, against his will, is to prevent harm to others. His own good, either physical or moral, is not a sufficient warrant.” When it comes to Mr Nicklinson’s case, we have a strange inconsistency: our principle that allows smokers to destroy their lungs disappears when it comes to the idea of ending the life we’ve otherwise allowed to be harmed. We allow a person to slowly or quickly destroy his life, but we don’t allow him to end it – even when the choice is determined by that same person.
Who else, rather than Mr Nicklinson, should decide how he should live or, indeed, whether he should live at all, when he is capable of communicating and contemplating this choice? It is true that we ought to do all we can to provide him with reasons to live, since this amounts to giving him more information with which to make a more informed decision: the more information one has, the better the decision will be. This is not coercion but making available more evidence so that Mr Nicklinson is able to exercise his autonomy.
As we noted, we have no good reason to stop him from performing a self-harming act, unless it unnecessarily and excessively harms the lives of others.
by Tauriq Moosa, Big Think | Read more:
90 Degrees in Winter: This Is What Climate Change Looks Like
The National Weather Service is kind of the anti–Mike Daisey, a just-the-facts operation that grinds on hour after hour, day after day. It’s collected billions of records (I’ve seen the vast vaults where early handwritten weather reports from observers across the country are stored in endless rows of ledgers and files) on countless rainstorms, blizzards and pleasant summer days. So the odds that you could shock the NWS are pretty slim.
Beginning in mid-March, however, its various offices began issuing bulletins that sounded slightly shaken. “There’s extremes in weather, but seeing something like this is impressive and unprecedented,” Chicago NWS meteorologist Richard Castro told the Daily Herald. “It’s extraordinarily rare for climate locations with 100+ year long periods of records to break records day after day after day,” the office added in an official statement.
It wasn’t just Chicago, of course. A huge swath of the nation simmered under bizarre heat. International Falls, Minnesota, the “icebox of the nation,” broke its old temperature records—by twenty-two degrees, which according to weather historians may be the largest margin ever for any station with a century’s worth of records. Winner, South Dakota, reached 94 degrees on the second-to-last day of winter. That’s in the Dakotas, two days before the close of winter. Jeff Masters, founder of WeatherUnderground, the web’s go-to site for meteorological information, watched an eerie early morning outside his Michigan home and wrote, “This is not the atmosphere I grew up with,” a fact confirmed later that day when the state recorded the earliest F-3 strength tornado in its history. Other weathermen were more… weathermanish. Veteran Minneapolis broadcaster Paul Douglas, after noting that Sunday’s low temperature in Rochester broke the previous record high, blogged “this is OFF THE SCALE WEIRD even for Minnesota.”
It’s hard to overstate how impossible this weather is—when you have nearly a century and a half of records, they should be hard to break, much less smash. But this is like Barry Bonds on steroids if his steroids were on steroids, an early season outbreak of heat completely without precedent in its scale and spread. I live in Vermont, where we should be starting to slowly thaw out—but as the heat moved steadily east, ski areas shut down and golf courses opened.
And truth be told, it felt pretty good. Most people caught in the torrid zones probably reacted pretty much like President Obama: “It gets you a little nervous about what is happening to global temperatures,” he told the audience assembled at a fundraiser at Tyler Perry’s Atlanta mansion (records were falling in Georgia too). “On the other hand I have really enjoyed the nice weather.”
Anyone thinking about the seasons ahead was at least as ambivalent, and most were scared. Here are a few of the things that could happen with staggering warmth like this early in the year:
by Bill McKibben, The Nation | Read more:
Photo: Reuters/Ivan Alvarado
Why Some Words Die
[ed. From the "For What It's Worth" department. I'm amazed someone even undertook such a massive analysis.]
Words are competing daily in an almost Darwinian struggle for survival, according to new research from scientists who analysed more than 10 million words used over the last 200 years.
Drawing their material from Google's huge book-digitisation project, the international team of academics tracked the usage of every word recorded in English, Spanish and Hebrew over the 209-year period between 1800 and 2008. The scientists, who include Boston University's Joel Tenenbaum and IMT Lucca Institute for Advanced Studies' Alexander Petersen, said their study shows that "words are competing actors in a system of finite resources", and just as financial firms battle for market share, so words compete to be used by writers or speakers, and to then grab the attention of readers or listeners.
There has been a "drastic increase in the death rate of words" in the modern print era, the academics discovered. They attributed it to the growing use of automatic spellcheckers, and stricter editing procedures, wiping out misspellings and errors. "Most changes to the vocabulary in the last 10 to 20 years are due to the extinction of misspelled words and nonsensical print errors, and to the decreased birth rate of new misspelled variations and genuinely new words," the scientists write in their just-published study. "The words that are dying are those words with low relative use. We confirm by visual inspection that the lists of dying words contain mostly misspelled and nonsensical words."
But it is not only "defective" words that die: sometimes words are driven to extinction by aggressive competitors. The word "Roentgenogram", for example, deriving from the discoverer of the x-ray, Wilhelm Röntgen, was widely used for several decades in the 20th century, but, challenged by "x-ray" and "radiogram", has now fallen out of use entirely. X-ray had beaten off its synonyms by 1980, the academics speculate, owing to its "efficient short word length" and to English's position as the general language of scientific publication. "Each of the words is competing to be a monopoly on who gets to be the name," Tenenbaum told the American Physical Society.
The phrase "the great war", meanwhile, used for a period to describe the first world war, fell out of use around 1939 when another war of equal proportions hit the world.
by Alison Flood, The Guardian | Read more:
Photograph: Markos Dolopikos/Alamy
Thursday, March 22, 2012
The Age of Double Standards
“But, Yossarian, suppose everyone felt that way.”
“Then,” said Yossarian, “I’d certainly be a damned fool to feel any other way, wouldn’t I?” —Joseph Heller, Catch-22
Last November 29, American Airlines declared bankruptcy under Chapter 11, the provision of the bankruptcy code that allows a corporation to stiff its creditors, break contracts, and keep operating under the supervision of a judge. This maneuver, politely termed a “reorganization,” ends with the corporation exiting bankruptcy cleansed of old debts. In opting for Chapter 11, American joined every other major airline, including Delta, Northwest, United, and US Airways, which has been in and out of Chapter 11 twice since 2002. No fewer than 189 airlines have declared bankruptcy since 1990. As the sole large carrier that had not gone bankrupt, American missed out on savings available to its rivals and thus was increasingly uncompetitive.
Bankruptcy is intended to give a fresh start to persons and enterprises overwhelmed by creditors. In the case of American (like other airlines before it), the main “creditors” are its employees. The costs of American’s bankruptcy will be borne mainly by its workers and secondarily by taxpayers. The contracts being broken are union contracts and legal promises to honor pension obligations. American is laying off 13,000 workers, slashing wages, and reducing its annual pension contribution from $97 million to $6.5 million. The airline hopes to stick the federal Pension Benefit Guaranty Corporation with liability for much of the $6.5 billion that it owes its workers and retirees.
This national indulgence for corporate bankruptcy has a certain logic. The Wall Street Journal editorial page recently termed bankruptcy “one of the better ways in which American capitalism encourages risk-taking,” and that is the prevailing view. Thanks to Chapter 11, a potentially viable insolvent enterprise is given a fresh start as a going concern, rather than being cannibalized for the benefit of its creditors.
However, what’s good for corporate capitalism is evidently too good for the rest of us. Suppose everyone felt that way?
by Robert Kuttner, The American Prospect | Read more:
The solution is not to suppress our thoughts and desires, for this would be impossible, it would be like trying to keep a pot of water from boiling by pressing down tightly on the lid. The only sensible approach is to train ourselves to observe our thoughts without following them. This deprives them of their compulsive energy and is therefore like removing the pot of boiling water from the fire.
Lama Thubten Yeshe.
Jungleland
“We have snakes,” Mary Brock said. “Long, thick snakes. Kingsnakes, rattlesnakes.”
Brock was walking Pee Wee, a small, high-strung West Highland terrier who darted into the brush at the slightest provocation — a sudden breeze, shifting gravel, a tour bus rumbling down Caffin Avenue several blocks east. But Pee Wee had reason to be anxious. Brock was anxious. Most residents of the Lower Ninth Ward in New Orleans are anxious. “A lot of people in my little area died after Katrina,” Brock said. “Because of too much stress.” The most immediate sources of stress that October morning were the stray Rottweilers. Brock had seen packs of them in the wildly overgrown lots, prowling for food. Pee Wee, it seemed, had seen them, too. “I know they used to be pets because they are beautiful animals.” Brock corrected herself: “They were beautiful animals. When I first saw them, they were nice and clean — inside-the-house animals. But now they just look sad.”
The Lower Ninth has become a dumping ground for unwanted dogs and cats. People from all over the city take the Claiborne Avenue Bridge over the Industrial Canal, bounce along the fractured streets until they reach a suitably empty area and then toss the animals out of the car. But it’s not just pets. The neighborhood has become a dumping ground for many kinds of unwanted things. Contractors, rather than drive to the city dump in New Orleans East, sweep trailers full of construction debris onto the street. Auto shops, rather than pay the tire-disposal fee ($2 a tire), dump tires by the dozen. The tire problem has become so desperate that the city is debating changes to the law. (One humble suggestion: a $2 reward per tire.) You also see burned piles of household garbage, cotton-candy-pink tufts of insulation foam, turquoise PVC pipes, sodden couches tumescing like sea sponges and abandoned cars. Sometimes the cars contain bodies. In August, the police discovered an incinerated corpse in a white Dodge Charger that was left in the middle of an abandoned lot near the intersection of Choctaw and Law, two blocks from where Mary Brock was walking Pee Wee. Nobody knew how long the car had been there; it was concealed from the closest house, half a block away, by 12-foot-high grass. That entire stretch of Choctaw Street, for that matter, was no longer visible. It had been devoured by forest. Every housing plot on both sides of the street for two blocks, between Rocheblave and Law, was abandoned. Through the weeds, you could just make out a cross marking the spot where Brock’s neighbor had drowned.
by Nathaniel Rich, NY Times | Read more:
Photo: Andrew Moore
Some kind of flowers I found in Seattle but haven't a clue what they're called. They look like little tomatoey Japanese lanterns.
Photo: markk
* I was close. They do at least have an Asian name: Chinese Lanterns (also called Winter Cherry or Love in a Cage). Thanks Barbara!
Hey Dude
Slang rarely has staying power. That is part of its charm; the young create it, and discard it as soon as it becomes too common. Slang is a subset of in-group language, and once that gets taken up by the out-group, it’s time for the in-crowd to come up with something new. So the long life of one piece of American slang, albeit in many different guises, is striking. Or as the kids would say, “Dude!”
Though the term seems distinctly American, it had an interesting birth: one of its first written appearances came in 1883, in the American magazine, which referred to “the social ‘dude’ who affects English dress and the English drawl”. The teenage American republic was already a growing power, with the economy booming and the conquest of the West well under way. But Americans in cities often aped the dress and ways of Europe, especially Britain. Hence dude as a dismissive term: a dandy, someone so insecure in his Americanness that he felt the need to act British. It’s not clear where the word’s origins lay. Perhaps its mouth-feel was enough to make it sound dismissive.
From the specific sense of dandy, dude spread out to mean an easterner, a city slicker, especially one visiting the West. Many westerners resented the dude, but some catered to him. Entrepreneurial ranchers set up ranches for tourists to visit and stay and pretend to be cowboys themselves, giving rise to the “dude ranch”.
By the 1950s or 1960s, dude had been bleached of specific meaning. In black culture, it meant almost any male; one sociologist wrote in 1967 of a group of urban blacks he was studying that “these were the local ‘dudes’, their term meaning not the fancy city slickers but simply ‘the boys’, ‘fellas’, the ‘cool people’.”
From the black world it moved to hip whites, and so on to its enduring associations today—California, youth, cool. In “Easy Rider” (1969) Peter Fonda explains it to the square Jack Nicholson: “Dude means nice guy. Dude means a regular sort of person.” And from this new, broader, gentler meaning, dude went vocative. Young men the world over seem to need some appellation to send across the net at each other that recognises their common masculinity while stopping short of the intimacy of a name. It starts in one country or subculture, and travels outwards. Just as the hippies gave us “man”, and British men are “mate” to one another, so, by the late 1970s or early 1980s, “dude” was filling that role. And all three words are as likely to go at the start of the sentence as the end.
by Robert Lane Greene, Intelligent Life | Read more: