Saturday, July 4, 2015
Regulating Sex
[ed. See also: Teenager's jailing brings a call to fix sex offender registries.]
This is a strange moment for sex in America. We’ve detached it from pregnancy, matrimony and, in some circles, romance. At least, we no longer assume that intercourse signals the start of a relationship. But the more casual sex becomes, the more we demand that our institutions and government police the line between what’s consensual and what isn’t. And we wonder how to define rape. Is it a violent assault or a violation of personal autonomy? Is a person guilty of sexual misconduct if he fails to get a clear “yes” through every step of seduction and consummation?
According to the doctrine of affirmative consent — the “yes means yes” rule — the answer is, well, yes, he is. And though most people think of “yes means yes” as strictly for college students, it is actually poised to become the law of the land.
About a quarter of all states, and the District of Columbia, now say sex isn’t legal without positive agreement, although some states undercut that standard by requiring proof of force or resistance as well.
Codes and laws calling for affirmative consent proceed from admirable impulses. (The phrase “yes means yes,” by the way, represents a ratcheting-up of “no means no,” the previous slogan of the anti-rape movement.) People should have as much right to control their sexuality as they do their body or possessions; just as you wouldn’t take a precious object from someone’s home without her permission, you shouldn’t have sex with someone if he hasn’t explicitly said he wants to.
And if one person can think he’s hooking up while the other feels she’s being raped, it makes sense to have a law that eliminates the possibility of misunderstanding. “You shouldn’t be allowed to make the assumption that if you find someone lying on a bed, they’re free for sexual pleasure,” says Lynn Hecht Schafran, director of a judicial education program at Legal Momentum, a women’s legal defense organization.
But criminal law is a very powerful instrument for reshaping sexual mores. Should we really put people in jail for not doing what most people aren’t doing? (Or at least, not yet?) It’s one thing to teach college students to talk frankly about sex and not to have it without demonstrable pre-coital assent. Colleges are entitled to uphold their own standards of comportment, even if enforcement of that behavior is spotty or indifferent to the rights of the accused. It’s another thing to make sex a crime under conditions of poor communication.
Most people just aren’t very talkative during the delicate tango that precedes sex, and the re-education required to make them more forthcoming would be a very big project. Nor are people unerringly good at decoding sexual signals. If they were, we wouldn’t have romantic comedies. “If there’s no social consensus about what the lines are,” says Nancy Gertner, a senior lecturer at Harvard Law School and a retired judge, then affirmative consent “has no business being in the criminal law.”
Perhaps the most consequential deliberations about affirmative consent are going on right now at the American Law Institute. The more than 4,000 law professors, judges and lawyers who belong to this prestigious legal association — membership is by invitation only — try to untangle the legal knots of our time. They do this in part by drafting and discussing model statutes. Once the group approves these exercises, they hold so much sway that Congress and states sometimes vote them into law, in whole or in part. For the past three years, the law institute has been thinking about how to update the penal code for sexual assault, which was last revised in 1962. When its suggestions circulated in the weeks before the institute’s annual meeting in May, some highly instructive hell broke loose.
In a memo that has now been signed by about 70 institute members and advisers, including Judge Gertner, readers have been asked to consider the following scenario: “Person A and Person B are on a date and walking down the street. Person A, feeling romantically and sexually attracted, timidly reaches out to hold B’s hand and feels a thrill as their hands touch. Person B does nothing, but six months later files a criminal complaint. Person A is guilty of ‘Criminal Sexual Contact’ under proposed Section 213.6(3)(a).”
Far-fetched? Not as the draft is written. The hypothetical crime cobbles together two of the draft’s key concepts. The first is affirmative consent. The second is an enlarged definition of criminal sexual contact that would include the touching of any body part, clothed or unclothed, with sexual gratification in mind. As the authors of the model law explain: “Any kind of contact may qualify. There are no limits on either the body part touched or the manner in which it is touched.” So if Person B neither invites nor rebukes a sexual advance, then anything that happens afterward is illegal. “With passivity expressly disallowed as consent,” the memo says, “the initiator quickly runs up a string of offenses with increasingly more severe penalties to be listed touch by touch and kiss by kiss in the criminal complaint.”
The obvious comeback to this is that no prosecutor would waste her time on such a frivolous case. But that doesn’t comfort signatories of the memo, several of whom have pointed out to me that once a law is passed, you can’t control how it will be used. For instance, prosecutors often add minor charges to major ones (such as, say, forcible rape) when there isn’t enough evidence to convict on the more serious charge. They then put pressure on the accused to plead guilty to the less egregious crime.
The example points to a trend evident both on campuses and in courts: the criminalization of what we think of as ordinary sex and of sex previously considered unsavory but not illegal.
by Judith Shulevitz, NY Times | Read more:
Image: Yu Man Ma

Friday, July 3, 2015
The Economic Consequences of Austerity
[ed. See also: The elites are determined to end the revolt against austerity in Greece.]
On 5 June 1919, John Maynard Keynes wrote to the prime minister of Britain, David Lloyd George, “I ought to let you know that on Saturday I am slipping away from this scene of nightmare. I can do no more good here.” Thus ended Keynes’s role as the official representative of the British Treasury at the Paris Peace Conference. It liberated Keynes from complicity in the Treaty of Versailles (to be signed later that month), which he detested.
Why did Keynes dislike a treaty that ended the state of war between Germany and the Allied Powers (surely a good thing)?
Keynes was not, of course, complaining about the end of the world war, nor about the need for a treaty to end it, but about the terms of the treaty – and in particular the suffering and the economic turmoil forced on the defeated enemy, the Germans, through imposed austerity. Austerity is a subject of much contemporary interest in Europe – I would like to add the word “unfortunately” somewhere in the sentence. Actually, the book that Keynes wrote attacking the treaty, The Economic Consequences of the Peace, was very substantially about the economic consequences of “imposed austerity”. Germany had lost the battle already, and the treaty was about what the defeated enemy would be required to do, including what it should have to pay to the victors. The terms of this Carthaginian peace, as Keynes saw it (recollecting the Roman treatment of the defeated Carthage following the Punic wars), included the imposition of an unrealistically huge burden of reparation on Germany – a task that Germany could not carry out without ruining its economy. As the terms also had the effect of fostering animosity between the victors and the vanquished and, in addition, would economically do no good to the rest of Europe, Keynes had nothing but contempt for the decision of the victorious four (Britain, France, Italy and the United States) to demand something from Germany that was hurtful for the vanquished and unhelpful for all.
The high-minded moral rhetoric in favour of the harsh imposition of austerity on Germany that Keynes complained about came particularly from Lord Cunliffe and Lord Sumner, representing Britain on the Reparation Commission, whom Keynes liked to call “the Heavenly Twins”. In his parting letter to Lloyd George, Keynes added, “I leave the Twins to gloat over the devastation of Europe.” Grand rhetoric on the necessity of imposing austerity, to remove economic and moral impropriety in Greece and elsewhere, may come more frequently these days from Berlin itself, with the changed role of Germany in today’s world. But the unfavourable consequences that Keynes feared would follow from severe – and in his judgement unreasoned – imposition of austerity remain relevant today (with an altered geography of the morally upright discipliner and the errant to be disciplined).
Aside from Keynes’s fear of economic ruin of a country, in this case Germany, through the merciless scheduling of demanded payments, he also analysed the bad consequences on other countries in Europe of the economic collapse of one of their partners. The thesis of economic interdependence, which Keynes would pursue more fully later (including in his most famous book, The General Theory of Employment, Interest and Money, to be published in 1936), makes an early appearance in this book, in the context of his critique of the Versailles Treaty.
“An inefficient, unemployed, disorganised Europe faces us,” says Keynes, “torn by internal strife and international hate, fighting, starving, pillaging, and lying.” If some of these problems are visible in Europe today (as I believe to some extent they are), we have to ask: why is this so? After all, 2015 is not really anything like 1919, and yet why do the same words, taken quite out of context, look as if there is a fitting context for at least a part of them right now?
If austerity is as counterproductive as Keynes thought, how come it seems to deliver electoral victories, at least in Britain? Indeed, what truth is there in the explanatory statement in the Financial Times, aired shortly after the Conservative victory in the general election, and coming from a leading historian, Niall Ferguson (who, I should explain, is a close friend – our friendship seems to thrive on our persistent disagreement): “Labour should blame Keynes for their election defeat.”
If the point of view that Ferguson airs is basically right (and that reading is shared by several other commentators as well), the imposed austerity we are going through is not a useless nightmare (as Keynes’s analysis would make us believe), but more like a strenuous workout for a healthier future, as the champions of austerity have always claimed. And it is, in this view, a future that is beginning to unfold already in our time, at least in Britain, appreciated by grateful voters. Is that the real story now? And more generally, could “the Heavenly Twins” have been right all along?
by Amartya Sen, The Guardian | Read more:
Image: William Orpen

The Revolution Will Probably Wear Mom Jeans
Not long ago, a curious fashion trend swept through New York City’s hipster preserves, from Bushwick to the Lower East Side. Once, well-heeled twentysomethings had roamed these streets in plaid button-downs and floral playsuits. Now, the reign of the aspiring lumberjacks and their mawkish mates was coming to an end. Windbreakers, baseball caps, and polar fleece appeared among the flannel. Cargo shorts and khakis were verboten no longer. Denim went from dark-rinse to light. Sandals were worn, and sometimes with socks. It was a blast of carefully modulated blandness—one that delighted some fashion types, appalled others, and ignited the critical passions of lifestyle journalists everywhere.
They called it Normcore. Across our Fashion Nation, style sections turned out lengthy pieces exploring this exotic lurch into the quotidian, and trend watchers plumbed every possible meaning in the cool kids’ new fondness for dressing like middle-aged suburbanites. Were hipsters sacrificing their coolness in a brave act of self-renunciation? Was this an object lesson in the futility of ritually chasing down, and then repudiating, the coolness of the passing moment? Or were middle-aged dorks themselves mysteriously cool all of a sudden? Was Normcore just an elaborate prank designed to prove that style writers can be fooled into believing almost anything is trendy? (...)
The Revolt of the Mass Indie Überelite
The adventure began in 2013, and picked up steam early last year with Fiona Duncan’s “Normcore: Fashion for Those Who Realize They’re One in 7 Billion,” a blowout exploration of the anti-individualist Normcore creed for New York magazine. Duncan remembered feeling the first tremors of the revolution:
Sometime last summer I realized that, from behind, I could no longer tell if my fellow Soho pedestrians were art kids or middle-aged, middle-American tourists. Clad in stonewash jeans, fleece, and comfortable sneakers, both types looked like they might’ve just stepped off an R-train after shopping in Times Square. When I texted my friend Brad (an artist whose summer uniform consisted of Adidas barefoot trainers, mesh shorts and plain cotton tees) for his take on the latest urban camouflage, I got an immediate reply: “lol normcore.”

Brad, however eloquent and charming, did not coin the term himself. He got it from K-HOLE, a group of trend forecasters. To judge by K-HOLE’s name alone—a slang term for the woozy aftereffects of the animal tranquilizer and recreational drug ketamine—the group was more than happy to claim Normcore as its own licensed playground. As company principals patiently explained to the New York Times, their appropriation of the name of a toxic drug hangover was itself a sly commentary on the cultural logic of the corporate world’s frenetic cooptation of young people’s edgy habits. At a London art gallery in October 2013, in a paper titled “Youth Mode: A Report on Freedom,” team K-HOLE proposed the Twitter hashtag #Normcore as a rejoinder to such cooptation:
If the rule is Think Different, being seen as normal is the scariest thing. (It means being returned to your boring suburban roots, being turned back into a pumpkin, exposed as unexceptional.) Which paradoxically makes normalcy ripe for the Mass Indie überelites to adopt as their own, confirming their status by showing how disposable the trappings of uniqueness are.

Jargon aside, the report had a point: lately “Mass Indie überelites”—a group more commonly known as hipsters—have been finding it increasingly difficult to express their individuality, the very thing that confers hipster cred.
Part of the problem derives from the hipster’s ubiquity. For the past several years, hipsterism has been an idée fixe in the popular press—coy cultural shorthand in the overlapping worlds of fashion, music, art, and literature for a kind of rebellion that doesn’t quite come off on its own steam. Forward-thinking middle-class youngsters used to strike fear in the hearts of the squares by flouting social norms—at least nominally, until they grew up and settled into their own appointed professional, middle-class destinies. Now, however, the hipster is a benign and well-worn figure of fun: a lumpenbourgeois urbanite perpetually in search of ways to display her difference from the masses. (...)
Food for Thought
Things get even more complicated when you consider the Middle American booboisie on whom Normcore sets its sights. Even as Normcore jeers at neutral, fashion-backward attire, it also manages to exalt the clueless exurbanite by turning her into a fetish object: the Emma Bovary of the strip mall. It’s not clear just how and why hipsters came to fixate on the People of Walmart, but it’s not a passing fancy; one after another, hipsters are elevating dreary things to the height of fashion.
Think of the rise of kale. The once-humble vegetable has ascended to such dizzying heights that Beyoncé wore a sweatshirt emblazoned with “KALE” in one of her recent videos.
See also pizza, a closer edible analogue to Normcore. A friend with ties to the advertising industry informed me of pizza’s edginess sometime last year, directing me to a Tumblr called Slice Guyz that collects pictures of pizza-themed graffiti and the like. Former child star and current hipster Macaulay Culkin started a joke band called the Pizza Underground; it performs selections from the Velvet Underground catalogue repurposed with pizza-themed lyrics. In September, New York magazine—the same oracle that announced the rise of Normcore—anointed pizza as the “chicest new trend.” As incontrovertible evidence that the trend was indeed taking hold, the magazine’s fashion brain trust commissioned layouts of Katy Perry and Beyoncé (now the avatar of food-themed chicness, it would seem) in pizza-print outfits.
To take something recognizably bad, whether pizza or bulky fleece sweatshirts, and try to pass it off as avant-garde self-expression is an incredibly defeatist gesture, one both aware of and happy with its futility. Ceci n’est pas intéressant.
Still, pizza, like denim, is accessible to all Americans and crafted with wildly different levels of competence, self-awareness, and artisanal intent. Papa John’s or Little Caesars may deliver glorified tomato-paste-on-cardboard alongside tubs of dipping butter to a nation of indifferent proles. But if you ask New York’s infinitely more with-it pizza correspondents, they’ll tell you, with numbing precision, that pizza can be “toppings-forward” and “avant-garde.” This range makes pizza the perfect hipster quarry: sometimes mundane, sometimes aspirational, and above all, exotic. (...)
Before you can say “plain Hanes tee,” this longing can shade again into contempt. When urban hipsters fetishize the déclassé and the mundane, they rely on their understanding of middle America as a colony, one filled with happy proles to be mined for fashion inspiration. This is as true for hipsters as it is for Glenn Beck, whose bone-deep cynicism about the heartland is simply an amplified version of the same infatuated disdain cultivated by a deliberately dowdy Brooklynite. How else can one account for the steady migration of Normcore into the very corporate world that calls the shots on what we buy and how—a world in which web designers, programmers, stylists, advertising executives, and other masters of the knowledge economy now dress up like call-center drones headed to the Dollar Store?
by Eugenia Williamson, The Baffler | Read more:
Image: Hollie Chastain
The Sofalarity
[ed. See also: The problem with easy technology.]
Imagine that two people are carving a six-foot slab of wood at the same time. One is using a hand-chisel, the other, a chainsaw. If you are interested in the future of that slab, whom would you watch?
This chainsaw/chisel logic has led some to suggest that technological evolution is more important to humanity’s near future than biological evolution; nowadays, it is not the biological chisel but the technological chainsaw that is most quickly redefining what it means to be human. The devices we use change the way we live much faster than any contest among genes. We’re the block of wood, even if, as I wrote in January, sometimes we don’t even fully notice that we’re changing.
Assuming that we really are evolving as we wear or inhabit more technological prosthetics—like ever-smarter phones, helpful glasses, and brainy cars—here’s the big question: Will that type of evolution take us in desirable directions, as we usually assume biological evolution does?
Some, like the Wired founder Kevin Kelly, believe that the answer is a resounding “yes.” In his book “What Technology Wants,” Kelly writes: “Technology wants what life wants: Increasing efficiency; Increasing opportunity; Increasing emergence; Increasing complexity; Increasing diversity; Increasing specialization; Increasing ubiquity; Increasing freedom; Increasing mutualism; Increasing beauty; Increasing sentience; Increasing structure; Increasing evolvability.” (...)
Biological evolution is driven by survival of the fittest, as adaptive traits are those that make the survival and reproduction of a population more likely. It isn’t perfect, but at least, in a rough way, it favors organisms who are adapted to their environments.
Technological evolution has a different motive force. It is self-evolution, and it is therefore driven by what we want as opposed to what is adaptive. In a market economy, it is even more complex: for most of us, our technological identities are determined by what companies decide to sell based on what they believe we, as consumers, will pay for. As a species, we often aren’t much different from the Oji-Cree. Comfort-seeking missiles, we spend the most to minimize pain and maximize pleasure. When it comes to technologies, we mainly want to make things easy. Not to be bored. Oh, and maybe to look a bit younger.
Our will-to-comfort, combined with our technological powers, creates a stark possibility. If we’re not careful, our technological evolution will take us toward not a singularity but a sofalarity. That’s a future defined not by an evolution toward superintelligence but by the absence of discomforts.
by Tim Wu, New Yorker | Read more:
Image: Hannah K. Lee

Wednesday, July 1, 2015
To Save California, Read “Dune”
[ed. We might also stop promoting unsustainable developments. Las Vegas, anyone? See also: Holy Crop.]
To survive their permanent desert climate, the indigenous Fremen of Dune employ every possible technology. They build “windtraps” and “dew collectors” to grab the slightest precipitation out of the air. They construct vast underground cisterns and canals to store and transport their painstakingly gathered water. They harvest every drop of moisture from the corpses of the newly dead. During each waking moment they dress in “stillsuits”—head-to-toe wetsuit-like body coverings that recycle sweat, urine, and feces back into drinking water.
Described by Dune’s “planetary ecologist,” Liet-Kynes, as “a micro-sandwich—a high-efficiency filter and heat exchange system”—the stillsuit is a potent metaphor for reuse, reclamation, and conservation. Powered by the wearer’s own breathing and movement, the stillsuit is the technical apotheosis of the principle of making do with what one has.
Someday, sooner than we’d like, it’s not inconceivable that residents of California will be shopping on Amazon for the latest in stillsuit tech. Dune is set thousands of years in the future, but in California in 2015, the future is now. Four years of drought have pummeled reservoirs and forced mandatory 25 percent water rationing cuts. The calendar year of 2014 was the driest (and hottest) since records started being kept in the 1800s. At the end of May, the Sierra Nevada snowpack—a crucial source of California’s water—hit its lowest point on record: zero. Climate models suggest an era of mega-droughts could be nigh.
Which brings us to Daniel Fernandez, a professor of science and environmental policy at California State University, Monterey Bay, and Peter Yolles, the co-founder of a San Francisco water startup, WaterSmart, that assists water utilities in encouraging conservation by crunching data on individual water consumption. Fernandez spends his days building and monitoring fogcatchers, remarkably Dune-like devices that have the property of converting fog into potable water. “I think about Dune a lot,” Fernandez says. “The ideas have really sat with me. In the book, they revere water, and ask, what do we do?” Similarly, Yolles says, “I remember being fascinated by the stillsuits. That was a striking technology, really poignant.” And inspiring. The fictional prospect of a dystopian future, Yolles says, “helped me see problems that we have, and where things might go.”
Science fiction boasts a long history of influencing the course of scientific and technological development. The inventors of the submarine and the helicopter credited Jules Verne for dreaming up both their inventions. Star Trek’s tricorder inspired generations of engineers to perfect the smartphone. Nobel Prize-winning economist Paul Krugman credits a character in Isaac Asimov’s Foundation trilogy for his motivation: “I grew up wanting to be Hari Seldon, using my understanding of the mathematics of human behavior to save civilization.” “Anything one man can imagine, another man can make real,” wrote Verne in Around the World in 80 Days. The future is as malleable as the written word.
So it shouldn’t be a surprise that two innovative thinkers devising means to address drought in California should be talking about Dune. As I visited with Yolles and Fernandez to learn about their work confronting drought, I realized the missions of both men embodied a deeper ecological message in Dune. The novel’s ecologist Kynes is famous for teaching that “the highest function of ecology is understanding consequences.” The implicit lesson for society, as it marshals technology to address a waterless world, is that technological fixes work only in the context of an environmentally and socially connected vision. It’s the vision that guided Herbert in creating Dune, and it owes as much to our ancient past as it is a speculation on the future.
According to a biography of Herbert, Dreamer of Dune, written by his son Brian, the genesis of the novel came when Herbert, a long-time journalist who worked for a string of Northern California newspapers, landed an assignment in 1957 to write a story about a United States Department of Agriculture project to control spreading sand dunes with European beach grasses on the coast of Oregon. Surveying the highway-encroaching dunes from a low-flying aircraft, Herbert became fascinated by the implications of this clash between human and nature. The project, he later wrote, “fed my interest in how we inflict ourselves upon our planet. I could begin to see the shape of a global problem, no part of it separated from any other—social ecology, political ecology, economic ecology.” He chose the title Dune, he said, because of its onomatopoetic similarity to the word “doom.” He hoped Dune would serve as an “ecological awareness handbook.”
His wish came true. Along with Rachel Carson’s environmental call to arms, Silent Spring, published in 1962, Dune, says Robert France, a professor of watershed management at Dalhousie University, “played a very important role in increasing global consciousness about environmental concerns in general.” France says the massively popular reaction to Dune was a key part in the events that led up to the creation of Earth Day. Herbert frequently corresponded with the founder of Earth Day, Ira Einhorn, and was a featured speaker at the first Earth Day, in 1970.
Herbert’s role in the budding environmental movement is proof science fiction can and does play a role in how we live in the present. But one of the more remarkable things about Dune is how rooted its story is in the ancient past. According to Brian Herbert, his father spent five years researching desert cultures and “dry-land ecology” before writing the novel. There’s a reason why the Fremen language looks and sounds like Arabic, and the Fremen people bear more than a passing resemblance to Bedouin nomads. Herbert did his homework. A civilization flourished in the Middle East 2,000 years ago that, by necessity, used every bit of available technology to maximize their access to water. “The closest historic parallel to the Dune Fremen,” says France, “are the Nabateans, proto-Semitic Arabs who lived at the southern end of the Dead Sea.”
by Andrew Leonard, Nautilus | Read more:
Image: Gary Jamroz-Palma

Machine Ethics: The Robot’s Dilemma
In his 1942 short story 'Runaround', science-fiction writer Isaac Asimov introduced the Three Laws of Robotics — engineering safeguards and built-in ethical principles that he would go on to use in dozens of stories and novels. They were: 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law; and 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Fittingly, 'Runaround' is set in 2015. Real-life roboticists are citing Asimov's laws a lot these days: their creations are becoming autonomous enough to need that kind of guidance. In May, a panel talk on driverless cars at the Brookings Institution, a think tank in Washington DC, turned into a discussion about how autonomous vehicles would behave in a crisis. What if a vehicle's efforts to save its own passengers by, say, slamming on the brakes risked a pile-up with the vehicles behind it? Or what if an autonomous car swerved to avoid a child, but risked hitting someone else nearby?
“We see more and more autonomous or automated systems in our daily life,” said panel participant Karl-Josef Kuhn, an engineer with Siemens in Munich, Germany. But, he asked, how can researchers equip a robot to react when it is “making the decision between two bad choices”?
The pace of development is such that these difficulties will soon affect health-care robots, military drones and other autonomous devices capable of making decisions that could help or harm humans. Researchers are increasingly convinced that society's acceptance of such machines will depend on whether they can be programmed to act in ways that maximize safety, fit in with social norms and encourage trust. “We need some serious progress to figure out what's relevant for artificial intelligence to reason successfully in ethical situations,” says Marcello Guarini, a philosopher at the University of Windsor in Canada.
Several projects are tackling this challenge, including initiatives funded by the US Office of Naval Research and the UK government's engineering-funding council. They must address tough scientific questions, such as what kind of intelligence, and how much, is needed for ethical decision-making, and how that can be translated into instructions for a machine. Computer scientists, roboticists, ethicists and philosophers are all pitching in.
“If you had asked me five years ago whether we could make ethical robots, I would have said no,” says Alan Winfield, a roboticist at the Bristol Robotics Laboratory, UK. “Now I don't think it's such a crazy idea.”
Learning machines
In one frequently cited experiment, a commercial toy robot called Nao was programmed to remind people to take medicine.
“On the face of it, this sounds simple,” says Susan Leigh Anderson, a philosopher at the University of Connecticut in Stamford who did the work with her husband, computer scientist Michael Anderson of the University of Hartford in Connecticut. “But even in this kind of limited task, there are nontrivial ethics questions involved.” For example, how should Nao proceed if a patient refuses her medication? Allowing her to skip a dose could cause harm. But insisting that she take it would impinge on her autonomy.
To teach Nao to navigate such quandaries, the Andersons gave it examples of cases in which bioethicists had resolved conflicts involving autonomy, harm and benefit to a patient. Learning algorithms then sorted through the cases until they found patterns that could guide the robot in new situations.
With this kind of 'machine learning', a robot can extract useful knowledge even from ambiguous inputs (see go.nature.com/2r7nav). The approach would, in theory, help the robot to get better at ethical decision-making as it encounters more situations. But many fear that the advantages come at a price. The principles that emerge are not written into the computer code, so “you have no way of knowing why a program could come up with a particular rule telling it something is ethically 'correct' or not”, says Jerry Kaplan, who teaches artificial intelligence and ethics at Stanford University in California.
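The case-based idea can be caricatured in a few lines. This is not the Andersons' actual system — the feature dimensions, scores and verdicts below are all invented for illustration — but it shows the shape of the approach: precedents resolved by ethicists become training cases, and a new dilemma is answered by its nearest precedent.

```python
# Toy sketch of case-based ethical learning (not the Andersons' real system;
# the features, scores and verdicts here are invented for illustration).
# Each case scores a proposed intervention on three hypothetical dimensions:
# harm prevented, autonomy respected, benefit to the patient (-2..2).

from math import dist

# (harm_prevented, autonomy_respected, benefit) -> ethicist's verdict
TRAINING_CASES = [
    ((2, -1, 2), "insist"),   # skipping the dose would cause serious harm
    ((0, 1, 0), "accept"),    # refusal is low-stakes; respect the patient
    ((1, 0, 1), "notify"),    # middling conflict: alert a doctor instead
    ((2, -2, 2), "insist"),
    ((0, 2, 1), "accept"),
]

def advise(case):
    """Return the verdict of the nearest precedent (1-nearest-neighbour)."""
    features, verdict = min(TRAINING_CASES, key=lambda c: dist(c[0], case))
    return verdict

print(advise((2, -1, 2)))  # matches a serious-harm precedent -> insist
```

Kaplan's objection applies even to a sketch this small: the "rule" the robot follows lives implicitly in the geometry of the training cases, not in any line of code a reviewer could audit.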
Getting around this problem calls for a different tactic, many engineers say; most are attempting it by creating programs with explicitly formulated rules, rather than asking a robot to derive its own. Last year, Winfield published the results of an experiment that asked: what is the simplest set of rules that would allow a machine to rescue someone in danger of falling into a hole? Most obviously, Winfield realized, the robot needed the ability to sense its surroundings — to recognize the position of the hole and the person, as well as its own position relative to both. But the robot also needed rules allowing it to anticipate the possible effects of its own actions.
Winfield's experiment used hockey-puck-sized robots moving on a surface. He designated some of them 'H-robots' to represent humans, and one — representing the ethical machine — the 'A-robot', named after Asimov. Winfield programmed the A-robot with a rule analogous to Asimov's first law: if it perceived an H-robot in danger of falling into a hole, it must move into the H-robot's path to save it.
Winfield put the robots through dozens of test runs, and found that the A-robot saved its charge each time. But then, to see what the allow-no-harm rule could accomplish in the face of a moral dilemma, he presented the A-robot with two H-robots wandering into danger simultaneously. Now how would it behave?
The results suggested that even a minimally ethical robot could be useful, says Winfield: the A-robot frequently managed to save one 'human', usually by moving first to the one that was slightly closer to it. Sometimes, by moving fast, it even managed to save both. But the experiment also showed the limits of minimalism. In almost half of the trials, the A-robot went into a helpless dither and let both 'humans' perish. To fix that would require extra rules about how to make such choices. If one H-robot were an adult and another were a child, for example, which should the A-robot save first? On matters of judgement like these, not even humans always agree. And often, as Kaplan points out, “we don't know how to codify what the explicit rules should be, and they are necessarily incomplete”.
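Winfield's published setup is richer than this, but the dithering failure has a simple logical core. In this one-dimensional caricature (the geometry and the tie-handling are invented, not taken from his experiment), a "rescue the most endangered H-robot" rule works until the two dangers are exactly symmetric, at which point the rule offers no basis for a choice:

```python
# 1-D caricature of Winfield's experiment (geometry and rule invented here).
# H-robots drift toward a hole at position `hole`; the A-robot's minimal
# ethic is "intercept whichever H-robot is nearest the hole."

def choose_target(a_pos, h_positions, hole=10.0):
    """Return the index of the H-robot to rescue, or None on a dead tie."""
    dangers = [hole - h for h in h_positions]  # smaller = closer to falling
    if len(dangers) > 1 and len(set(dangers)) == 1:
        return None  # symmetric dilemma: the minimal rule cannot decide
    return dangers.index(min(dangers))

print(choose_target(0.0, [7.0, 4.0]))  # 0: the robot at 7.0 is nearer the hole
print(choose_target(0.0, [6.0, 6.0]))  # None: symmetric case, so it dithers
```

Real robots dither for messier reasons — sensor noise, oscillating re-decisions as both targets keep moving — but the lesson is the same one Winfield drew: breaking the tie requires extra rules (age, proximity, probability of success) that the minimal ethic does not contain.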
by Boer Deng, Nature | Read more:
Image: Peter Adams and The Day the Earth Stood Still
Labels:
Critical Thought,
Psychology,
Science,
Technology
Tuesday, June 30, 2015
The Kid Brother
Consider yourself lucky if you have one
Yes, we tossed him like a football when he was two years old. We did. And yes, we folded him like a smiling gangly awkward puppet into a kitchen cabinet. We did that. Yes, we painted his face blue once, and sent him roaring into our teenage sister’s room to wake her up on a Saturday. We did that, too. Yes, we stood in the hospital parking lot with our dad and waved up at the room where our mom stood in the window brandishing our new kid brother who looked from where we stood like a bundle of laundry more than a kid brother. Yes, we gawped at him with disappointment when he came home and was placed proudly on the couch like a mewling prize and we muttered later quietly in our room that he seemed totally useless, brotherwise. We kept checking on him the rest of the day and he never did do anything interesting that we noticed, not even wail or bellow like babies did in the movies and on television, even when you poked him with a surreptitious finger. He just sprawled there looking perfect, and after a while we lost interest and we went upstairs to plot against our sister.
As he grew, he remained the most cheerful compliant complaisant child you ever saw, never complaining in the least when we tossed him or decked him or chose him last for football games or sent him in first as lonely assault force in conflicts of all sorts, and we were always half-forgetting him when we dashed off on adventures and expeditions, and we were always half-absorbed by and half-annoyed with his littlebrotherness, happy to defend him adamantly against the taunts and shoves of others but not at all averse to burling him around like a puppy ourselves. We buried him in the sand up to his jaw at the beach. We spoke to him curtly and cuttingly when we felt that he was the apple of the grandmotherly eye and we were the peach pits, the shriveled potato skins, the sad brown pelts of dead pears. We did that.
And never once that I remember did he hit back, or assault us, or issue snide and sneering remarks, or rat on us to the authorities, or shriek with rage, or abandon us exasperated for the refuge of his friends. Never once that I can remember, and I am ferociously memorious, can I remember him sad or angry or bitter or furious. When I think of him, I see his smile, and never any other look on his face, and isn’t that amazing? Of how many of our friends and family can that be said? Not many, not many; nor can I say it of myself.
But I can say it of my kid brother, and this morning I suggest that those of us with kid brothers are immensely lucky in life, and those of us without kid brothers missed a great gentle gift unlike any other; for older brothers are stern and heroic and parental, lodestars to steer by or steer against, but kid brothers, at least in their opening chapters, are open books, eager and trusting, innocent and gentle; in some deep subtle way they are the best of you, the way you were, the way you hope some part of you will always be; in some odd way, at least for a while, they were the best of your family, too, the essence of what was good and true and holy about the blood that bound you each to each.
by Brian Doyle, The American Scholar | Read more:
Image: markk
Finding the Right Fit for Flying Private
[ed. I usually keep a G450 or AS350 (AStar) on standby, but loan them out to friends occasionally.]
Carlos Urrutia's job is to fly a private jet. But when he is on board the Bombardier Challenger 300, which he has flown for tens of thousands of hours, he does much more than that.
He welcomes the passengers on board. He stows their luggage. He offers each passenger a drink before takeoff, anything from water to coffee to a cocktail that he will mix. If someone can’t figure out how to work one of the eight seats that swivel, or close the lavatory door, he’ll walk back while his co-pilot takes over and explain how it works.
Mr. Urrutia’s plane will also arrive at the destination faster and with less frustration than any first-class traveler on a commercial airline could dream of. It’s a nice way to travel — if you can afford the $10,000 an hour for the trip.
This is the world of private aviation. But even in that world, there are degrees of convenience, comfort and, to many, excess.
“Sometimes people don’t know the difference between their needs and wants,” said Kevin O’Leary, president of Jet Advisors, which offers advice on private aviation options. “We help them analyze their need first and then look at services.”
Those services break down into four categories: chartering a jet, buying a set number of hours in a jet program, getting a fractional interest in a plane or putting down tens of millions of dollars for your own aircraft. Each one has its defenders and its detractors. But Mr. O’Leary says what matters the most is how a private plane is to be used, whether by just one person or several executives.
Chartering a jet works best for those who can plan their trips in advance and are less concerned with the type of aircraft they get.
“Charter is the most flexible,” said Mark H. Lefever, president and chief operating officer of Avjet, a broker and adviser. “You have no monthly bills. You make up how much you want to spend per year and how many trips you want to do.”
He said the cost of a trip from Los Angeles, where Avjet is based, to Martha’s Vineyard would depend on how many people are flying and the level of comfort desired. A smaller Gulfstream G150 would cost about $35,000 one way, while the larger, newer Gulfstream G450 would be $55,000. (...)
The next step up is an hours program, commonly called a jet card. VistaJet allows people to fix their costs by buying the hours they think they’ll need, and adding more if they go over.
The company has 50 Bombardier jets in two sizes — one for flights up to a cross-country trip and another for trans-Atlantic travel — and it is trying to appeal to a global audience with a service branded like a luxury hotel, said Thomas Flohr, VistaJet’s chairman and founder.
For the longer-range Bombardier Global, the cost is $16,000 an hour, meaning 200 hours a year would cost $3.2 million. Over five years, that works out to be about as much as the upfront cost of a quarter share of the same plane, which would be about $14 million, but any share program has additional membership fees and fuel surcharges.
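The article's five-year comparison can be checked directly. The sketch below assumes flat pricing over the whole period and, as the article notes, leaves out the membership fees and fuel surcharges that a fractional share adds:

```python
# Rough cost comparison from the figures quoted in the article
# (flat pricing assumed; share-program fees and fuel surcharges ignored).
hourly_rate = 16_000          # VistaJet Bombardier Global, dollars per hour
hours_per_year = 200
years = 5

jet_card_per_year = hourly_rate * hours_per_year      # $3,200,000
jet_card_total = jet_card_per_year * years            # $16,000,000
quarter_share_upfront = 14_000_000                    # quoted upfront cost

print(f"Jet card: ${jet_card_per_year:,}/year, ${jet_card_total:,} over {years} years")
print(f"Quarter share upfront: ${quarter_share_upfront:,}")
```

So "about as much" in the text means roughly $16 million of jet-card hours against a $14 million quarter share, before the share's recurring fees narrow or widen the gap.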
by Paul Sullivan, NY Times | Read more:
Image: Christopher Capozziello

Monday, June 29, 2015
[ed. So many incredible photos here it was impossible to pick the best, so I just selected the first. Definitely, check this out.]
Trey Ratcliff
via: