Monday, July 6, 2015

Thomas Piketty: Germany Has Never Paid

[ed. See also: End Greece's Bleeding. Update: See also: Austerity has failed: an open letter from Thomas Piketty to Angela Merkel.]

In a forceful interview with German newspaper Die Zeit, the star economist Thomas Piketty calls for a major conference on debt. Germany, in particular, should not withhold help from Greece. This interview has been translated from the original German.

Since his successful book, “Capital in the Twenty-First Century,” the Frenchman Thomas Piketty has been considered one of the most influential economists in the world. His argument for the redistribution of income and wealth launched a worldwide discussion. In an interview with Georg Blume of DIE ZEIT, he gives his clear opinions on the European debt debate.

DIE ZEIT: Should we Germans be happy that even the French government is aligned with the German dogma of austerity?

Thomas Piketty: Absolutely not. This is neither a reason for France, nor Germany, and especially not for Europe, to be happy. I am much more afraid that the conservatives, especially in Germany, are about to destroy Europe and the European idea, all because of their shocking ignorance of history.

ZEIT: But we Germans have already reckoned with our own history.

Piketty: But not when it comes to repaying debts! Germany’s past, in this respect, should be of great significance to today’s Germans. Look at the history of national debt: Great Britain, Germany, and France were all once in the situation of today’s Greece, and in fact had been far more indebted. The first lesson that we can take from the history of government debt is that we are not facing a brand new problem. There have been many ways to repay debts, and not just one, which is what Berlin and Paris would have the Greeks believe.

ZEIT: But shouldn’t they repay their debts?

Piketty: My book recounts the history of income and wealth, including that of nations. What struck me while I was writing is that Germany is really the single best example of a country that, throughout its history, has never repaid its external debt. Neither after the First nor the Second World War. However, it has frequently made other nations pay up, such as after the Franco-Prussian War of 1870, when it demanded massive reparations from France and indeed received them. The French state suffered for decades under this debt. The history of public debt is full of irony. It rarely follows our ideas of order and justice.

ZEIT: But surely we can’t draw the conclusion that we can do no better today?

Piketty: When I hear the Germans say that they maintain a very moral stance about debt and strongly believe that debts must be repaid, then I think: what a huge joke! Germany is the country that has never repaid its debts. It has no standing to lecture other nations.

ZEIT: Are you trying to depict states that don’t pay back their debts as winners?

Piketty: Germany is just such a state. But wait: history shows us two ways for an indebted state to leave delinquency. One was demonstrated by the British Empire in the 19th century after its expensive wars with Napoleon. It is the slow method that is now being recommended to Greece. The Empire repaid its debts through strict budgetary discipline. This worked, but it took an extremely long time. For over 100 years, the British gave up two to three percent of their economy to repay their debts, which was more than they spent on schools and education. That didn’t have to happen, and it shouldn’t happen today. The second method is much faster. Germany proved it in the 20th century. Essentially, it consists of three components: inflation, a special tax on private wealth, and debt relief.

ZEIT: So you’re telling us that the German Wirtschaftswunder [“economic miracle”] was based on the same kind of debt relief that we deny Greece today?

Piketty: Exactly. After the war ended in 1945, Germany’s debt amounted to over 200% of its GDP. Ten years later, little of that remained: public debt was less than 20% of GDP. Around the same time, France managed a similarly artful turnaround. We never would have managed this unbelievably fast reduction in debt through the fiscal discipline that we today recommend to Greece. Instead, both of our states employed the second method with the three components that I mentioned, including debt relief. Think about the London Debt Agreement of 1953, where 60% of German foreign debt was cancelled and its internal debts were restructured.

ZEIT: That happened because people recognized that the high reparations demanded of Germany after World War I were one of the causes of the Second World War. People wanted to forgive Germany’s sins this time!

Piketty: Nonsense! This had nothing to do with moral clarity; it was a rational political and economic decision. They correctly recognized that, after large crises that created huge debt loads, at some point people need to look toward the future. We cannot demand that new generations must pay for decades for the mistakes of their parents. The Greeks have, without a doubt, made big mistakes. Until 2009, the government in Athens forged its books. But despite this, the younger generation of Greeks carries no more responsibility for the mistakes of its elders than the younger generation of Germans did in the 1950s and 1960s. We need to look ahead. Europe was founded on debt forgiveness and investment in the future. Not on the idea of endless penance. We need to remember this.

ZEIT: The end of the Second World War was a breakdown of civilization. Europe was a killing field. Today is different.

Piketty: To deny the historical parallels to the postwar period would be wrong. Let’s think about the financial crisis of 2008/2009. This wasn’t just any crisis. It was the biggest financial crisis since 1929. So the comparison is quite valid. This is equally true for the Greek economy: between 2009 and 2015, its GDP has fallen by 25%. This is comparable to the recessions in Germany and France between 1929 and 1935.

ZEIT: Many Germans believe that the Greeks still have not recognized their mistakes and want to continue their free-spending ways.

Piketty: If we had told you Germans in the 1950s that you had not properly recognized your failures, you would still be repaying your debts. Luckily, we were more intelligent than that.

ZEIT: The German Minister of Finance, on the other hand, seems to believe that a Greek exit from the Eurozone could foster greater unity within Europe.

Piketty: If we start kicking states out, then the crisis of confidence in which the Eurozone finds itself today will only worsen. Financial markets will immediately turn on the next country. This would be the beginning of a long, drawn-out period of agony, in whose grasp we risk sacrificing Europe’s social model, its democracy, indeed its civilization on the altar of a conservative, irrational austerity policy.

ZEIT: Do you believe that we Germans aren’t generous enough?

Piketty: What are you talking about? Generous? Currently, Germany is profiting from Greece as it extends loans at comparatively high interest rates.

by Georg Blume, DIE ZEIT via Zero Hedge |  Read more:
Image: dpa

Sunday, July 5, 2015

Justice and Warfare in Cyberspace

There was a moment during the First Gulf War when ideologues argued that warfare technology had reached a tipping point. Gains in efficiency would reduce casualties and destruction; supremely accurate weapons would minimize unnecessary suffering without compromising military objectives. This inaugurated the age of targeted bombings and stealth missions enabled by precision technology. Now, we are at the threshold of yet another tipping point for war and technology. Software interference and cyber technologies threaten mass disruption and destruction without a shot being fired or a bomb exploding. Physically waged wars—populated and won by armed bodies and manned weaponry—have given way to data and coding wars, creating vast, powerful, and not yet fully tapped spaces and abilities.

Cyberwarfare acts are broadly understood as the use of cyber capabilities for spying or sabotage by one nation against another. However, the term “cyberaggression” can refer to everything from individual cyberbullying and harassment to sabotage that affects national interests. One example of the latter type is the infamous Stuxnet computer worm that targeted and invaded Iranian nuclear facilities in order to derail the Iranian nuclear program. The term ‘cyberaggression’ was also applied to the April 2015 breach of cybersecurity at the White House when sensitive details of the President’s schedule were obtained. It is therefore of little surprise that civilian and military resources to wage and contain cyberaggression are on the rise. Last January, there were reports that North Korea had doubled its military cyberwarfare units to over 6,000 troops.

To be sure, it is not clear when an act is merely an instance of cyberaggression as opposed to an act of war. To complicate matters further, our conception of cyberwarfare and cyberaggression is taking shape against a background of increasing state domestic surveillance and other incursions on privacy, often defended on the basis of considerations of safety or convenience. (...)

In asking the question of what cyberaggression is—and when such aggression constitutes an act of war—we confront questions of how to (or if it is even meaningful to) apply the old paradigms of the state and state sovereignty, and of the laws of war based on them, to the new realities of cyberspace. One of the more important aspects of the traditional laws of war is the question of proportionality. According to standard understandings of the proportionality principle, a military in war has to weigh risk to civilians against the importance of the military objective at hand and the choice of military means to achieve their objective. How would proportionality apply in cyberspace, where victimization is not necessarily physical in personal or economic terms?

Imagine, for example, that the United States assesses a variety of serious cyberthreats coming from a foreign territory—these might range from shutting down White House cybercommunications to disrupting nuclear power plant operations in the US. As a response, the United States neutralizes or erases all of the cyber content created and hosted within that foreign territory so that individuals within this territory are no longer able to have a cyberpresence, effectively wiping out communications between the conspirators behind the serious threat in question. According to Rule 51 of the Tallinn Manual—a non-binding guide to the application of international law to cyberconflicts produced by NATO—collateral damage in cyberwar is acceptable so long as it is not “excessive in relation to the concrete and direct military advantage anticipated.” If no tangible “property” was destroyed and nobody was killed, was the act proportional? Under the current laws of war, the answer could be yes. However, given how much of life has moved or expanded to cyberspace, would this answer pass moral and legal muster?

One argument that may come into play in such new scenarios is the economic one. In aggregate, people’s data is immensely valuable—some estimates put the average value of a Facebook account at $174.17. Even if only a quarter of, say, the Russian population has a Facebook presence, erasing even selective cybercontent would amount to approximately 5.2 billion dollars in damages. However, there is also an added emotional value. Social media is becoming more and more central to the lives of individuals, and the content created, curated, and “owned” in cyberspace is very personal indeed. To lose such a cache would be, to many, devastating in a way that monetary value does not account for.
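
The arithmetic behind that figure is a single multiplication; the short sketch below (purely illustrative, in Python) simply inverts it to show roughly how many erased accounts the cited $5.2 billion implies at $174.17 apiece.

```python
# Back-of-envelope check of the article's figures: at $174.17 per account,
# how many erased accounts add up to roughly $5.2 billion in damages?
PER_ACCOUNT_VALUE = 174.17   # USD per Facebook account, the estimate cited above
TOTAL_DAMAGE = 5.2e9         # USD, the damage figure given in the passage

implied_accounts = TOTAL_DAMAGE / PER_ACCOUNT_VALUE
print(f"Implied accounts erased: about {implied_accounts / 1e6:.0f} million")
# About 30 million accounts, roughly the scale of a quarter of a large
# country's online population, which is the scenario the passage has in mind.
```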

Some argue that unless and until cyberaggression escalates to the point of threatening life and limb, it should not be put into the context of warfare. Others argue that response in kind does nothing to redress the act, and that physical response to certain acts of cyberaggression is the best option. Still others believe that the traditional laws of war are not applicable to cyberwarfare and cyberaggression and that explicit rules about the consequences for cyberaggression should be created. These positions only scratch the surface of the legal and moral challenges ahead.

by Lisa Lucile Owens, Boston Review | Read more:
Image: NASA Marshall

Lee Baker, Storm
via:

Saturday, July 4, 2015

Regulating Sex

[ed. See also: Teenager's jailing brings a call to fix sex offender registries.]

This is a strange moment for sex in America. We’ve detached it from pregnancy, matrimony and, in some circles, romance. At least, we no longer assume that intercourse signals the start of a relationship. But the more casual sex becomes, the more we demand that our institutions and government police the line between what’s consensual and what isn’t. And we wonder how to define rape. Is it a violent assault or a violation of personal autonomy? Is a person guilty of sexual misconduct if he fails to get a clear “yes” through every step of seduction and consummation?

According to the doctrine of affirmative consent — the “yes means yes” rule — the answer is, well, yes, he is. And though most people think of “yes means yes” as strictly for college students, it is actually poised to become the law of the land.

About a quarter of all states, and the District of Columbia, now say sex isn’t legal without positive agreement, although some states undercut that standard by requiring proof of force or resistance as well.

Codes and laws calling for affirmative consent proceed from admirable impulses. (The phrase “yes means yes,” by the way, represents a ratcheting-up of “no means no,” the previous slogan of the anti-rape movement.) People should have as much right to control their sexuality as they do their body or possessions; just as you wouldn’t take a precious object from someone’s home without her permission, you shouldn’t have sex with someone if he hasn’t explicitly said he wants to.

And if one person can think he’s hooking up while the other feels she’s being raped, it makes sense to have a law that eliminates the possibility of misunderstanding. “You shouldn’t be allowed to make the assumption that if you find someone lying on a bed, they’re free for sexual pleasure,” says Lynn Hecht Schafran, director of a judicial education program at Legal Momentum, a women’s legal defense organization.

But criminal law is a very powerful instrument for reshaping sexual mores. Should we really put people in jail for not doing what most people aren’t doing? (Or at least, not yet?) It’s one thing to teach college students to talk frankly about sex and not to have it without demonstrable pre-coital assent. Colleges are entitled to uphold their own standards of comportment, even if enforcement of that behavior is spotty or indifferent to the rights of the accused. It’s another thing to make sex a crime under conditions of poor communication.

Most people just aren’t very talkative during the delicate tango that precedes sex, and the re-education required to make them more forthcoming would be a very big project. Nor are people unerringly good at decoding sexual signals. If they were, we wouldn’t have romantic comedies. “If there’s no social consensus about what the lines are,” says Nancy Gertner, a senior lecturer at Harvard Law School and a retired judge, then affirmative consent “has no business being in the criminal law.”

Perhaps the most consequential deliberations about affirmative consent are going on right now at the American Law Institute. The more than 4,000 law professors, judges and lawyers who belong to this prestigious legal association — membership is by invitation only — try to untangle the legal knots of our time. They do this in part by drafting and discussing model statutes. Once the group approves these exercises, they hold so much sway that Congress and states sometimes vote them into law, in whole or in part. For the past three years, the law institute has been thinking about how to update the penal code for sexual assault, which was last revised in 1962. When its suggestions circulated in the weeks before the institute’s annual meeting in May, some highly instructive hell broke loose.

In a memo that has now been signed by about 70 institute members and advisers, including Judge Gertner, readers have been asked to consider the following scenario: “Person A and Person B are on a date and walking down the street. Person A, feeling romantically and sexually attracted, timidly reaches out to hold B’s hand and feels a thrill as their hands touch. Person B does nothing, but six months later files a criminal complaint. Person A is guilty of ‘Criminal Sexual Contact’ under proposed Section 213.6(3)(a).”

Far-fetched? Not as the draft is written. The hypothetical crime cobbles together two of the draft’s key concepts. The first is affirmative consent. The second is an enlarged definition of criminal sexual contact that would include the touching of any body part, clothed or unclothed, with sexual gratification in mind. As the authors of the model law explain: “Any kind of contact may qualify. There are no limits on either the body part touched or the manner in which it is touched.” So if Person B neither invites nor rebuffs a sexual advance, then anything that happens afterward is illegal. “With passivity expressly disallowed as consent,” the memo says, “the initiator quickly runs up a string of offenses with increasingly more severe penalties to be listed touch by touch and kiss by kiss in the criminal complaint.”

The obvious comeback to this is that no prosecutor would waste her time on such a frivolous case. But that doesn’t comfort signatories of the memo, several of whom have pointed out to me that once a law is passed, you can’t control how it will be used. For instance, prosecutors often add minor charges to major ones (such as, say, forcible rape) when there isn’t enough evidence to convict on the more serious charge. They then put pressure on the accused to plead guilty to the less egregious crime.

The example points to a trend evident both on campuses and in courts: the criminalization of what we think of as ordinary sex and of sex previously considered unsavory but not illegal.

by Judith Shulevitz, NY Times |  Read more:
Image: Yu Man Ma

Happy 4th!
via:

Tom Petty and the Heartbreakers

Friday, July 3, 2015

The Economic Consequences of Austerity


[ed. See also: The elites are determined to end the revolt against austerity in Greece.]

On 5 June 1919, John Maynard Keynes wrote to the prime minister of Britain, David Lloyd George, “I ought to let you know that on Saturday I am slipping away from this scene of nightmare. I can do no more good here.” Thus ended Keynes’s role as the official representative of the British Treasury at the Paris Peace Conference. It liberated Keynes from complicity in the Treaty of Versailles (to be signed later that month), which he detested.

Why did Keynes dislike a treaty that ended the state of war between Germany and the Allied Powers (surely a good thing)?

Keynes was not, of course, complaining about the end of the world war, nor about the need for a treaty to end it, but about the terms of the treaty – and in particular the suffering and the economic turmoil forced on the defeated enemy, the Germans, through imposed austerity. Austerity is a subject of much contemporary interest in Europe – I would like to add the word “unfortunately” somewhere in the sentence. Actually, the book that Keynes wrote attacking the treaty, The Economic Consequences of the Peace, was very substantially about the economic consequences of “imposed austerity”. Germany had lost the battle already, and the treaty was about what the defeated enemy would be required to do, including what it should have to pay to the victors. The terms of this Carthaginian peace, as Keynes saw it (recollecting the Roman treatment of the defeated Carthage following the Punic wars), included the imposition of an unrealistically huge burden of reparation on Germany – a task that Germany could not carry out without ruining its economy. As the terms also had the effect of fostering animosity between the victors and the vanquished and, in addition, would economically do no good to the rest of Europe, Keynes had nothing but contempt for the decision of the victorious four (Britain, France, Italy and the United States) to demand something from Germany that was hurtful for the vanquished and unhelpful for all.

The high-minded moral rhetoric in favour of the harsh imposition of austerity on Germany that Keynes complained about came particularly from Lord Cunliffe and Lord Sumner, representing Britain on the Reparation Commission, whom Keynes liked to call “the Heavenly Twins”. In his parting letter to Lloyd George, Keynes added, “I leave the Twins to gloat over the devastation of Europe.” Grand rhetoric on the necessity of imposing austerity, to remove economic and moral impropriety in Greece and elsewhere, may come more frequently these days from Berlin itself, with the changed role of Germany in today’s world. But the unfavourable consequences that Keynes feared would follow from severe – and in his judgement unreasoned – imposition of austerity remain relevant today (with an altered geography of the morally upright discipliner and the errant to be disciplined).

Aside from Keynes’s fear of economic ruin of a country, in this case Germany, through the merciless scheduling of demanded payments, he also analysed the bad consequences on other countries in Europe of the economic collapse of one of their partners. The thesis of economic interdependence, which Keynes would pursue more fully later (including in his most famous book, The General Theory of Employment, Interest and Money, to be published in 1936), makes an early appearance in this book, in the context of his critique of the Versailles Treaty.

“An inefficient, unemployed, disorganised Europe faces us,” says Keynes, “torn by internal strife and international hate, fighting, starving, pillaging, and lying.” If some of these problems are visible in Europe today (as I believe to some extent they are), we have to ask: why is this so? After all, 2015 is not really anything like 1919, and yet why do the same words, taken quite out of context, look as if there is a fitting context for at least a part of them right now?

If austerity is as counterproductive as Keynes thought, how come it seems to deliver electoral victories, at least in Britain? Indeed, what truth is there in the explanatory statement in the Financial Times, aired shortly after the Conservative victory in the general election, and coming from a leading historian, Niall Ferguson (who, I should explain, is a close friend – our friendship seems to thrive on our persistent disagreement): “Labour should blame Keynes for their election defeat.”

If the point of view that Ferguson airs is basically right (and that reading is shared by several other commentators as well), the imposed austerity we are going through is not a useless nightmare (as Keynes’s analysis would make us believe), but more like a strenuous workout for a healthier future, as the champions of austerity have always claimed. And it is, in this view, a future that is beginning to unfold already in our time, at least in Britain, appreciated by grateful voters. Is that the real story now? And more generally, could “the Heavenly Twins” have been right all along?

by Amartya Sen, The Guardian |  Read more:
Image: William Orpen

Elizabeth Colborne, Mt. Baker, Washington 1927
via:

The Revolution Will Probably Wear Mom Jeans

Not long ago, a curious fashion trend swept through New York City’s hipster preserves, from Bushwick to the Lower East Side. Once, well-heeled twentysomethings had roamed these streets in plaid button-downs and floral playsuits. Now, the reign of the aspiring lumberjacks and their mawkish mates was coming to an end. Windbreakers, baseball caps, and polar fleece appeared among the flannel. Cargo shorts and khakis were verboten no longer. Denim went from dark-rinse to light. Sandals were worn, and sometimes with socks. It was a blast of carefully modulated blandness—one that delighted some fashion types, appalled others, and ignited the critical passions of lifestyle journalists everywhere.

They called it Normcore. Across our Fashion Nation, style sections turned out lengthy pieces exploring this exotic lurch into the quotidian, and trend watchers plumbed every possible meaning in the cool kids’ new fondness for dressing like middle-aged suburbanites. Were hipsters sacrificing their coolness in a brave act of self-renunciation? Was this an object lesson in the futility of ritually chasing down, and then repudiating, the coolness of the passing moment? Or were middle-aged dorks themselves mysteriously cool all of a sudden? Was Normcore just an elaborate prank designed to prove that style writers can be fooled into believing almost anything is trendy? (...)

The Revolt of the Mass Indie Überelite

The adventure began in 2013, and picked up steam early last year with Fiona Duncan’s “Normcore: Fashion for Those Who Realize They’re One in 7 Billion,” a blowout exploration of the anti-individualist Normcore creed for New York magazine. Duncan remembered feeling the first tremors of the revolution:
Sometime last summer I realized that, from behind, I could no longer tell if my fellow Soho pedestrians were art kids or middle-aged, middle-American tourists. Clad in stonewash jeans, fleece, and comfortable sneakers, both types looked like they might’ve just stepped off an R-train after shopping in Times Square. When I texted my friend Brad (an artist whose summer uniform consisted of Adidas barefoot trainers, mesh shorts and plain cotton tees) for his take on the latest urban camouflage, I got an immediate reply: “lol normcore.”
Brad, however eloquent and charming, did not coin the term himself. He got it from K-HOLE, a group of trend forecasters. To judge by K-HOLE’s name alone—a slang term for the woozy aftereffects of the animal tranquilizer and recreational drug ketamine—the group was more than happy to claim Normcore as its own licensed playground. As company principals patiently explained to the New York Times, their appropriation of the name of a toxic drug hangover was itself a sly commentary on the cultural logic of the corporate world’s frenetic cooptation of young people’s edgy habits. At a London art gallery in October 2013, in a paper titled “Youth Mode: A Report on Freedom,” team K-HOLE proposed the Twitter hashtag #Normcore as a rejoinder to such cooptation:
If the rule is Think Different, being seen as normal is the scariest thing. (It means being returned to your boring suburban roots, being turned back into a pumpkin, exposed as unexceptional.) Which paradoxically makes normalcy ripe for the Mass Indie überelites to adopt as their own, confirming their status by showing how disposable the trappings of uniqueness are.
Jargon aside, the report had a point: lately “Mass Indie überelites”—a group more commonly known as hipsters—have been finding it increasingly difficult to express their individuality, the very thing that confers hipster cred.

Part of the problem derives from the hipster’s ubiquity. For the past several years, hipsterism has been an idée fixe in the popular press—coy cultural shorthand in the overlapping worlds of fashion, music, art, and literature for a kind of rebellion that doesn’t quite come off on its own steam. Forward-thinking middle-class youngsters used to strike fear in the hearts of the squares by flouting social norms—at least nominally, until they grew up and settled into their own appointed professional, middle-class destinies. Now, however, the hipster is a benign and well-worn figure of fun: a lumpenbourgeois urbanite perpetually in search of ways to display her difference from the masses. (...)

Food for Thought

Things get even more complicated when you consider the Middle American booboisie on whom Normcore sets its sights. Even as Normcore jeers at neutral, fashion-backward attire, it also manages to exalt the clueless exurbanite by turning her into a fetish object: the Emma Bovary of the strip mall. It’s not clear just how and why hipsters came to fixate on the People of Walmart, but it’s not a passing fancy; one after another, hipsters are elevating dreary things to the height of fashion.

Think of the rise of kale. The once-humble vegetable has ascended to such dizzying heights that Beyoncé wore a sweatshirt emblazoned with “KALE” in one of her recent videos.

See also pizza, a closer edible analogue to Normcore. A friend with ties to the advertising industry informed me of pizza’s edginess sometime last year, directing me to a Tumblr called Slice Guyz that collects pictures of pizza-themed graffiti and the like. Former child star and current hipster Macaulay Culkin started a joke band called the Pizza Underground; it performs selections from the Velvet Underground catalogue repurposed with pizza-themed lyrics. In September, New York magazine—the same oracle that announced the rise of Normcore—anointed pizza as the “chicest new trend.” As incontrovertible evidence that the trend was indeed taking hold, the magazine’s fashion brain trust commissioned layouts of Katy Perry and Beyoncé (now the avatar of food-themed chicness, it would seem) in pizza-print outfits.

To take something recognizably bad, whether pizza or bulky fleece sweatshirts, and try to pass it off as avant-garde self-expression is an incredibly defeatist gesture, one both aware of and happy with its futility. Ceci n’est pas intéressant.

Still, pizza, like denim, is accessible to all Americans and crafted with wildly different levels of competence, self-awareness, and artisanal intent. Papa John’s or Little Caesars may deliver glorified tomato-paste-on-cardboard alongside tubs of dipping butter to a nation of indifferent proles. But if you ask New York’s infinitely more with-it pizza correspondents, they’ll tell you, with numbing precision, that pizza can be “toppings-forward” and “avant-garde.” This range makes pizza the perfect hipster quarry: sometimes mundane, sometimes aspirational, and above all, exotic. (...)

Before you can say “plain Hanes tee,” this longing can shade again into contempt. When urban hipsters fetishize the déclassé and the mundane, they rely on their understanding of middle America as a colony, one filled with happy proles to be mined for fashion inspiration. This is as true for hipsters as it is for Glenn Beck, whose bone-deep cynicism about the heartland is simply an amplified version of the same infatuated disdain cultivated by a deliberately dowdy Brooklynite. How else can one account for the steady migration of Normcore into the very corporate world that calls the shots on what we buy and how—a world in which web designers, programmers, stylists, advertising executives, and other masters of the knowledge economy now dress up like call-center drones headed to the Dollar Store?

by Eugenia Williamson, The Baffler |  Read more:
Image: Hollie Chastain

Spring King

The Sofalarity

[ed. See also: The problem with easy technology.]

Imagine that two people are carving a six-foot slab of wood at the same time. One is using a hand-chisel, the other, a chainsaw. If you are interested in the future of that slab, whom would you watch?

This chainsaw/chisel logic has led some to suggest that technological evolution is more important to humanity’s near future than biological evolution; nowadays, it is not the biological chisel but the technological chainsaw that is most quickly redefining what it means to be human. The devices we use change the way we live much faster than any contest among genes. We’re the block of wood, even if, as I wrote in January, sometimes we don’t even fully notice that we’re changing.

Assuming that we really are evolving as we wear or inhabit more technological prosthetics—like ever-smarter phones, helpful glasses, and brainy cars—here’s the big question: Will that type of evolution take us in desirable directions, as we usually assume biological evolution does?

Some, like the Wired founder Kevin Kelly, believe that the answer is a resounding “yes.” In his book “What Technology Wants,” Kelly writes: “Technology wants what life wants: Increasing efficiency; Increasing opportunity; Increasing emergence; Increasing complexity; Increasing diversity; Increasing specialization; Increasing ubiquity; Increasing freedom; Increasing mutualism; Increasing beauty; Increasing sentience; Increasing structure; Increasing evolvability.” (...)

Biological evolution is driven by survival of the fittest, as adaptive traits are those that make the survival and reproduction of a population more likely. It isn’t perfect, but at least, in a rough way, it favors organisms who are adapted to their environments.

Technological evolution has a different motive force. It is self-evolution, and it is therefore driven by what we want as opposed to what is adaptive. In a market economy, it is even more complex: for most of us, our technological identities are determined by what companies decide to sell based on what they believe we, as consumers, will pay for. As a species, we often aren’t much different from the Oji-Cree. Comfort-seeking missiles, we spend the most to minimize pain and maximize pleasure. When it comes to technologies, we mainly want to make things easy. Not to be bored. Oh, and maybe to look a bit younger.

Our will-to-comfort, combined with our technological powers, creates a stark possibility. If we’re not careful, our technological evolution will take us toward not a singularity but a sofalarity. That’s a future defined not by an evolution toward superintelligence but by the absence of discomforts.

by Tim Wu, New Yorker |  Read more:
Image: Hannah K. Lee

Wednesday, July 1, 2015


Hieronymus Bosch, The Garden of Earthly Delights, c.1500 (detail)
via:

To Save California, Read “Dune”


[ed. We might also stop promoting unsustainable developments. Las Vegas, anyone? See also: Holy Crop.] 

Fifty years ago science-fiction author Frank Herbert seized the imagination of readers with his portrayal of a planet on which it never rained. In the novel Dune, the scarcest resource is water, so much so that the mere act of shedding a tear or spitting on the floor takes on weighty cultural significance.

To survive their permanent desert climate, the indigenous Fremen of Dune employ every possible technology. They build “windtraps” and “dew collectors” to grab the slightest precipitation out of the air. They construct vast underground cisterns and canals to store and transport their painstakingly gathered water. They harvest every drop of moisture from the corpses of the newly dead. During each waking moment they dress in “stillsuits”—head-to-toe wetsuit-like body coverings that recycle sweat, urine, and feces back into drinking water.

Described by Dune’s “planetary ecologist,” Liet-Kynes, as “a micro-sandwich—a high-efficiency filter and heat exchange system”—the stillsuit is a potent metaphor for reuse, reclamation, and conservation. Powered by the wearer’s own breathing and movement, the stillsuit is the technical apotheosis of the principle of making do with what one has.

Someday, sooner than we’d like, it’s not inconceivable that residents of California will be shopping on Amazon for the latest in stillsuit tech. Dune is set thousands of years in the future, but in California in 2015, the future is now. Four years of drought have pummeled reservoirs and forced mandatory 25 percent water rationing cuts. The calendar year of 2014 was the driest (and hottest) since records started being kept in the 1800s. At the end of May, the Sierra Nevada snowpack—a crucial source of California’s water—hit its lowest point on record: zero. Climate models suggest an era of mega-droughts could be nigh.

Which brings us to Daniel Fernandez, a professor of science and environmental policy at California State University, Monterey Bay, and Peter Yolles, the co-founder of a San Francisco water startup, WaterSmart, that assists water utilities in encouraging conservation by crunching data on individual water consumption. Fernandez spends his days building and monitoring fogcatchers, remarkably Dune-like devices that have the property of converting fog into potable water. “I think about Dune a lot,” Fernandez says. “The ideas have really sat with me. In the book, they revere water, and ask, what do we do?” Similarly, Yolles says, “I remember being fascinated by the stillsuits. That was a striking technology, really poignant.” And inspiring. The fictional prospect of a dystopian future, Yolles says, “helped me see problems that we have, and where things might go.”

Science fiction boasts a long history of influencing the course of scientific and technological development. The inventors of the submarine and the helicopter credited Jules Verne for dreaming up both their inventions. Star Trek’s tricorder inspired generations of engineers to perfect the smartphone. Nobel Prize-winning economist Paul Krugman credits a character in Isaac Asimov’s Foundation trilogy for his motivation: “I grew up wanting to be Hari Seldon, using my understanding of the mathematics of human behavior to save civilization.” “Anything one man can imagine, another man can make real,” wrote Verne in Around the World in 80 Days. The future is as malleable as the written word.

So it shouldn’t be a surprise that two innovative thinkers devising means to address drought in California should be talking about Dune. As I visited with Yolles and Fernandez to learn about their work confronting drought, I realized the missions of both men embodied a deeper ecological message in Dune. The novel’s ecologist Kynes is famous for teaching that “the highest function of ecology is understanding consequences.” The implicit lesson for society, as it marshals technology to address a waterless world, is that technological fixes work only in the context of an environmentally and socially connected vision. It’s the vision that guided Herbert in creating Dune, and it owes as much to our ancient past as it is a speculation on the future.

According to a biography of Herbert, Dreamer of Dune, written by his son Brian, the genesis of the novel came when Herbert, a long-time journalist who worked for a string of Northern California newspapers, landed an assignment in 1957 to write a story about a United States Department of Agriculture project to control spreading sand dunes with European beach grasses on the coast of Oregon. Surveying the highway-encroaching dunes from a low-flying aircraft, Herbert became fascinated by the implications of this clash between human and nature. The project, he later wrote, “fed my interest in how we inflict ourselves upon our planet. I could begin to see the shape of a global problem, no part of it separated from any other—social ecology, political ecology, economic ecology.” He chose the title Dune, he said, because of its onomatopoetic similarity to the word “doom.” He hoped Dune would serve as an “ecological awareness handbook.”

His wish came true. Along with Rachel Carson’s environmental call to arms, Silent Spring, published in 1962, Dune, says Robert France, a professor of watershed management at Dalhousie University, “played a very important role in increasing global consciousness about environmental concerns in general.” France says the massively popular reaction to Dune was a key part in the events that led up to the creation of Earth Day. Herbert frequently corresponded with the founder of Earth Day, Ira Einhorn, and was a featured speaker at the first Earth Day, in 1970.

Herbert’s role in the budding environmental movement is proof science fiction can and does play a role in how we live in the present. But one of the more remarkable things about Dune is how rooted its story is in the ancient past. According to Brian Herbert, his father spent five years researching desert cultures and “dry-land ecology” before writing the novel. There’s a reason why the Fremen language looks and sounds like Arabic, and the Fremen people bear more than a passing resemblance to Bedouin nomads. Herbert did his homework. A civilization flourished in the Middle East 2,000 years ago that, by necessity, used every bit of available technology to maximize their access to water. “The closest historic parallel to the Dune Fremen,” says France, “are the Nabateans, proto-Semitic Arabs who lived at the southern end of the Dead Sea.”

by Andrew Leonard, Nautilus |  Read more:
Image: Gary Jamroz-Palma

Machine Ethics: The Robot’s Dilemma

In his 1942 short story 'Runaround', science-fiction writer Isaac Asimov introduced the Three Laws of Robotics — engineering safeguards and built-in ethical principles that he would go on to use in dozens of stories and novels. They were: 1) A robot may not injure a human being or, through inaction, allow a human being to come to harm; 2) A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law; and 3) A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Fittingly, 'Runaround' is set in 2015. Real-life roboticists are citing Asimov's laws a lot these days: their creations are becoming autonomous enough to need that kind of guidance. In May, a panel talk on driverless cars at the Brookings Institution, a think tank in Washington DC, turned into a discussion about how autonomous vehicles would behave in a crisis. What if a vehicle's efforts to save its own passengers by, say, slamming on the brakes risked a pile-up with the vehicles behind it? Or what if an autonomous car swerved to avoid a child, but risked hitting someone else nearby?

“We see more and more autonomous or automated systems in our daily life,” said panel participant Karl-Josef Kuhn, an engineer with Siemens in Munich, Germany. But, he asked, how can researchers equip a robot to react when it is “making the decision between two bad choices”?

The pace of development is such that these difficulties will soon affect health-care robots, military drones and other autonomous devices capable of making decisions that could help or harm humans. Researchers are increasingly convinced that society's acceptance of such machines will depend on whether they can be programmed to act in ways that maximize safety, fit in with social norms and encourage trust. “We need some serious progress to figure out what's relevant for artificial intelligence to reason successfully in ethical situations,” says Marcello Guarini, a philosopher at the University of Windsor in Canada.

Several projects are tackling this challenge, including initiatives funded by the US Office of Naval Research and the UK government's engineering-funding council. They must address tough scientific questions, such as what kind of intelligence, and how much, is needed for ethical decision-making, and how that can be translated into instructions for a machine. Computer scientists, roboticists, ethicists and philosophers are all pitching in.

“If you had asked me five years ago whether we could make ethical robots, I would have said no,” says Alan Winfield, a roboticist at the Bristol Robotics Laboratory, UK. “Now I don't think it's such a crazy idea.”

Learning machines

In one frequently cited experiment, a commercial toy robot called Nao was programmed to remind people to take medicine.

“On the face of it, this sounds simple,” says Susan Leigh Anderson, a philosopher at the University of Connecticut in Stamford who did the work with her husband, computer scientist Michael Anderson of the University of Hartford in Connecticut. “But even in this kind of limited task, there are nontrivial ethics questions involved.” For example, how should Nao proceed if a patient refuses her medication? Allowing her to skip a dose could cause harm. But insisting that she take it would impinge on her autonomy.

To teach Nao to navigate such quandaries, the Andersons gave it examples of cases in which bioethicists had resolved conflicts involving autonomy, harm and benefit to a patient. Learning algorithms then sorted through the cases until they found patterns that could guide the robot in new situations.

With this kind of 'machine learning', a robot can extract useful knowledge even from ambiguous inputs (see go.nature.com/2r7nav). The approach would, in theory, help the robot to get better at ethical decision-making as it encounters more situations. But many fear that the advantages come at a price. The principles that emerge are not written into the computer code, so “you have no way of knowing why a program could come up with a particular rule telling it something is ethically 'correct' or not”, says Jerry Kaplan, who teaches artificial intelligence and ethics at Stanford University in California.
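
The article gives no code for the Andersons' system, but the learning-from-cases idea can be illustrated with a deliberately tiny sketch: encode each resolved bioethics case as a few numeric features, label it with the action the ethicists endorsed, and let a nearest-case rule generalize to new situations. Everything in the snippet below (the features, the cases, and the labels) is invented for illustration and is not the Andersons' model.

```python
from math import dist

# Each resolved case: (harm_if_skipped, strength_of_refusal, expected_benefit),
# all scaled to [0, 1], labelled with the action ethicists endorsed. Every case
# and label here is hypothetical, invented purely to illustrate the approach.
resolved_cases = [
    ((0.9, 0.2, 0.9), "insist and notify the doctor"),     # serious harm, weak refusal
    ((0.8, 0.9, 0.8), "notify the doctor, do not insist"),  # serious harm, strong refusal
    ((0.2, 0.8, 0.3), "accept the refusal"),                # little harm, strong refusal
    ((0.3, 0.1, 0.4), "gently remind again later"),         # little harm, weak refusal
]

def advise(situation, cases=resolved_cases):
    """Recommend the action from the most similar resolved case (1-nearest neighbour)."""
    features, action = min(cases, key=lambda case: dist(case[0], situation))
    return action

# A new situation: moderately important medication, firm refusal.
print(advise((0.6, 0.85, 0.6)))   # -> "notify the doctor, do not insist"
```

Kaplan's worry is visible even in this toy: the "rule" the robot ends up following lives implicitly in the stored cases and the distance metric, not in any clause a reviewer could inspect.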

Getting around this problem calls for a different tactic, many engineers say; most are attempting it by creating programs with explicitly formulated rules, rather than asking a robot to derive its own. Last year, Winfield published the results of an experiment that asked: what is the simplest set of rules that would allow a machine to rescue someone in danger of falling into a hole? Most obviously, Winfield realized, the robot needed the ability to sense its surroundings — to recognize the position of the hole and the person, as well as its own position relative to both. But the robot also needed rules allowing it to anticipate the possible effects of its own actions.

Winfield's experiment used hockey-puck-sized robots moving on a surface. He designated some of them 'H-robots' to represent humans, and one — representing the ethical machine — the 'A-robot', named after Asimov. Winfield programmed the A-robot with a rule analogous to Asimov's first law: if it perceived an H-robot in danger of falling into a hole, it must move into the H-robot's path to save it.

Winfield put the robots through dozens of test runs, and found that the A-robot saved its charge each time. But then, to see what the allow-no-harm rule could accomplish in the face of a moral dilemma, he presented the A-robot with two H-robots wandering into danger simultaneously. Now how would it behave?

The results suggested that even a minimally ethical robot could be useful, says Winfield: the A-robot frequently managed to save one 'human', usually by moving first to the one that was slightly closer to it. Sometimes, by moving fast, it even managed to save both. But the experiment also showed the limits of minimalism. In almost half of the trials, the A-robot went into a helpless dither and let both 'humans' perish. To fix that would require extra rules about how to make such choices. If one H-robot were an adult and another were a child, for example, which should the A-robot save first? On matters of judgement like these, not even humans always agree. And often, as Kaplan points out, “we don't know how to codify what the explicit rules should be, and they are necessarily incomplete”.
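
The article describes Winfield's rule only in prose, but its shape (intercept any 'human' in danger of falling into the hole and, when more than one is endangered, head for the nearer) can be sketched in a few lines of Python. The coordinates, danger threshold, and tie-breaking below are illustrative assumptions rather than Winfield's actual controller; the sketch only shows how thin such a minimal rule is and where the dithering can come from.

```python
from math import dist
from typing import Optional

DANGER_RADIUS = 1.0   # assumption: an H-robot this close to the hole counts as "in danger"

def choose_rescue_target(a_pos, h_positions, hole_pos) -> Optional[int]:
    """Minimal analogue of Asimov's first law for the A-robot: among H-robots
    in danger of falling into the hole, head for the one nearest the A-robot.
    Returns the index of the chosen H-robot, or None if nobody is in danger."""
    endangered = [i for i, h in enumerate(h_positions)
                  if dist(h, hole_pos) < DANGER_RADIUS]
    if not endangered:
        return None
    # The tie-break is the weak point: with two H-robots at nearly equal
    # distance, a controller re-evaluating this rule every tick can oscillate
    # between targets, the "dither" that let both 'humans' perish in some trials.
    return min(endangered, key=lambda i: dist(a_pos, h_positions[i]))

# Both H-robots are in danger; the rule sends the A-robot to the nearer one.
print(choose_rescue_target(a_pos=(0.0, 0.0),
                           h_positions=[(0.6, 0.6), (1.2, 1.0)],
                           hole_pos=(1.0, 1.0)))   # -> 0
```

Encoding the further judgements the article raises, such as saving a child before an adult, would mean adding more hand-written clauses, which is exactly where the problem of incomplete rule sets bites.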

by Boer Deng, Nature |  Read more:
Image: Peter Adams and Day The Earth Stood Still