Monday, April 28, 2014

Growing-Ups

One of the most common insults to today’s emerging adults is that they’re lazy. According to this view, young people are ‘slackers’ who avoid work whenever possible, preferring to sponge off their parents for as long as they can get away with it. One of the reasons they avoid real work is that they have an inflated sense of entitlement. They expect work to be fun, and if it’s not fun, they refuse to do it.

It’s true that emerging adults have high hopes for work, and even, yes, a sense of being entitled to enjoy their work. Ian, a 22-year-old who was interviewed for my 2004 book, chose to go into journalism, even though he knew that: ‘If I’m a journalist making $20,000 a year, my dad [a wealthy physician] makes vastly more than that.’ More important than the money was finding a job that he could love. ‘If I enjoy thoroughly doing what I’m doing in life, then I would be better off than my dad.’ Emerging adults enter the workplace seeking what I call identity-based work, meaning a job that will be a source of self-fulfillment and make the most of their talents and interests. They want a job that they will look forward to doing when they get up each morning.

You might think that this is not a realistic expectation for work, and you are right. But keep in mind it was their parents’ generation, the Baby Boomers, who invented the idea that work should be fun. No one had ever thought so before. Baby Boomers rejected the traditional assumption that work was a dreary but unavoidable part of the human condition. They declared that they didn’t want to spend their lives simply slaving away – and their children grew up in this new world, assuming that work should be meaningful and self-fulfilling. Now that those children are emerging adults, their Baby Boomer parents and employers grumble at their presumptuousness.

So, yes, emerging adults today have high and often unrealistic expectations for work, but lazy? That’s laughably false. While they look for their elusive dream job, they don’t simply sit around and play video games and update their Facebook page all day. The great majority of them spend most of their twenties in a series of unglamorous, low-paying jobs as they search for something better. The average American holds ten different jobs between the ages of 18 and 29, and most of them are the kinds of jobs that promise little respect and less money. Have you noticed who is waiting on your table at the restaurant, working the counter at the retail store, stocking the shelves at the supermarket? Most of them are emerging adults. Many of them are working and attending school at the same time, trying to make ends meet while they strive to move up the ladder. It’s unfair to tar the many hard-working emerging adults with a stereotype that is true for only a small percentage of them.

Is striving for identity-based work only for the middle class and the wealthy, who have the advantages in American society? Yes and no. The aspiration stretches across social classes: in the national Clark poll, 79 per cent of 18- to 29-year-olds agreed that: ‘It is more important for me to enjoy my job than to make a lot of money,’ and there were no differences across social class backgrounds (represented by mother’s education). However, the reality is quite different from the aspiration. Young Americans from lower social class backgrounds are far less likely than those from higher social class backgrounds to obtain a college education and, without a college degree, jobs of any kind are scarce in the modern information-based economy. The current US unemployment rate is twice as high for those with only a high-school degree or less than it is for those with a four-year college degree. In the national Clark poll, emerging adults from lower social class backgrounds were far more likely than their more advantaged peers to agree that ‘I have not been able to find enough financial support to get the education I need.’ That’s not their fault. It is the fault of their society, which short-sightedly fails to fund education and training adequately, and thereby squanders the potential and aspirations of the young.

Another widespread slur against emerging adults is that they are selfish. Some American researchers – most notoriously Jean Twenge, a professor at San Diego State University and a well-known writer and speaker – claim that young people today have grown more ‘narcissistic’ compared with their equivalents 30 or 40 years ago. This claim is based mainly on surveys of college students that show increased levels of self-esteem. Today’s students are more likely than in the past to agree with statements such as: ‘I am an important person.’

With this stereotype, too, there is a grain of truth that has been vastly overblown. It’s probably true that most emerging adults today grow up with a higher level of self-esteem than in previous generations. Their Baby Boomer parents have been telling them from the cradle onward: ‘You’re special!’ ‘You can be whatever you want to be!’ ‘Dream big dreams!’ and the like. Popular culture has reinforced these messages, in movies, television shows and songs. Well, they actually believed it. In the national Clark poll, nearly all 18- to 29-year-olds (89 per cent) agreed with the statement: ‘I am confident that eventually I will get what I want out of life.’

But – and this is the key point – that doesn’t mean they’re selfish. It certainly doesn’t mean they are a generation of narcissists. It simply means that they are highly confident in their abilities to make a good life for themselves, whatever obstacles they might face. Would we prefer that they cringed before the challenges of adulthood? I have come to see their high self-esteem and confidence as good psychological armour for entering a tough adult world. Most people get knocked down more than once in the course of their 20s, by love, by work, by any number of dream bubbles that are popped by rude reality. High self-esteem is what allows them to get up again and continue moving forward. For example, Nicole, 25, grew up in poverty as the oldest of four children in a household with a mentally disabled mother and no father. Her goals for her life have been repeatedly delayed or driven off track by her family responsibilities. Nevertheless, she is pursuing a college degree and is determined to reach her ultimate goal of getting a PhD. Her self-belief is what has enabled her to overcome a chaotic childhood full of disadvantages. ‘It’s like, the more you come at me, the stronger I’m going to be,’ she told me when I interviewed her for my 2004 book.

by Jeffrey Jensen Arnett, Aeon | Read more:
Image: Thomas Peter/Reuters

Leo Kottke

Goodbye Net Neutrality - Life in the Fast Lane

The principle that all Internet content should be treated equally as it flows through cables and pipes to consumers looks all but dead.

The Federal Communications Commission said on Wednesday that it would propose new rules that allow companies like Disney, Google or Netflix to pay Internet service providers like Comcast and Verizon for special, faster lanes to send video and other content to their customers.

The proposed changes would affect what is known as net neutrality — the idea that no providers of legal Internet content should face discrimination in providing offerings to consumers, and that users should have equal access to see any legal content they choose.

The proposal comes three months after a federal appeals court struck down, for the second time, agency rules intended to guarantee a free and open Internet.

Tom Wheeler, the F.C.C. chairman, defended the agency’s plans late Wednesday, saying speculation that the F.C.C. was “gutting the open Internet rule” is “flat out wrong.” Rather, he said, the new rules will provide for net neutrality along the lines of the appeals court’s decision.

Still, the regulations could radically reshape how Internet content is delivered to consumers. For example, if a gaming company cannot afford the fast track to players, customers could lose interest and its product could fail.

The rules are also likely to eventually raise prices as the likes of Disney and Netflix pass on to customers whatever they pay for the speedier lanes, which are the digital equivalent of an uncongested car pool lane on a busy freeway.

Consumer groups immediately attacked the proposal, saying that not only would costs rise, but also that big, rich companies with the money to pay large fees to Internet service providers would be favored over small start-ups with innovative business models — stifling the birth of the next Facebook or Twitter.

“If it goes forward, this capitulation will represent Washington at its worst,” said Todd O’Boyle, program director of Common Cause’s Media and Democracy Reform Initiative. “Americans were promised, and deserve, an Internet that is free of toll roads, fast lanes and censorship — corporate or governmental.”

If the new rules deliver anything less, he added, “that would be a betrayal.”

by Edward Wyatt, NY Times |  Read more:
Image: Daniel Rosenbaum for The New York Times

Hubert Wolfs (Belgian, 1899-1937), Composition with fish, 1926.
via:

Sunday, April 27, 2014

The Possibility of Self-Sacrifice


Normally, death is present in our lives as an ending-yet-to-arrive. For most of us, Simone Weil writes, “Death appears as a limit set in advance on the future.” We make plans, pursue goals, navigate relationships—all under the condition of death. We lead our lives under the condition of death; our actions are shaped by it as a surface is shaped by its boundaries.

However, as we approach this boundary, when our end is present, we are nothing but terror. All pursuits disintegrate, and our self-understanding collapses. At once we are expelled from the sphere of meaning. We are nothing more than this body. This body and its last breath. It is not simply that we cannot survive our own death; we cannot bear the sight of it. We do not want to die. Not now.

And yet the possibility of self-sacrifice suggests that this terror can be overcome, that death can be meaningful. One recent example is that of Mohamed Bouazizi, the Tunisian street vendor who set himself on fire in December 2010 and whose death put in motion the massive uprising known as the Arab Spring. But there are many less noted acts of self-sacrifice. In different places and moments in time, in different languages and cultures, soldiers, activists, lovers, friends, and parents exhibit a willingness to die that demands our attention.

Such acts, so difficult to comprehend, may seem at first sight unworthy of serious consideration. But rushing to this conclusion would be a mistake. It is not only that by dismissing acts of self-sacrifice as unintelligible we disavow a prevalent and influential human phenomenon. Understanding these acts may also shed light on the way we value things more generally. Indeed, we will see that even if most of us will never actually take such extreme measures, the possibility of self-sacrifice is part of living a meaningful life.

Consider, then, three famous individuals whose deaths are often seen as examples of self-sacrifice. As we consider their deaths, we will come to realize that only one of them found meaning in her own life and that, surprisingly, of the three it is only her death that may properly be called an act of self-sacrifice.

Three Sights of Death
First: a seventy-year-old man. His beard elongates his stocky face. He has an exceptionally broad and flattened nose. Its nostrils flare with each breath, as if each drawing of air originates in a new, voluntary decision to inhale. We watch him in the early hours of dawn, sitting quietly in his prison cell. The skin on his forehead is wrinkled and soft and covered with dried sweat. He has not bathed during these days of waiting, of which this day is second to last.

The old man’s eyes are fixed on the spot where his friend and disciple had just stood. Only a moment ago his friend spoke hopefully, pleading with the old man to flee this jail, to come with him, to save himself. Perhaps the old man is considering the state of mind his friend must be in now: walking back, his mission unfulfilled, unable to understand why his wish that life prevail was shattered by the wisest of men.

Crito has left, and Socrates, who thrived most amongst the crowds, is now alone in his cell, having turned down his last chance of survival. On the day after tomorrow he will drink a cup of poisoned hemlock and expire in accordance with the decision of the Athenian court. The sun rises in the sky outside and daylight fills the dank, dusty cell. Socrates breathes calmly, his nostrils flare and contract. It is summer, 399 BCE.

Socrates could not acknowledge that his death would be a terrible loss.

Second: two months short of his forty-sixth birthday, a man in uniform steps from a balcony into an office that is not his own. The office belongs to the commandant of the Japan Self-Defense Force, who is tied and held at sword-point near the wall. “I don’t even think they heard me,” the man says, as he undoes the last button of his uniform jacket.

A moment ago, standing on the balcony, he had called upon the 800 soldiers of the 32nd Regiment to rise against Japan’s liberal-democratic constitution in the name of the country’s history and tradition: “Will you abide a world in which the spirit is dead and there is only a reverence for life?” he asked. The soldiers jeered and hissed, “Let the commandant go!” “Come down off there!” He was not able to finish his speech and decided to move forward with his plan. He motioned to his soldier, Morita, and together they called out, “Long live his imperial Majesty! Long live his imperial Majesty! Long live his imperial Majesty!”

Having failed to inspire a coup d’état, the man sits on the floor of the commandant’s office, and Morita takes his place behind him and slightly to his left, a sword raised above his head. The man grasps a short sword with both hands and points it to his stomach. His strong eyebrows sharpen, but his face is still imprinted with vulnerability. He was a tender, sickly child, and though his features have hardened over the years, though he is now the commander of his own army, the Shield Society, the fragile essence of his face has remained: his right eye slightly larger than his left and higher on his face.

The next moment, the man will disembowel himself. He will then be decapitated by Morita, with the help of another soldier, Furu-Koga, who will, in turn, decapitate Morita and thereby complete the seppuku ritual and the effort of the Shield Society to revive “the spirit of Japan.” This will be the end of one of Japan’s most celebrated and prolific authors, Yukio Mishima, in Tokyo, November 25, 1970.

Third: a forty-year-old woman in the midst of a great crowd of people huddled around the Epsom Downs Racecourse, south of London. Her thin lips are pressed together, her eyes, normally weary and doubtful, are filled with intent. She stands close to the barrier that separates the masses from the racetrack and watches the horses’ hooves thump the ground.

The socialites, the gamblers, the peddlers, the riff-raff, the jockeys, King George V and his wife Queen Mary—all are present and following the race. None entertains a shred of doubt that the Derby will run its course. Emily Wilding Davison, a militant suffragette, resists this overwhelming certainty. She stands still, around her the incessant movement of things and events, the habitual pattern of human and animal affairs. She clasps the metal bar and takes a deep breath. She slips under the railing, the suffragettes’ flag on her body, and runs to the king’s horse, which has appeared around the bend. Days later she will die of her injuries. It is June 4, 1913—Derby Day.

Socrates, Yukio Mishima, and Emily Wilding Davison. Perhaps the only thing they have in common is their public embrace of death. All three had sufficient time and leisure to consider their options, to choose their ending, and all saw death as their final, life-affirming action.

by Oded Na’aman, Boston Review | Read more:
Image: uncredited

Dexter Dalwood
via:

How BP's Gulf Settlement Became Its Target

The oil rig fire at the Macondo Prospect on April 20, 2010, and the nearly unstoppable fountain of oil that followed produced the largest marine oil spill in the nation’s history. The oil poured into the gulf for 87 days, fouling an estimated 68,000 square miles of waters and almost 500 miles of coastline from Louisiana to Florida.

With complex ripple effects and lingering uncertainty about the health of the gulf, the economic impact remains nearly impossible to quantify. But BP, exceeding its obligations under the federal Oil Pollution Act to compensate victims, set up a multibillion-dollar program and turned to Kenneth R. Feinberg, an expert in administering complicated programs like the Sept. 11 victims compensation fund, to run it. (...)

But in meeting halls and boardrooms along the gulf, Mr. Feinberg’s compensation program was criticized as being confusing and unpredictable.

“I can’t tell you how many times we did our financials,” said Michael Hinojosa, the owner of Midship Marine, a boat building company in Harvey. “They always asked for more documentation. They kept asking for more and more, and we kept giving it to them.”

With anger and criticism mounting, BP and a committee of plaintiffs’ lawyers began to work out, over nearly 150 negotiating sessions, a broad settlement that could help the company limit the scope of any coming litigation. Negotiators envisioned a program very different from Mr. Feinberg’s. The new claims process was intended to be accessible, and the explicitly stated goal was to help claimants get the largest amount for which they qualified.

Both sides agreed on a claims administrator: Patrick Juneau, a laid-back Louisiana lawyer and a veteran administrator of major settlement funds, including the one resulting from lawsuits over the painkilling drug Vioxx.

A central element of the agreement, however, would prove to be a time bomb. Instead of having claims calculators contend with different kinds of arguable evidence to prove that damage was linked to the spill, the negotiators came up with a formula that relied solely on financial data for proof of harm. If a business was in a certain region and could prove that its income dropped and rose again in a specific pattern during 2010, that would be enough to establish a claim.
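The logic of such a test is easy to sketch. What follows is a hypothetical, minimal illustration of a "drop and rebound" revenue check of the kind the paragraph describes; the settlement's actual thresholds, benchmark periods, and geographic zones are not given here, so the function name, the 15 and 10 percent cutoffs and the May-September window below are invented for illustration only.

```python
# Hypothetical sketch of a "drop and rebound" revenue test. The real
# settlement formula's thresholds, benchmark years, and zones are not
# specified in the article; every number and name here is illustrative.

def shows_drop_and_rebound(rev_2009, rev_2010, rev_2011,
                           spill_months=(5, 6, 7, 8, 9),
                           drop_threshold=0.15, rebound_threshold=0.10):
    """Each rev_* argument maps month number (1-12) to that month's revenue.

    True if revenue in the chosen 2010 months fell by at least
    drop_threshold versus the same months in 2009, then recovered by at
    least rebound_threshold in the same months of 2011.
    """
    base = sum(rev_2009[m] for m in spill_months)
    spill = sum(rev_2010[m] for m in spill_months)
    after = sum(rev_2011[m] for m in spill_months)
    if base <= 0 or spill <= 0:
        return False
    dropped = (base - spill) / base >= drop_threshold
    rebounded = (after - spill) / spill >= rebound_threshold
    return dropped and rebounded


# Illustrative use: a business whose mid-2010 revenue dipped and then
# bounced back passes this purely financial test, with no argument
# needed about what caused the dip.
rev_2009 = {m: 100_000 for m in range(1, 13)}
rev_2010 = {m: (70_000 if 5 <= m <= 9 else 100_000) for m in range(1, 13)}
rev_2011 = {m: 95_000 for m in range(1, 13)}
print(shows_drop_and_rebound(rev_2009, rev_2010, rev_2011))  # True
```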

To David A. Logan, the dean of the law school at Roger Williams University in Rhode Island, this was a creative way to avoid endless minitrials. “The whole idea is to make this proceed without laborious technical findings on causation,” he said.

The deal covered a broad array of businesses in the gulf states, stretching all the way to the Tennessee state line. It offered those claiming damages the potential of maximizing compensation. For BP, it promised to sharply reduce the number of litigants it would face.

The agreement was unusual in another critical way: There was no limit on the overall payout. Aside from a fund for the seafood industry capped at $2.3 billion, BP had agreed not to turn off the spigot as long as there were legitimate claims to pay.

Elizabeth Cabraser, a member of the group of lawyers representing damage claimants, described the deal, which runs to more than a thousand pages including exhibits, as “the most detailed, highly defined settlement agreement that I’ve ever seen.”

“It was a model,” she said. “Up until the day it wasn’t.” (...)

Lawyers all along the Gulf Coast tell the same story: of taking a casual look at the agreement, reading it with increasing disbelief and then immediately encouraging their partners to read it, too. Accountants were hired, chambers of commerce were contacted, and clients — doctors, nonprofit organizations, just about any type of enterprise — were urged to gather financial documents. Law firms, also eligible for claims under the deal, began to look at their own account books.

One lawyer in Tampa, Fla., sent out a mass mailing, later highlighted in court filings by BP, saying that “the craziest thing about the settlement is that you can be compensated for losses that are UNRELATED to the spill.”

by Campbell Robertson and John Schwartz, NY Times |  Read more:
Image: U.S. Coast Guard, via Reuters

Damon Albarn

Saturday, April 26, 2014

The World of Digital Perfume Finders

[ed. It's like another universe.]

Let me begin with a disclaimer: I am not your average fragrance consumer. I have been a beauty editor for 10 years, which has afforded me unprecedented access to hundreds of perfumes, often before they come to market. My fragrance affections being fleeting, however, I still find myself in search of that elusive "signature scent": an early love affair with a cypress-tinged men's scent from L'Occitane preceded a brief fling with Costume National's Scent Sheer, which I recently followed with a multiparty tryst starring Acqua di Parma's Colonia, Byredo's Seven Veils and Frédéric Malle's Geranium Pour Monsieur.

Last March, I'd grown tired of them all. Wandering the streets of Paris while in the city covering fashion shows, I walked into Nose, a newly opened perfume shop in the Second Arrondissement. Intrigued, I sat down at the perfume bar, and a bearded, bow-tied gentleman in jeans assisted me with one of the iPads ranged along the counter. The tablet—and the denim—were signs that a department-store fragrance-buying experience, this wasn't.

That said, it's increasingly less rare to find a digital element in the perfume world. The fragrance-technology industry has been growing steadily, with iPhone apps like reference tool "The Ultimate Perfume Encyclopedia" and personality-driven scent-seeker "Perfumance." Last year, the Japanese company ChatPerf Inc. launched "Scentee," a small atomizer that plugs into a smartphone's headphone jack and mists a preloaded scent to notify you of an email or as an alarm.

At Nose, my affable adviser turned out to be one of the store's founders, Nicolas Cloutier, a former international I.T.-management consultant turned perfume purveyor. "We are geeks who are good at art direction," Mr. Cloutier later said of himself and a few of the seven partners with whom he launched the shop.

Using the iPad's touch screen, I entered the names of my favorite perfumes, allowing the system's algorithm to create a personal "olfactive pyramid," which then produced five personalized recommendations. Key to the Nose experience was a blind sniff test of each scent, intended to de-emphasize packaging, a strategy in line with the store's ethos: to help consumers find fragrances "without the marketing bulls—t," Mr. Cloutier said.
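The article doesn't explain how the olfactive pyramid actually maps a list of favorites to five recommendations. Purely as a sketch of one plausible, content-based approach, the toy recommender below scores candidate perfumes by how many notes they share with the fragrances a user names; the note lists and scoring are invented for illustration and are not Nose's method.

```python
# Toy content-based recommender, for illustration only: rank candidate
# perfumes by overlap between their notes and the notes of the user's
# stated favorites. The catalog data here is invented.
from collections import Counter

CATALOG = {
    "Colonia": {"citrus", "lavender", "rosemary"},
    "Geranium Pour Monsieur": {"geranium", "mint", "clove"},
    "Fils de Dieu": {"ginger", "shiso", "leather", "citrus"},
    "Seven Veils": {"vanilla", "carrot", "pimento"},
}

def recommend(favorites, catalog=CATALOG, top_n=5):
    """Build a note profile from the favorites, then rank everything else
    in the catalog by how many profile notes it shares."""
    profile = Counter()
    for name in favorites:
        profile.update(catalog.get(name, set()))
    scored = sorted(
        ((sum(profile[n] for n in notes), name)
         for name, notes in catalog.items() if name not in favorites),
        reverse=True,
    )
    return [name for score, name in scored[:top_n] if score > 0]

print(recommend(["Colonia", "Geranium Pour Monsieur"]))  # ['Fils de Dieu']
```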

And it worked. I walked out with Etat Libre d'Orange's Fils de Dieu du Riz et des Agrumes, a spicy Oriental with hints of ginger, shiso and leather that snapped me right out of my fragrance rut. Nose's algorithm—currently available online, and soon to debut at a second brick-and-mortar branch in the U.S.—made me ponder the idea of online-dating my way to true perfume love.

by Celia Ellenberg, WSJ |  Read more:
Image: Pinrose.com

Real Rothko?


One man’s nearly three-decade quest to authenticate a potential Mark Rothko painting purchased at auction for $319.50 plus tax has turned up convincing evidence in the work’s favor, but the experts seem unlikely to issue a ruling, reports the Wall Street Journal.

The painting, which features Rothko’s iconic stacked rectangular blocks of color, bears the artist’s signature and the name of the California School of Fine Arts (now the San Francisco Art Institute), where Rothko taught in 1949.

A lifelong collector, Douglas Himmelfarb spotted the painting at a South Los Angeles auction preview and immediately did his research. When he made the connection between Rothko and the school, he knew he had to buy the piece. After the sale, Himmelfarb traced the canvas to the collection of Mollye Teitelbaum, who had owned it since at least 1964. From there, Himmelfarb began the arduous process of finding proof that his auction find was the real deal.

In the years since, Himmelfarb has found little support from the art establishment, while his personal fortunes have taken a dramatic downturn: the government recently foreclosed on his Hawaii home. If his Rothko is real, it could be his salvation, but it seems that no one is willing to issue an opinion.

The authentications business, as another WSJ article recently highlighted, has become particularly fraught, as estate-run committees for artists such as Keith Haring and Andy Warhol have shut down rather than face the threat of litigation as a result of issuing an unpopular or erroneous opinion.

Rothko expert David Anfam, who published the artist’s catalogue raisonné in 1998, has been familiar with Himmelfarb’s painting since the late 1980s. The scholar even discovered a black-and-white photograph of the work in the archives held by Rothko’s family, but still declined to include the work in his book.

by Sarah Cascone, Artnet |  Read more:
Image: Mark Rothko?

Judgmental Maps


San Francisco by Dan Steiner via: judgmentalmaps.com
[ed. Now this is a map I could use! Hobo blowjobs, baseball-hatted fitness overachievers, commie gymnast figure skaters ... check out Las Vegas and New York.]
h/t Boing Boing

Ten Best Sentences

[ed. I can think of a number of other authors whose works might have been included (e.g. Raymond Carver, Reinaldo Arenas, Annie Dillard, Kazuo Ishiguro, Gabriel Garcia Marquez, Virginia Woolf, the list goes on and on), but I guess that's what makes this fun, sort of like a Rolling Stone 10 Greatest Guitar Solos exercise. American Scholar article here.]

The editors of American Scholar have chosen “Ten Best Sentences” from literature, and readers have suggested many more. They threw in an eleventh for good measure. This lovely feature caught me in the middle of a new book project, “Art of X-ray Reading,” in which I take classic passages such as these and look beneath the surface of the text. If I can see the machinery working down there, I can reveal it to writers, who can then add to their toolboxes.

With respect and gratitude to American Scholar, I offer brief interpretations below on how and why these sentences work:

Its vanished trees, the trees that had made way for Gatsby’s house, had once pandered in whispers to the last and greatest of all human dreams; for a transitory enchanted moment man must have held his breath in the presence of this continent, compelled into an aesthetic contemplation he neither understood nor desired, face to face for the last time in history with something commensurate to his capacity for wonder.
—F. Scott Fitzgerald, “The Great Gatsby”


This sentence is near the end of the novel, a buildup to its more famous conclusion. It begins with something we can “see,” vanished trees. There is a quick tension between the natural order and the artificial one, a kind of exploitation of the land that is as much part of our cultural heritage as the Myth of the West and Manifest Destiny. “Vanished” is a great word. “The Great Gatsby” sounds like the name of a magician, and he at times vanishes from sight, especially after the narrator sees him for the first time gazing out at Daisy’s dock. What amazes me about this sentence is how abstract it is. Long sentences don’t usually hold together under the weight of abstractions, but this one sets a clear path to the most important phrase, planted firmly at the end, “his capacity for wonder.”

I go to encounter for the millionth time the reality of experience and to forge in the smithy of my soul the uncreated conscience of my race.
—James Joyce, “A Portrait of the Artist as a Young Man”


This sentence also comes near the end of the novel, but is not the very end. It has the feel of an anthem, a secular credo, coming from Stephen Dedalus, who, in imitation of Joyce himself, feels the need to leave Ireland to find his true soul. The poet is a maker, of course, like a blacksmith, and the mythological character Dedalus is a craftsman who built the labyrinth and constructed a set of wings for his son Icarus. The wax in those wings melted when Icarus flew too close to the sun. He plunged into the sea to his death. This is where the magic of a single word comes into play: “forge.” For the narrator it means to strengthen metal in fire. But it also means to fake, to counterfeit, perhaps a gentle tug at Stephen’s hubris.

This private estate was far enough away from the explosion so that its bamboos, pines, laurel, and maples were still alive, and the green place invited refugees—partly because they believed that if the Americans came back, they would bomb only buildings; partly because the foliage seemed a center of coolness and life, and the estate’s exquisitely precise rock gardens, with their quiet pools and arching bridges, were very Japanese, normal, secure; and also partly (according to some who were there) because of an irresistible, atavistic urge to hide under leaves.
—John Hersey, “Hiroshima”


Great writers fear not the long sentence, and here is proof. If a short sentence speaks a gospel truth, then a long one takes us on a kind of journey. This is best done when subject and verb come at the beginning, as in this example, with the subordinate elements branching to the right. There is room here for an inventory of Japanese cultural preferences, but the real target is that final phrase, an “atavistic urge to hide under leaves,” even in the shadow of the most destructive technology ever created, the atomic bomb.

by Roy Peter Clark, Poynter |  Read more:
Image: Berenice Abbott via:

Capital in the Twenty-First Century

[ed. See also: The Piketty Panic and The Piketty Phenomenon.]

Thomas Piketty just tossed an intellectual hand grenade into the debate over the world’s struggling economy. Before the English translation of the French economist’s new book, Capital in the Twenty-first Century, hit bookstores, it was applauded, attacked and declared a must-read by pundits, left, right and center. For good reason: it challenges the fundamental assumption of American and European politics that economic growth will continue to deflect popular anger over the unequal distribution of income and wealth.

“Abundance”, observed the late sociologist Daniel Bell, was “the American surrogate for socialism.” As the economic pie expanded, everyone’s slice grew bigger.

The three-decade long boom that followed World War II seemed to prove Bell’s point, tossing Karl Marx’s forecast of capitalism’s collapse into the dustbin of history.

Marx predicted that as markets expand, profits from technological innovation would gradually dry up, depressions would get more severe and capitalists would drive labor’s share of income in the advanced industrial economies so low that revolution was inevitable.

But twentieth-century capitalism proved more resilient than Marx thought. New technologies continued to generate more profits and jobs. Keynesian fiscal and monetary policies prevented cyclical business downturns from triggering depressions. And the investor class, threatened by the specter of communism, agreed, grudgingly, to the New Deal model of strong unions, social insurance and other policies that forced them to share the profits from rising productivity with their workers.

In the United States, the portion of income going to the richest dropped from over 45 percent in the 1920s to under 35 percent by the 1970s. Between 1959 and 1973 the percentage of Americans living in poverty was cut in half. Other industrial countries followed the same pattern.

Ultimately, it was the communist system that collapsed, unable to match capitalism’s performance in providing the proletariat with a house, a car and the other totems of a middle-class life.

The idea that capitalism naturally led to greater equality was codified in a 1955 landmark study by the American economist Simon Kuznets, whose data showed that after an initial period of rising inequality (e.g., our nineteenth-century gilded age) the wealth generated by market economies is distributed between labor and capital more evenly. When workers’ productivity rose, so did their wages. The “Kuznets Curve” quickly became conventional wisdom for both mainstream economists and the politicians they advised. As the nautical John F. Kennedy put it: “A rising tide lifts all boats.”

The central question for Western economists then became how to keep the tide of growth rising. Liberals favored more active government interventions, conservatives more incentives for private investors. Income and wealth distribution—the issue that had preoccupied economists since Adam Smith—was narrowed to studies of the characteristics of the poor (their race, their gender, their sex life, etc.) that prevented them from rising with the tide. Almost no one studied the rich.

Then, in the late 1970s, the trend toward equality reversed. Workers’ output-per-hour continued to rise, but their wages and benefits flattened. Almost all of the gains from the increased productivity of the last three and a half decades went to corporate investors and their top managers. The poverty rate rose by a third. And the pain spread steadily up the socioeconomic ladder. (...)

Over the longer term, the prospects can be downright grim. The venerable Robert Gordon, an economist known for careful analysis, thinks that the innovation that has driven growth for over a century might well slow from its average of 2 percent per year since 1891 to 0.2 percent for the foreseeable future. Add tightening environmental costs and constraints and the good ship Abundance sinks to the sea floor.

The pessimists of course could be wrong. It’s certainly possible, if not plausible, that some unpredicted burst of entrepreneurial energy or a simultaneous reconversion to Keynesianism could propel growth faster than even Obama’s optimistic economists forecast. Couldn’t that be enough to float us back to Kuznets’s curve of rising equality?

Enter Thomas Piketty, whose impressively researched analysis (600 pages plus a detailed 165-page online technical appendix) concludes that Simon Kuznets was wrong. Not only does capitalist growth not reduce inequality; it increases it.

Using data and computer power unavailable to Kuznets, Piketty pored through 200–300 years of the economic history of the largest capitalist economies—principally the United States, Britain, France, Canada, Germany, Sweden and Japan. The numbers show that since roughly 1700, with one exceptional period, the returns to capital (profits and interest) have exceeded the rate of overall economic growth. Since the rich own most of the re-investable capital, their wealth accumulates faster than the wealth of the vast majority of people whose income depends on wages and salaries.
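The arithmetic behind that claim is simple compounding. As a minimal sketch (the 5 percent return and 1.5 percent growth rate below are illustrative assumptions, not figures from Piketty's tables, and it pretends every return is reinvested), wealth earning r while the economy grows at g swells relative to annual output whenever r exceeds g:

```python
# Toy illustration of the "r > g" dynamic: reinvested capital compounding
# at r outgrows an economy expanding at g. The rates and the starting
# wealth-to-output ratio are illustrative assumptions only.

def wealth_to_output_ratios(years, r=0.05, g=0.015, wealth=3.0, output=1.0):
    """Start with wealth worth three years of output; compound wealth at r
    and output at g, returning the ratio after each year."""
    ratios = []
    for _ in range(years):
        wealth *= 1 + r
        output *= 1 + g
        ratios.append(wealth / output)
    return ratios

ratios = wealth_to_output_ratios(100)
# In this toy run the ratio climbs from roughly 3.1 after one year to about
# 16 after fifty years and near 90 after a century, an extreme path because
# every return is reinvested and nothing offsets the compounding.
print(round(ratios[0], 1), round(ratios[49], 1), round(ratios[-1], 1))
```

Real economies never reinvest every return, and r itself can fall, but the direction of the effect, capital pulling ahead of wages whenever r stays above g, is the heart of the argument.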

The exceptions to the historical trend were the years 1914–75 in Europe and 1929–75 in the United States, in which inequality shrank in almost all western nations. According to Piketty this era was unique: the consequences of two world wars, the Great Depression and the social democratic character of the postwar recovery in Europe, Japan and North America. Once those forces were spent, capitalism returned to its normal function as a machine for producing “inequalities that radically undermine the meritocratic values on which democratic societies are based.”

Moreover—and this is a key point—contrary to what we’re taught in Economics 101, markets appear to have no self-correcting mechanism that can halt the worsening misdistribution of wealth. If allowed to go unchecked, a tiny number of capitalists will own just about everything, with social consequences that Piketty sees as “potentially terrifying.”

by Jeff Faux, The Nation |  Read more:
Image: Reuters/Charles Platiau