Wednesday, October 11, 2017

Monopoly Men


Given our reality, it would be helpful to think of Amazon, Google, Facebook, and Twitter as the new “utilities” of the modern era. Today the idea of “public utility” conjures images of rate regulation and electric utility bureaucracies. But for Progressive Era reformers, public utility was a broad concept that, at its heart, was about creating regulations to ensure adequate checks and balances on private actors who had come to control the basic necessities of life, from telecommunications to transit to water. This historical tradition helps us identify what kinds of private power are especially troubling. The problem, ultimately, is not just raw “bigness,” or market capitalization. Rather, the central concern is about private control over infrastructure.

by K. Sabeel Rahman, Boston Review |  Monopoly Men
[ed. See more below, and: "Why is it so hard to ditch Apple, Amazon, Google and Facebook?"]

Know Thy Futurist

Have you heard? Someday we will live in a perfect society ruled by an omnipotent artificial intelligence, provably and utterly beneficial to mankind.

That is, if we don’t all die once the machines gain consciousness, take over, and kill us.

Wait, actually, they are going to take some of us with them, and we will transcend to another plane of existence. Or at least clones of us will. Or at least clones of us that are not being perpetually tortured for our current sins.

These are all outcomes that futurists of various stripes currently believe. A futurist is a person who spends a serious amount of time—either paid or unpaid—forming theories about society’s future. And although it can be fun to mock them for their silly-sounding and overtly religious predictions, we should take futurists seriously. Because at the heart of the futurism movement lie money, influence, political power, and access to the algorithms that increasingly rule our private, political, and professional lives.

Google, IBM, Ford, and the Department of Defense all employ futurists. And I am myself a futurist. But I have noticed deep divisions and disagreements within the field, which has led me, below, to chart the four basic “types” of futurists. My hope is that by better understanding the motivations and backgrounds of the people involved—however unscientifically—we can better prepare ourselves for the upcoming political struggle over whose narrative of the future we should fight for: tech oligarchs who want to own flying cars and live forever, or gig economy workers who want to someday have affordable health care.

With that in mind, let me introduce two dimensions of futurism, represented by axes. That is to say, two ways to measure and plot futurists on a graph, which we can then examine more closely.

The first measurement of a futurist is the extent to which he or she believes in a singularity. Broadly speaking, a singularity is a moment when technology improves at such an exponentially increasing rate that it achieves a fundamental and meaningful shift of existence, transcending its original purpose and even its nature. In many singularity myths the computer becomes self-aware and intelligent, possibly in a good way but sometimes in a destructive or even vindictive way. In others, humans are connected to machines and together become something new. The larger point is that some futurists believe fervently in a singularity, while others do not.

On our second axis, let’s measure the extent to which a given futurist is worried when they theorize about the future. Are they excited or scared? Cautious or jubilant? The choices futurists make are often driven by their emotions. Utopianists generally focus on all the good that technology can do; they find hope in cool gadgets and the newest AI helpers. Dystopianists are by definition focused on the harm; they consequently think about different aspects of technology altogether. The kinds of technologies these two groups consider are nearly disjoint, and even where they do intersect, the futurists’ takes are diametrically opposed.

So, now that we have our two axes, we can build quadrants and consider the group of futurists in each one. Their differences shed light on what their values are, who their audiences are, and what product they are peddling.

Q1.

First up: the people who believe in the singularity and are not worried about it. They welcome it with open arms in the name of progress. Examples of people in this quadrant are Ray Kurzweil, the inventor and author of The Age of Spiritual Machines (1999); the libertarians in the Seasteaders movement who want to create autonomous floating cities outside of any government jurisdiction; and the people who are trying to augment intelligence and live forever.

These futurists enthusiastically believe in Moore’s Law—the observation by Gordon Moore, a co-founder of Intel, that the number of transistors in a circuit doubles approximately every two years—and in exponential growth of everything in sight. Singularity University, co-founded by Kurzweil, has no fewer than twelve mentions of the word “exponential” on its website. Its motto is “Be Exponential.”
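Moore’s observation is easy to put in numbers. Here is a minimal sketch; the Intel 4004 transistor count is a well-known data point, not a figure from the article:

```python
# Moore's Law as stated above: transistor counts double roughly every two years.
def projected_transistors(start_count, years, doubling_period=2.0):
    """Project a transistor count forward under a fixed doubling period."""
    return start_count * 2 ** (years / doubling_period)

# Intel's 4004 (1971) had about 2,300 transistors. Fifty years of doubling
# every two years projects a chip in the tens of billions of transistors,
# which is roughly where flagship processors landed by the early 2020s.
print(f"{projected_transistors(2300, 50):,.0f}")
```

The point of the exercise is the shape of the curve, not the exact figure: 25 doublings turn thousands into tens of billions, which is why “exponential” does so much rhetorical work for this crowd.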

Generally speaking, these futurists are hobbyists—they have the time for these theories because, in terms of wealth, they are already in the top 0.1 percent. They think of the future in large part as a way to invest their money and become even wealthier. They once worked at or still own Silicon Valley companies, venture capital firms, or hedge funds, and they learned to think of themselves as deeply clever—possibly even wise. They wax eloquent about meritocracy over expensive wine or their drug of choice (micro-dosing, anyone?).

With enormous riches and very few worldly concerns, these futurists focus their endeavors on the only things that could actually threaten them: death and disease.

They talk publicly about augmenting intelligence through robotic assistance or better quality of life through medical breakthroughs, but privately they are interested in technical fixes to physical problems and are impatient with the medical establishment for being too cautious and insufficiently innovative. They invest heavily in cryogenics, dubious mind-computer interface technology, medical strategies for living forever (here’s looking at you, Sergey Brin and Larry Page), and possibly even the blood of young people.

These futurists are ready and willing to install hardware in their brains because, as they are mostly young or middle-aged white men, they have never been oppressed. For them the worst-case scenario is that they live their future lives as uploaded software in the cloud, a place where they can control the excellent virtual reality graphics. (If this sounds like a science fiction fantasy for sex-starved teenagers, don’t be surprised. They got most of these ideas—as sex-starved teenagers—from writers such as Robert Heinlein and Ayn Rand.)

The problem here, of course, is the “I win” blind spot—the belief that if this system works for me, then it must be a good system. These futurists think that racism, sexism, classism, and politics are problems to be solved by technology. If they had their way, they would be asked to program the next government. They would keep it proprietary, of course, to keep the hoi polloi from gaming the system.

And herein lies the problem: whether it is the nature of existence in the super-rich bubble, or something distinctly modern and computer-oriented, futurism of this flavor is inherently elitist, genius-obsessed, and dismissive of larger society.

Q2.

Next: people who believe in a singularity but are worried about the future. They do not see the singularity as a necessarily positive force. These are the men—mostly men, though with more women than in the previous group—who read dystopian science fiction in their youth and think about all the things that could go wrong once the machines become self-aware, which has a small (but positive!) probability of happening. They spend time trying to estimate that probability.

A community center for these folks is the website lesswrong.com, which was created by Eliezer Yudkowsky, an artificial intelligence researcher. Yudkowsky thinks people should use rationality and avoid biases in order to lead better lives. It was a good idea, as far as practical philosophies go, but eventually he and his followers got caught up in increasingly abstract probability calculations using Bayes’ Theorem and bizarre thought experiments.

My favorite is called Roko’s basilisk, the thought experiment in which a future superintelligent and powerful AI tortures anyone who imagined its existence but didn’t go to the trouble of creating it. In other words it is a vindictive hypothetical being that puts you in danger as soon as you hear the thought experiment. Roko’s basilisk was seen by its inventor, Roko, as an incentive to donate to the cause of Friendly AI to “thereby increase the chances of a positive singularity.” But discussion of it soon so dominated Yudkowsky’s site that he banned it—a move that, not surprisingly, created more interest in the discussion.

A different but related movement in the world of AI futures comes from the Effective Altruism movement, which has been advocated for in this journal by philosopher Peter Singer. Like Yudkowsky, Effective Altruists started out well. Their basic argument was that we should care about human suffering outside our borders, not just in our close proximity, and that we should take personal responsibility for optimizing our money to improve the world.

You can go pretty far with that reasoning—and to their credit, Effective Altruists have made enormous international charitable contributions—but obsessing over the concept of effectiveness is limited by the fact that suffering, like community good, is hard to quantify.

Instead of acknowledging the limits of hard numbers, however, the group has more recently spun off into a parody of itself. Some factions believe that instead of worrying about current suffering, they should worry about “existential risks,” unlikely futuristic events that are characterized by computations besieged by powers of ten and could thus cause enormous suffering. A good example comes from Nick Bostrom’s Future of Humanity Institute website: ". . . we find that the expected value of reducing existential risk by a mere one billionth of one billionth of one percentage point is worth a hundred billion times as much as a billion human lives."
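The scale of those powers of ten is easier to see if you unpack the arithmetic in Bostrom’s sentence. The implied size of the future population below is my back-calculation from the quote, not a figure stated in the text:

```python
# "One billionth of one billionth of one percentage point" of existential risk:
risk_reduction = 1e-9 * 1e-9 * 0.01      # = 1e-20

# "...worth a hundred billion times as much as a billion human lives":
value_in_lives = 1e11 * 1e9              # = 1e20 lives

# For the expected value to work out (value = risk_reduction * future_lives),
# the argument implicitly assumes a future population on this order:
implied_future_lives = value_in_lives / risk_reduction
print(f"{implied_future_lives:.0e}")     # on the order of 1e40
```

That is, the claim only goes through if you grant something like ten thousand trillion trillion trillion future lives—exactly the kind of assumption that makes the “besieged by powers of ten” style of reasoning hard to argue with on its own terms.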

As a group these futurists are fundamentally sympathetic figures but woefully simplistic regarding current human problems. If they are not worried about existential risk, they are worried about the suffering of plankton, or perhaps every particle in the universe.

I will shove Elon Musk into this Q2 group, even though he is not a perfect fit. Being an enormously rich and powerful entrepreneur, he probably belongs in the first group, but he sometimes shows up at Effective Altruism events, and he has made noise recently about the computers getting mean and launching us into World War III. The cynics among us might suspect this is mostly a ploy to sell his services as a mediator between the superintelligent AI and humans when the time inevitably comes. After all, Musk always has something to sell, including a ticket to Mars, Earth’s backup planet.

by Cathy O'Neil, Boston Review |  Read more:
Image: Maurizio Pesce
[ed. From the Boston Review's series: Global Dystopias. See also: Monopoly Men, and Schlesinger and the Decline of Liberalism.]

Tuesday, October 10, 2017

Nobel Prize awarded to Richard Thaler

This is a prize that is easy to understand. It is a prize for behavioral economics, for the ongoing importance of psychology in economic decision-making, and for “Nudge,” his famous bestselling book co-authored with Cass Sunstein.

Here are previous MR posts on Thaler, we’ve already covered a great deal of his research. Here is Thaler on Twitter. Here is Thaler on scholar.google.com. Here is the Nobel press release, with a variety of excellent accompanying essays and materials. Here is Cass Sunstein’s overview of Thaler’s work.

Perhaps unknown to many, Thaler’s most heavily cited piece is on whether the stock market overreacts. He says yes, this is possible for psychological reasons, and this article also uncovered some of the key evidence in favor of the now-vanquished “January effect” in stock returns, namely that for a while the market did very, very well in the month of January. (Once discovered, the effect went away.) Another excellent Thaler piece on finance is this one with Shleifer and Lee, on why closed-end mutual funds sell at divergences from their true asset values. This too likely has something to do with market psychology and sentiment, as the same “asset package,” in two separate and non-arbitrageable markets, can sell for quite different prices, sometimes premia but usually discounts. This was one early and relatively influential critique of the efficient markets hypothesis.

Another classic early Thaler piece is on a phenomenon known as “mental accounting”: for instance, you might treat a dollar in your pocket as different from a dollar in your bank account. Or earned money may be treated differently from money you just chanced upon, or won that morning in the stock market. This has significant implications for predicting consumer decisions concerning saving and spending; in particular, economists cannot simply measure income but must consider where the money came from and how it is perceived by consumers, namely how they are performing their mental accounting of the funds. Have you ever gone on a vacation with a notion that you would spend so much money, and then treated all expenditures within that range as essentially already decided? The initial piece on this topic was published in a marketing journal and it has funny terminology, a sign of how far from the mainstream this work once was. It is nonetheless a brilliant piece. Here is more Thaler on mental accounting.

Thaler, with Kahneman and Knetsch, was a major force behind discovering and measuring the so-called “endowment effect.” Once you have something, you value it much more! Maybe three or four times as much, possibly more than that. It makes policy evaluation difficult, because as economists we are not sure how much to privilege the status quo. Should we measure “willingness to pay” — what people are willing to pay for what they don’t already have? Or “willingness to be paid” — namely how eager people are to give up what they already possess? The latter magnitude will lead to much higher valuations for the assets in question. This by the way helps explain status quo bias in politics and other spheres of life. People value something much more highly once they view it as theirs.
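A toy calculation shows why the choice of measure matters so much for policy evaluation. The numbers below are purely illustrative, not taken from any study; only the rough 3-4x gap between the two measures echoes the text above:

```python
# Illustrative only: valuing 1,000 units of some good under the two measures.
willingness_to_pay = 25.0                # what non-owners would pay per unit
endowment_multiplier = 3.5               # owners demand roughly 3-4x to part with it
willingness_to_accept = willingness_to_pay * endowment_multiplier

units = 1000
print(units * willingness_to_pay)        # valuation if nobody owns them yet
print(units * willingness_to_accept)     # valuation once people regard them as theirs
```

Same good, same people, a more-than-threefold difference in measured value depending on who is assumed to hold it already—which is exactly the status quo privilege the paragraph above describes.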

This phenomenon also makes the Coase theorem tricky because the final allocation of resources may depend quite significantly on how the initial property rights are assigned, even when the initial wealth effect from such an allocation may appear to be quite small. See this Thaler piece with Knetsch. It’s not just that you assign property rights and let people trade, but rather how you assign the rights up front will create an endowment effect and thus significantly influence the final bargain that is struck.

With Jolls and Sunstein, here is Thaler on a behavioral approach to law and economics, a long survey but also constructive piece that became a major trend and has shaped law and economics for decades. He has done plenty and had a truly far-ranging impact, not just in one or two narrow fields.

Thaler’s “Nudge” idea, developed in conjunction with Cass Sunstein over the course of a major book and some articles, has led policymakers all over the world to focus on “choice architecture” in designing better systems, the UK even setting up a “Nudge Unit.” For instance, one way to encourage savings is to set up pension systems for employees so that the maximum contribution is the default, rather than an active choice people must make. This is sometimes referred to as a form of “soft” or “libertarian paternalism,” since choice is still present. Here is Thaler responding to some libertarian critiques of the nudge idea.

by Tyler Cowen, Marginal Revolution | Read more:
[ed. Lots of good links, even how economics affect the NFL draft. See also: The Making of Richard Thaler's Economics Nobel.]

John Severson, Surfer Magazine
via:

16 Ways QR Codes are Being Used in China

We’ve talked a lot about the rise of QR codes in Asia, but they may now finally be moving from being a “joke” to being more widely adopted in other places as well. Simply put, QR codes let you hyperlink and bookmark the physical world. Just as UPC barcodes allow machine-readable scanning of data (e.g., price) on items in stores, QR codes are a barcode-like vector between online and offline information. And unlike NFC (near-field communication), which is used for reading smart cards, keycards, and contactless payments, QR codes can be easily accessed by any phone in the world that has a camera. They enable everything from online to offline (O2O) marketplaces, which are huge in China, to augmented reality.

Some of the more obvious use cases for them include things like adding a WeChat friend in real life (IRL); subscribing to a WeChat official account (often representing media, stores, people, and others); paying a street vendor or at a convenience store; connecting to wi-fi in a shop; getting additional content from a magazine article; and learning more about styling or the brand from a clothing label. But there are also a number of less-obvious (or not as well covered) uses in China, which I share below, because they show the range of what’s possible everywhere when QR codes disintermediate existing use cases… and enable new ones.

Things people already do, but now with QR codes

#1 Give and collect gifts at a wedding
On wedding invitations, “no boxed gifts please” is basically code for “just give me cash”. And in many Asian cultures, cash is more standard and socially acceptable anyway compared to other gifts for auspicious occasions like marriage, births, etc. In China, these gifts come in the form of red envelopes — which were also a growth hack for increasing adoption of payments in messaging.

But here, a member of the bridal party wears a QR code as a necklace to collect digital money from wedding guests who forgot to bring physical red envelopes… though this use case had a mixed reception.


#2 Give and collect alms
Bluntly, begging has gone digital in China thanks to the penetration of mobile wallets there. And no one can really claim that they don’t have spare change when they are almost always likely to have their cell phones on them.

In this case, the panhandlers collect physical change from kind strangers, but also (quite brilliantly) provide mobile payment QR codes as another payment option.


#3 Collect tithes

The penetration of QR codes is so deep in China that it includes other forms of social commerce besides gifting or begging. Even churches collect tithes through QR codes. Why only offer wooden collection boxes when you can offer a QR code, as with this temple in Hangzhou? (...)


Things that are now possible (or way easier to do now) because of QR codes

#7 Checking the source and authenticity of food and drinks

QR codes can already be found in restaurants in China for things like paying for or ordering a meal at a restaurant. But even food distributors are taking advantage of QR codes; supermarkets, for instance, use them in produce stands so that customers can learn about the supply chain behind a specific batch of fruits or vegetables: Which farm did it come from? How was it transported? Meanwhile, wine-makers use QR codes as a way of proving the authenticity of the bottle — source, vintage — as well as educating consumers (type of grape, suggested pairings).

by Connie Chan, Andreessen Horowitz |  Read more:
Images: uncredited

St. Vincent


Pills to wake, pills to sleep
Pills, pills, pills every day of the week
Pills to walk, pills to think
Pills, pills, pills for the family
Pills to grow, pills to shrink
Pills, pills, pills and a good stiff drink
Pills to fuck, pills to eat
Pills, pills, pills down the kitchen sink

The Country Sausage That's Going to Town

It has come to my attention that within the dog-eat-dog underworld of the culinary industry, there is a clandestine movement afoot to discredit my food writing. The chief criticisms are that I don’t actually cook and my essays aren’t about food. This has gotten on my last nerve! I put forth that any creature in possession of an alimentary canal knows plenty about food. The basics are simple: If you don’t eat, you will die, and bacon tastes better than rice cakes. Nutrition is for the food writer as ornithology is for the birds!

To reassure the skeptics, my bona fides are as follows. For more than a decade I worked in seventeen different restaurants, cafés, and bars. My career began in Morehead, Kentucky, at a Burger Queen and ended at Doyle’s Cafe in Boston, Massachusetts. I worked as a dishwasher, busboy, prep cook, steward, breakfast cook, soda pop pourer, sandwich maker, barista, and waiter. I got fired often, although never for reasons related to job performance. The most common reason was a mysterious word—“insubordination”—essentially a pretext used by power-mad bosses to shed themselves of people they didn’t like. Or in my case, a person who resisted the lure of kissing the boss’s b-hole. Of all the professional sadness in the world, the most poignant is that of assistant managers at a restaurant. Their priority is scheduling shifts for a waitstaff that makes more money than managers. Their only recourse is firing people.

A deeply personal matter led to my decade in the restaurant business and subsequent “career” as a food writer. At age fifteen I met a girl. Nothing is as powerful as the extraordinary jolt of a teenager’s first love. It’s like seeing the world after a double-cataract surgery. Life is suddenly exquisite. Each leaf becomes the bearer of unbearable beauty. Romeo and Juliet were so deliriously happy that they embraced murder and suicide as an ideal solution. I didn’t go that far, but I fell deeply and totally in love with Kim. She was smart, pretty, and laughed at my jokes. I spent all my waking hours trying to talk to her, eventually moving up to walking around holding hands. Our six-week romance was the best of my life—virtuous, finite, and gloriously unprecedented. I never again knew such an all-encompassing joy.

We met doing summer stock theatre as part of a college recruitment program for promising high school students. This occurred before the advent of portable music devices, which meant we listened to the radio. Every lyric seemed to be about us, directed exclusively at the impermeable dome in which we lived. Our favorite song was “The Joker” by the Steve Miller Band, in which he spoke of the pompitous of love. Neither Kim nor I knew exactly what the phrase meant, but we felt it described what we had, a kind of purity and truth. We were pompitous. Our love embodied pompitousness. We sought pompity.

In mid-August Kim returned home to western Kentucky, two hours away. We wrote each other daily, a pace that dwindled in frequency to weekly, then monthly. We called each other a few times, long spells with each of us clutching the receiver silently, content to know the other person was on the line. She visited me once, driving with a couple of her friends who clearly evaluated me as deficient: too short, too poor, non-athletic. Plus I was from the hills and looked it. I didn’t even own a car to reciprocate her visit. We stopped writing. I never saw Kim again, but I never forgot her.

Her grandfather was Fred Purnell, a former railroad man who founded Purnell’s “Old Folks” Sausage company in Simpsonville, Kentucky. According to family lore, Fred loved listening to the elderly talk, a trait that earned him the phenomenal nickname of “Old Folks.” I deeply envy his sobriquet. As a child I also enjoyed hearing tales of the old days. This has evolved into a secret desire that young people would be interested in hearing me talk. As it is, I can’t even get my wife to listen to a word I say. My nickname could readily be “Husby Ignored” or “He Who Talks Too Much.”

Purnell’s Sausage began as a family company and still maintains that status, which is quite unusual in the corporate era of Big Pork. For example, Smithfield Foods began as a family operation in Virginia and was America’s biggest pork company until it was bought by a Chinese corporation for seven billion dollars. (Yes, that’s 7,000,000,000 bucks!) Industrialized pork created a lucrative side business, the never-ending disposal of hog feces. If you can withstand the dreadful smell, it’s a wide-open field for a “manure entrepreneur.”

The packaging of Purnell’s sausage features a drawing of a cheerfully grinning pig’s face. It’s a great design: simple, bold, and memorable. At least as long as you ignore the obvious—what the hell does that pig have to be happy about? His entire family is encased in frozen wrappers for sale! Setting aside the complex emotional life of a hog, what enthralled me most was the slogan: “The Country Sausage that’s Going to Town!”

When I was a child living on a dirt road in the country, there was nothing better than going to the nearest town. Morehead had paved streets, multiple two-story buildings, and sidewalks. One building was rumored to contain an elevator. A ten-cent store sold model cars and Hot Wheels. The corner drugstore had comic books and nickel Cokes. The prospect of going to Morehead on Saturday sustained me throughout the tedious week of attending school. My mother’s greatest punishment was forbidding her kids from accompanying her to town. Merely the threat impelled me to prompt obedience. (The worst period of my life was being banned from Morehead for a month. After reading that Daniel Boone had stained his skin with walnut juice to pass as a Shawnee, I smeared the brown inner oil all over my body. It didn’t wash out.)

The phrase “going to town” has another, more generalized and colloquial definition. It means to carry out something with great enthusiasm, fully committed and doing the best you can—such as eating. That boy is going to town on that sausage! As a food slogan, it connotes the ambition of a country staple that’s heading out, putting the farm behind, eager for the bright lights. Maybe that’s why the pig was so happy. The enthusiasm for departure was a form of ignorance at the true destination—the bloody fate of a slaughterhouse. Like me in later years, that pig would often wish it had stayed home, safe in the hills.

I recently learned that the Purnell motto was discontinued sometime in the mid-1970s. The new slogan, currently in use on all their products, is bland and innocuous: “It’s Gooo-od.” The phrase is better utilized in a verbal fashion, drawing out the syllable for emphasis. Written, it’s harder to comprehend, leading one to mentally pronounce it as “gew-odd.” Still, it sounds less old-fashioned and is faster to say on TV and radio. (I guess they had to go with an elongated form of “good” because Tony the Tiger had already appropriated “They’re Gr-r-reat!” to endorse Frosted Flakes.) Most important, it’s true. Purnell’s sausage is the best sausage in the world.

I began wondering if the new motto was a business decision or a family mandate due to Kim’s interest in me around the same time. Maybe the Purnells didn’t want my country sausage coming to Louisville. It’s gooo-od that we got rid of that bumpkin early! Feeling rejected, I realized I could simply call Kim and ask her what motivated the new motto. I immediately became terrified that she wouldn’t remember me, or that she’d consider me a stalker with a forty-year delay. The best-case scenario would be if we talked for hours, met in person, fell in love, and I left my wife to move to Louisville and eat sausage forever. That would also be the worst-case scenario.

by Chris Offutt, Oxford American | Read more:
Image: "Alice,” by Kevin Horan

Monday, October 9, 2017

Same Time, Another Planet


PRK24 is sitting at the kitchen table when KNT32 glides into the room. PRK24 is so beautiful that one could die from it, thinks KNT32; how, she thinks, as she hides her head in her hands, can one do something so painful, as she must, to something so handsome? Or someone? PRK24 looks at her with amazement, then a polaroid picture slides out of his head, he takes it down with his hands, gives it to her, it shows KNT32 as she is standing now, with her hands around her head. Then he pulls out another picture that is blank, but with this symbol: “?” KNT32 shakes her head. Then she pulls out a picture. It shows KNT32, naked, against this kitchen table, with PRK27 behind her. PRK24 slides slowly back from the chair while staring at the picture, which slowly dissolves before his eyes. KNT32’s heart is hammering. Then she pulls out a picture that shows a little embryo. It is so handsome. It is so little, and the light around it is so red. It is sucking its thumb. It looks as though it is dreaming. Things one cannot know. PRK24 closes his eyes, because it hurts! He is both furious and completely lost. He pulls a picture out of his head: PRK24 and KNT32 eating hotdogs at a hotdog stand. KNT32 has her mouth wide open around a gigantic hotdog with too much onion. PRK24 is laughing. A new picture: PRK24 has won a pink teddybear for KNT32, and KNT32 is hugging it. A new picture: PRK24 and KNT32 are walking hand in hand on a sandy beach, the sun is setting, they are not wearing shoes. KNT32’s heart is about to break. She takes out a picture that shows how her heart is breaking. But PRK24 does not see it. He sits with his eyes closed. He takes out a picture that shows the surface of a body of water. He sits for a little. Then he takes out a new picture: A large bubble of air is about to burst against the surface of the water. KNT32 throws herself toward the picture, trying to dive into it, but too late, it dissolves, she shakes PRK24, but he has disappeared into himself.

by Gunnhild Øyehaug, Paris Review | Read more:
Image: Santos Gonzales, via Flickr

Autopilot Wars

Sixteen Years, But Who’s Counting?

Consider, if you will, these two indisputable facts. First, the United States is today more or less permanently engaged in hostilities in not one faraway place, but at least seven. Second, the vast majority of the American people could not care less.

Nor can it be said that we don’t care because we don’t know. True, government authorities withhold certain aspects of ongoing military operations or release only details that they find convenient. Yet information describing what U.S. forces are doing (and where) is readily available, even if buried in recent months by barrages of presidential tweets. Here, for anyone interested, are press releases issued by United States Central Command for just one recent week:

September 19: Military airstrikes continue against ISIS terrorists in Syria and Iraq

September 20: Military airstrikes continue against ISIS terrorists in Syria and Iraq

Iraqi Security Forces begin Hawijah offensive

September 21: Military airstrikes continue against ISIS terrorists in Syria and Iraq

September 22: Military airstrikes continue against ISIS terrorists in Syria and Iraq

September 23: Military airstrikes continue against ISIS terrorists in Syria and Iraq

Operation Inherent Resolve Casualty

September 25: Military airstrikes continue against ISIS terrorists in Syria and Iraq

September 26: Military airstrikes continue against ISIS terrorists in Syria and Iraq

Ever since the United States launched its war on terror, oceans of military press releases have poured forth. And those are just for starters. To provide updates on the U.S. military’s various ongoing campaigns, generals, admirals, and high-ranking defense officials regularly testify before congressional committees or brief members of the press. From the field, journalists offer updates that fill in at least some of the details -- on civilian casualties, for example -- that government authorities prefer not to disclose. Contributors to newspaper op-ed pages and “experts” booked by network and cable TV news shows, including passels of retired military officers, provide analysis. Trailing behind come books and documentaries that put things in a broader perspective.

But here’s the truth of it. None of it matters.

Like traffic jams or robocalls, war has fallen into the category of things that Americans may not welcome, but have learned to live with. In twenty-first-century America, war is not that big a deal.

While serving as defense secretary in the 1960s, Robert McNamara once mused that the “greatest contribution” of the Vietnam War might have been to make it possible for the United States “to go to war without the necessity of arousing the public ire.” With regard to the conflict once widely referred to as McNamara’s War, his claim proved grotesquely premature. Yet a half-century later, his wish has become reality.

Why do Americans today show so little interest in the wars waged in their name and at least nominally on their behalf? Why, as our wars drag on and on, doesn’t the disparity between effort expended and benefits accrued arouse more than passing curiosity or mild expressions of dismay? Why, in short, don’t we give a [expletive deleted]?

Perhaps just posing such a question propels us instantly into the realm of the unanswerable, like trying to figure out why people idolize Justin Bieber, shoot birds, or watch golf on television.

Without any expectation of actually piercing our collective ennui, let me take a stab at explaining why we don’t give a @#$%&! Here are eight distinctive but mutually reinforcing explanations, offered in a sequence that begins with the blindingly obvious and ends with the more speculative.

Americans don’t attend all that much to ongoing American wars because:

1. U.S. casualty rates are low. By using proxies and contractors, and relying heavily on airpower, America’s war managers have been able to keep a tight lid on the number of U.S. troops being killed and wounded. In all of 2017, for example, a grand total of 11 American soldiers have been lost in Afghanistan -- about equal to the number of shooting deaths in Chicago over the course of a typical week. True, in Afghanistan, Iraq, and other countries where the U.S. is engaged in hostilities, whether directly or indirectly, plenty of people who are not Americans are being killed and maimed. (The estimated number of Iraqi civilians killed this year alone exceeds 12,000.) But those casualties have next to no political salience as far as the United States is concerned. As long as they don’t impede U.S. military operations, they literally don’t count (and generally aren’t counted).

2. The true costs of Washington’s wars go untabulated. In a famous speech, dating from early in his presidency, Dwight D. Eisenhower said that “Every gun that is made, every warship launched, every rocket fired signifies, in the final sense, a theft from those who hunger and are not fed, those who are cold and are not clothed.” Dollars spent on weaponry, Ike insisted, translated directly into schools, hospitals, homes, highways, and power plants that would go unbuilt. “This is not a way of life at all, in any true sense,” he continued. “[I]t is humanity hanging from a cross of iron.” More than six decades later, Americans have long since accommodated themselves to that cross of iron. Many actually see it as a boon, a source of corporate profits, jobs, and, of course, campaign contributions. As such, they avert their eyes from the opportunity costs of our never-ending wars. The dollars expended pursuant to our post-9/11 conflicts will ultimately number in the multi-trillions. Imagine the benefits of investing such sums in upgrading the nation’s aging infrastructure. Yet don’t count on Congressional leaders, other politicians, or just about anyone else to pursue that connection.

3. On matters related to war, American citizens have opted out. Others have made the point so frequently that it’s the equivalent of hearing “Rudolph the Red-Nosed Reindeer” at Christmastime. Even so, it bears repeating: the American people have defined their obligation to “support the troops” in the narrowest imaginable terms, ensuring above all that such support requires absolutely no sacrifice on their part. Members of Congress abet this civic apathy, while also taking steps to insulate themselves from responsibility. In effect, citizens and their elected representatives in Washington agree: supporting the troops means deferring to the commander in chief, without inquiring about whether what he has the troops doing makes the slightest sense. Yes, we set down our beers long enough to applaud those in uniform and boo those who decline to participate in mandatory rituals of patriotism. What we don’t do is demand anything remotely approximating actual accountability. (...)

6. Besides, we’re too busy. Think of this as a corollary to point five. Even if the present-day American political scene included figures like Senators Robert La Follette or J. William Fulbright, who long ago warned against the dangers of militarizing U.S. policy, Americans may not retain a capacity to attend to such critiques. Responding to the demands of the Information Age is not, it turns out, conducive to deep reflection. We live in an era (so we are told) when frantic multitasking has become a sort of duty and when being overscheduled is almost obligatory. Our attention span shrinks and with it our time horizon. The matters we attend to are those that happened just hours or minutes ago. Yet like the great solar eclipse of 2017 -- hugely significant and instantly forgotten -- those matters will, within another few minutes or hours, be superseded by some other development that briefly captures our attention. As a result, a dwindling number of Americans -- those not compulsively checking Facebook pages and Twitter accounts -- have the time or inclination to ponder questions like: When will the Afghanistan War end? Why has it lasted almost 16 years? Why doesn’t the finest fighting force in history actually win? Can’t package an answer in 140 characters or a 30-second made-for-TV sound bite? Well, then, slowpoke, don’t expect anyone to attend to what you have to say.

7. Anyway, the next president will save us. At regular intervals, Americans indulge in the fantasy that, if we just install the right person in the White House, all will be well. Ambitious politicians are quick to exploit this expectation. Presidential candidates struggle to differentiate themselves from their competitors, but all of them promise in one way or another to wipe the slate clean and Make America Great Again. Ignoring the historical record of promises broken or unfulfilled, and presidents who turn out not to be deities but flawed human beings, Americans -- members of the media above all -- pretend to take all this seriously. Campaigns become longer, more expensive, more circus-like, and ever less substantial. One might think that the election of Donald Trump would prompt a downward revision in the exalted expectations of presidents putting things right. Instead, especially in the anti-Trump camp, getting rid of Trump himself (Collusion! Corruption! Obstruction! Impeachment!) has become the overriding imperative, with little attention given to restoring the balance intended by the framers of the Constitution. The irony of Trump perpetuating wars that he once roundly criticized and then handing the conduct of those wars to generals devoid of ideas for ending them almost entirely escapes notice.

by Andrew Bacevich, TomDispatch | Read more:
Image: America’s War for the Greater Middle East

Sunday, October 8, 2017

By What Measure?

On Catalonia and the referendum

How did things in Catalonia end up the way they did? Under Francoism, the Spanish government committed itself to a shameful pattern of cultural and linguistic repression in Spain’s so-denominated “historical” communities—Galicia, the Basque Country, and Catalonia. The peaceful constitution (in both senses of the word) of post-dictatorial Spain depends in large part on the restoration (or concession) of a significant measure of autonomy to those same communities. But the response to Catalonia’s October 1 independence referendum suggests a crisis for the status quo.

From a merely “cosmetic” perspective—to borrow a turn of phrase from our own current President—the footage was horrific: military and federal police forces decked out in riot gear breaking down the doors at polling places, clashing openly with members of the Catalan Mossos d’Esquadra, dragging voters and protesters by their hair and ears, beating them to the ground with their truncheons and then continuing to beat them after they’d fallen, breaking fingers, leaving children and elderly people bleeding and in tears. These events aren’t just embarrassing domestically: for a Spanish government hoping to keep its European allies lined up against the idea of Catalan secession, the violence can’t help but weaken its bargaining position.

But the whole lamentable mess—all the clubs and shattered glass and the approximately 900 injured—wasn’t unforeseeable. In October 2015, following the previous month’s electoral victory of the pro-independence alliance Junts pel Sí, the Catalan parliament set into motion an “hoja de ruta,” or roadmap, to independence, agreed upon by the alliance’s constitutive parties the previous month. In October 2016, in line with that original plan, a parliamentary resolution was passed calling for a binding and binary independence referendum to be convened no later than September 2017, followed by an immediate or virtually immediate declaration of independence in the (expected) case of a Yes victory.

The Spanish central government’s response to these circumstances was, for a long time, denial. Time and again, Spanish president Mariano Rajoy, leader of the ruling Partido Popular (a political party itself constructed from the legislative remnants of Francoism), stood there blinking through his rimless eyeglasses and declared that there would be no referendum.1 The referendum-day violence, then, can be understood as a mere extension of that attitude of denial from something that could happen to something that was, in fact, happening. And as “it won’t happen” began to transform into “it sure appears to be happening,” the Spanish government opted to criminalize the whole affair, sending in the military and federal police to conduct raids on the warehouses where ballots and ballot boxes had been stored. Less than a week before the referendum was scheduled to be held, the police arrested fourteen middle- and high-ranking government officials for their involvement in its organization. Those who turned out to vote, to manage precincts, to collect and count ballots, were treated as dangerous criminals by the military and militarized police.

The Spanish government’s lengthy refusal to engage even rhetorically with the referendum—and its subsequent criminalization of it—were predicated on its unconstitutionality. And, thanks to various court rulings on the subject, its illegality. For months, El País, the newspaper of record of constitutional Spain, has described the Catalan referendum as “el referéndum ilegal” or “el referéndum independentista ilegal,” as though the one concept were literally unthinkable without the other. The problems with this approach are obvious. Under the Spanish constitution, and as confirmed by Spain’s highest court, calling the referendum, organizing the referendum, and voting in the referendum may have been illegal, but these actions were no more criminal than sitting at the front of the bus when the law dictates that you sit in the back. Arresting people for trying to organize a vote and bludgeoning people trying to vote doesn’t look to very many people like a democratically elected government fulfilling its obligation to protect its citizens. It looks instead like violent political repression, which is, of course, exactly what it was.

Yet the more fundamental problem is disingenuousness. To suggest that the issue with the referendum specifically, and the Catalan government’s pursuit of independence from Spain more generally, is that it is not legal under Spanish law presumes that under Spanish law there exists some legal and democratic path to independence. But the Spanish constitution makes no such provisions for secession. Under Spanish law, and under the Spanish constitution, illegality is effectively built into the pursuit of independence as such and so cannot, on its own, provide grounds for disqualifying this or that particular such pursuit.

... It is true that Catalonia pays more money in taxes to the Spanish government than it receives in return (and no less true that not all of this unreturned tax money is, as the Spanish government claims, redistributed to Spain’s needier communities). But it is also true that Catalonia is an extremely wealthy community with many exceedingly wealthy individuals (many of them current and former Catalan government officials, and many of them suspected of or charged with corruption or fraud). Whatever problems Catalonia may have with social and economic inequities owe as much to the disproportionate distribution of that wealth (when not the outright pillaging of it) by the Catalan government as to its partial appropriation by the even more deeply corrupt central government in Madrid. This is not, in other words, a straightforward case of exploitation.

by Eli S. Evans, N+1 |  Read more:
Image: Adolfo Lujan

Thoughts and Prayers


Thoughts and Prayers, The Game.
via:
[ed. See also: Thoughts and Prayers]

Actually, Do Read the Comments - They Can Be the Best Part

Imagine you want to collect donations for a food bank. You could place an empty box on the street, walk away, and hope there’s food inside when you return. The likely result? Your box will be filled with trash.

Alternatively, you could think strategically. Where should you put the box? Outside a grocery store, perhaps. How will people know what to put in the box? You can write, “Donations for Food Bank,” on the side. You can also stand near the box, so that if people throw trash inside, you can remove it quickly. And when people put tins of food inside, you can make them visible, so others know what to buy inside the store.

Right now, many publishers are placing an empty box at the bottom of their stories and walking away. And then they’re frustrated, maybe even disgusted, at the trash that collects there.

Abuse, trolling, harassment, racism, misogyny—these are all real problems down in the comments, and they’re a symptom of wider problems: societal, yes, but also strategic. The current process goes like this: Journalist writes an article. Article is published. People write comments. Journalist peeks at the comments, and sees a lot of meanness and abuse (especially if they’re a woman, a person of color, or especially a woman of color). Journalist vows not to engage with such horrid readers. The organization listens to its journalists when they say that comments are worthless and puts fewer resources into them. The comments then get worse due to lack of engagement and strategy, leaving the space to a small number of argumentative types corralled by a tiny battle-hardened community team.

A few sites have nixed comments completely, saying that the conversation is now better had elsewhere. “We encourage our audience to continue to interact with us [on social media],” said Al Jazeera English when it removed comments last month.

If a site chooses not to dedicate resources to community management, then closing the comments is probably the best option. However, this is a dangerous and short-sighted position for the news industry to adopt. It’s damaging not only to the bottom line, but also to the future of journalism as an industry.

Let’s start with three of the key metrics that advertisers care about: number of views, time spent on a page, and the loyalty of the audience. Who spends the most time on the page? People reading comments after the article and engaging in the discussion. Who creates multiple page views? Commenters who return to reply to conversations they’re involved in. Who are the most loyal audience members? Almost certainly your commenters.

Earlier this year, The Financial Times found that its commenters are seven times more engaged than the rest of its readers. The Times of London revealed recently that the 4 percent of its readers who comment are by far its most valuable.

“You can see the benefits in terms of engaging readers and renewing subscriptions,” Ben Whitelaw, head of audience development at the Times and The Sunday Times, told the online news site Digiday.

When an organization moves these communities onto Facebook, it is handing over everything to the big blue thumb: all of the readers’ data, the control of the moderation tools, control of the advertising, even the opportunity to manage subscriptions — and all in a place where people are more likely to comment without even opening the article. (Not to mention that Facebook has hardly solved its own abuse problem.)

Yes, some community members can be demanding, argumentative, aggressive, mean. But others can be helpful. David Fahrenthold won a Pulitzer Prize for his investigations into Donald Trump; his readers helped him uncover various pieces of information, including the location of a painting of the now-president that he bought for himself at a charity auction. Comments are where many people share personal anecdotes related to a news story, and where experts sometimes share links to their research. Sometimes, the community can be supportive and meaningful for its members: for example, a commenter on the Carolyn Hax advice blog at The Washington Post was so beloved that the newspaper wrote a piece about her after she died. Memorials to her writing were then left, yes, in the comments.

Commenters can even become potential hires: The Atlantic’s current politics editor, Yoni Appelbaum, was plucked from the comments section. It’s easy to forget that behind the anonymous usernames are real people with something to say. (...)

Right now, many news sites with comments spend their resources policing the actions of a small minority of people. Those organizations need to shift the focus from merely removing the negative to building positive, flexible community spaces, where a small, antisocial subset are no longer able to dominate the space or abuse the people within it with impunity.

Every site needs to be thinking about more than just improving comments. Publishers should also be more clear about what the goals of the space are, and should try to build strong digital communities that members are actively involved in managing. If more comments sections become places where people actually talk to each other—and to the person who wrote the story—they will encourage ideas and empathy, not insults. Potential sources and new story ideas will emerge.

We can achieve this through better technology and more flexible tools, and by hiring people who have actually suffered harassment to help build the solutions. Those tools should make it easy to highlight the best parts of the conversation and for journalists to engage without making themselves vulnerable to abuse. At Coral, where I work, we have two open source tools that are being adopted by newsrooms. Ask collects reader submissions and displays them back to readers. (It’s used by Univision, PBS Frontline, and others.) Talk reimagines how comment moderation and conversation function. (It’s used by the Washington Post, the Brisbane Times, and Estadao in Brazil.) (...)

There is no single approach or tool that will work everywhere. The best community strategies are adaptable. For example: If a topic is unlikely to spur thoughtful discussion in a comment section, the editors should consider other kinds of engagement—such as a form to submit stories and experiences that can help future reporting.

by Andrew Losowsky, Wired | Read more:
Image: Getty

Saturday, October 7, 2017

Christ in the Garden of Endless Breadsticks

In the fall of 1889, when he was 41 years old, the painter Paul Gauguin was brutally, furiously alone. Famous now for his saturated, almost hallucinatory paintings of life in Tahiti, at the time he was living in Brittany, still two years away from his first visit to French Polynesia. He was penniless and adrift, trying to paint his way through the devastations of his dying marriage, his rejection by the cliques of the Parisian art establishment, and the precarity of his friendship with Vincent van Gogh, who shortly before Christmas had assaulted him with a razor and, after Gauguin’s departure that evening, used the same blade to cut off his own ear.

Gauguin and Van Gogh had a tumultuous acquaintance, one that served both men better in writing than in person. In their extensive correspondence, Gauguin — originally a stockbroker — refined his beliefs about the purpose of art. Impressionism had thundered into the salons, upending classical formality and with it the rubrics by which a painting could be considered a success. Beauty was no longer the standard, nor was faithful representation of a subject; the artist himself was now part of the consideration, judged by the nuance of his thoughts and his facility with their artistic evocation. Gauguin was dazzled by this idea of art as a vehicle for emotion, a way to depict not things or people, but their essences.

A religious man, he found profundity in the practice of art: the brushes and paints, the forms and colors on the canvas, and the distillation and expression of his own mind. It was from that last point that his solitude sprang. Gauguin’s contemporaries, including Van Gogh, found it inoffensive — even useful! — to paint from life, referring to models and objects and scenery. To Gauguin, direct observation was anathema, a tool for overwriting the memories and emotions that make a painting worthwhile. He was furious at his cohort for their weakness, disdainful of their inability to see the truth in his vision. He painted it: a garden of sinuous trees, with primitive, black-clad figures in the background hazily merging with the twilight landscape. Filling the foreground is a figure with blazing orange hair and beard, his face — Gauguin’s face — rendered in intricate detail, full of life and warmth, looking to the ground with an expression of infinite wisdom and sorrow.

“There I have painted my own portrait,” he wrote of the work. “But it also represents the crushing of an ideal, and a pain that is both divine and human. Jesus is totally abandoned; his disciples are leaving him, in a setting as sad as his soul.” Gauguin found great richness in the story of Jesus, and often painted himself as the savior. He called this painting, which now hangs in the Norton Museum of Art in West Palm Beach, Florida, Le Christ au Jardin des Oliviers, or, Christ in the Garden of Olives.
***
There are two globally renowned olive gardens: Gethsemane, the grove where Jesus and his disciples prayed the night before his betrayal and crucifixion, its agony painted by Gauguin and by hundreds of other painters, and the fictional Tuscan hillside that lends its name to Olive Garden, a massive restaurant chain with more than 800 locations in North America. The two appear to be unconnected: According to Darden Restaurants, owner of the Olive Garden chain, the phrase is intended to call to mind ideas of the olive harvest and Tuscan authenticity, not the final, anguished night of a prophet, dark hours spent in prayer, wrath, and silence.

Despite the promises of the name, it can be a challenge to find actual olives at Olive Garden. The omission is intentional, though the irony is not. It's a simple matter of marketing: People don't like olives. They don't know what to do with them. They show up occasionally on the menu; their most recent engagement, on a “Mediterranean flatbread,” seems to no longer be available, part of an unbroken chain of olive-adorned dishes that have languished, unordered and unloved, before being dispatched by less culinarily threatening options like Meatball Stuffed Pizza Fritta.

Still, there are two places you'll always find olives at Olive Garden, no matter which way the menu consultants declare that the wind is blowing: The bar, where green spheroids wait, limply piled, to be pressed into service for a martini, and in the salad bowls. Two black olives — exactly two — are supposed to be in every family-size bowl, though when I was at an Olive Garden in Michigan City, Indiana, my server admitted that about half her tables ask for them to be kept out, or simply leave them on the side.

She was a little surprised when I asked where all the olives were — she said it’s usually the middle-aged men who fling that joke at her, which maybe I should have seen coming. According to her, they all order the Tour of Italy, a three-way sampler of lasagna, chicken parm, and fettuccine alfredo. No one really wants to eat any olives. The other joke she gets, usually from the same sort of men, is “Where’s the garden?” No one actually wants to see a garden, they just want to make the pretty waitress blush.

This was the third Olive Garden I’d been to in two weeks, and in the weeks to come I’d eat at half a dozen more — a grand tour of Tours of Italy, a chain of chains stretching from New York to California. The brand is in the middle of a grand reimagining, an overhauling of its hundreds of stores, that will dispense with its tile and faux-stucco and genially middlebrow upholstery in favor of a more streamlined, anodyne aesthetic of white walls, dark wood, and colorblocking. It’s a massive undertaking — not all locations are transforming at once — so while some restaurants I went to have entered the chain’s glossy future, many were still the Olive Gardens of the prior era. In these, you can still find some olives: On the shoulder-height half-walls that carve cavernous dining rooms into sections, sit potted rows of faux olive trees, slim shoots sprouting dusty green leaves and clusters of dark plastic footballs. You can’t eat them, but they remind you that somewhere, the real thing is growing on a real tree, and maybe you could.

I feel an intense affinity for Olive Garden, which — like the lack of olives on its menu — is by design. The restaurant was built for affinity, constructed from the foundations to the faux-finished rafters to create a sense of connection, of vague familiarity, to bring to mind some half-lost memory of old-world simplicity and ease. Even if you’ve never been to the Olive Garden before, you’re supposed to feel like you have. You know the next song that’s going to play. You know how the chairs roll against the carpet. You know where the bathrooms are. Its product is nominally pasta and wine, but what Olive Garden is actually selling is Olive Garden, a room of comfort and familiarity, a place to return to over and over.

In that way, it’s just like any other chain restaurant. For any individual mid-range restaurant, return customers have always been an easy majority of the clientele, and chain-wide, it’s overwhelmingly the case: If you’ve been to one Olive Garden, odds are very high you’ve been to two or more. If the restaurant is doing it right, though, all the Olive Gardens of your life will blur together into one Olive Garden, one host stand, one bar, one catacomb of dining alcoves warmly decorated in Toscana-lite. Each Olive Garden is a little bit different, but their souls are all the same. (...)

It’s not a coincidence that Olive Gardens tend to spring up near highways and shopping malls, within the orbit of mid-range hotels. Chain begets chain, or maybe chains are more comfortable among other chains — and in sufficient concentration they cause a little hiccup in the psychospace of reality, erasing any locality or sense of place, replacing it with a sanitized, brand-driven commercial hospitality. In downtown Salt Lake City or western Massachusetts or on the southern edge of the Chicago suburbs, wherever you see an Olive Garden, you’ll find something like a Quality Inn & Suites nearby. These accretions of commercial activity, stripped from geographic or historical identity, are what the French anthropologist Marc Augé talks about as “non-places.” (He also finds non-place in, of all places, Tahiti — specifically as seen through the eyes of a traveler, someone who is more interested in the fulfillment of his self-conception than in the spectacle that surrounds him.) What it means to be a non-place is the same thing it means to be a chain: A plural nothingness, a physical space without an anchor to any actual location on Earth, or in time, or in any kind of spiritual arc. In its void, it simply is.

Despite its flirtation with the existential abyss, a non-place isn't necessarily a bad thing for a place to be. It may be bad sometimes, or even frequently, but it isn’t always. One of the things I love about the Olive Garden, the reason I continue to love it, despite its gummy pasta and its maladaptive, kale-forward response to modern food culture, is its nowhereness. I love that I can walk in the door of an Olive Garden in Michigan City, Indiana, and feel like I’m in the same room I enter when I step into an Olive Garden in Queens or Rhode Island or the middle of Los Angeles. There is only one Olive Garden, but it has a thousand doors. (...)

The well-paid suits who run Olive Garden have tried, many times, to breathe new life into their chain, and it always backfires spectacularly. They’ve flirted with small plates, they put kale and polenta on the menu, they recently started slicing the breadsticks down the middle and making sandwiches out of them. Most tables and bar seats have little unobtrusive video screens on which customers can hail their server for a refill, or pay $1.99 to test their trivia knowledge against other players who allegedly are real, but almost certainly are not. At most locations, the fake olive plants with their twisty branches have already been chucked in the trash, the walls have been un-stuccoed, and the chairs have been stripped of their exquisitely smooth-rolling wheels. By next year, they’ll all be gone.

Every time Olive Garden tries to freshen its image, to move away from its cultural role as a punchline for faux authenticity and mediocre mall food, everything collapses. Nobody wants to eat kale at Olive Garden. Nobody wants garlic hummus. We want soup and salad and unlimited breadsticks, we want never-ending bowls of pasta with a variety of sauces, we want giant glasses full of Coke and tiny wine glasses full of plonky reds and fruity whites. Just about the only stunt Olive Garden has ever pulled that’s been successful — and it’s been a raging success, an astounding, nearly unbelievable one — has been the Pasta Pass. For $100, you can buy a card that entitles you to seven weeks of unlimited soup, salad, and breadsticks, and unlimited never-ending pasta bowls. Or you could buy it, if you were one of the 22,000 people who managed to snatch them up before they sold out in one second. One. Second. That’s how much no one cares if Olive Garden serves kale.

by Helen Rosner, Eater |  Read more:
Image: Paul Gauguin, Christ in the Garden of Olives
[ed. See also: As Goes the Middle Class, So Goes TGI Fridays]

Friday, October 6, 2017

How Economists Turned Corporations into Predators

The Idea That Businesses Exist Solely to Enrich Shareholders Is Harmful Nonsense

In a new INET paper featured in the Financial Times, economist William Lazonick lays out a theory about how corporations can work for everyone – not just a few executives and Wall Streeters. He challenges a set of controversial ideas that became gospel in business schools and the mainstream media starting in the 1980s. He sat down with INET’s Lynn Parramore to discuss.

Lynn Parramore: Since the 1980s, business schools have touted “agency theory,” a controversial set of ideas meant to explain how corporations best operate. Proponents say that you run a business with the goal of channeling money to shareholders instead of, say, creating great products or making any efforts at socially responsible actions such as taking account of climate change. Many now take this view as gospel, even though no less a business titan than Jack Welch, former CEO of GE, called the notion that a company should be run to maximize shareholder value “the dumbest idea in the world.” Why did Welch say that?

William Lazonick: Welch made that statement in a 2009 interview, just ahead of the news that GE had lost its S&P Triple-A rating in the midst of the financial crisis. He explained that, “shareholder value is a result, not a strategy” and that a company’s “main constituencies are your employees, your customers and your products.” During his tenure as GE CEO from 1981 to 2001, Welch had an obsession with increasing the company’s stock price and hitting quarterly earnings-per-share targets, but he also understood that revenues come when your company generates innovative products. He knew that the employees’ skills and efforts enable the company to develop those products and sell them.

If a publicly-listed corporation succeeds in creating innovative goods or services, then shareholders stand to gain from dividend payments if they hold shares or if they sell at a higher price. But where does the company’s value actually come from? It comes from employees who use their collective and cumulative learning to satisfy customers with great products. It follows that these employees are the ones who should be rewarded when the business is a success. We’ve become blinded to this simple, obvious logic.

LP: What have these academic theorists missed about how companies really operate and perform? How have their views impacted our economy and society?

WL: As I show in my new INET paper “Innovative Enterprise Solves the Agency Problem,” agency theorists don’t have a theory of innovative enterprise. That’s strange, since they are talking about how companies succeed.

They believe that to be efficient, business corporations should be run to “maximize shareholder value.” But as I have argued in another recent INET paper, public shareholders at a company like GE are not investors in the company’s productive capabilities.

LP: Wait, as a stockholder I’m not an investor in the company’s capabilities?

WL: When you buy shares of a stock, you are not creating value for the company — you’re just a saver who buys shares outstanding on the stock market for the sake of a yield on your financial portfolio. Public shareholders are value extractors, not value creators.

By touting public shareholders as a corporation’s value creators, agency theorists lay the groundwork for some very harmful activities. They legitimize “hedge fund activists,” for example. These are aggressive corporate predators who buy shares of a company on the stock market and then use the power bestowed upon them by the ill-conceived U.S. proxy voting system, endorsed by the Securities and Exchange Commission (SEC), to demand that the corporation inflate profits by cutting costs. That often means mass layoffs and depressed incomes for anybody who remains. In an industry like pharmaceuticals, the activists also press for extortionate product price increases. The higher profits tend to boost stock prices for the activists and other shareholders if they sell their shares on the market.

LP: So the hedge fund activists are extracting value from a corporation instead of creating it, and yet they are the ones who get enriched.

WL: Right. Agency theory aids and abets this value extraction by advocating, in the name of “maximizing shareholder value,” massive distributions to shareholders in the form of dividends for holding shares as well as stock buybacks that you hear about, which give manipulative boosts to stock prices. Activists get rich when they sell the shares. The people who created the value — the employees — often get poorer.

This is what I call “downsize-and-distribute” — something that corporations have been doing since the 1980s, which has resulted in extreme concentration of income among the richest households and the erosion of middle-class employment opportunities.

LP: You’ve called stock buybacks — what happens when a company buys back its own shares from the marketplace, often to manipulate the stock price upwards — the “legalized looting of the U.S. business corporation.” What’s the problem with this practice?

WL: If you buy shares in Apple, for example, you can get a dividend for holding shares and, possibly, a capital gain when you sell the shares. Since 2012, when Apple made its first dividend payment since 1996, the company has shelled out $57.4 billion as dividends, equivalent to over 22 percent of net income. That’s fine. But the company has also spent $157.9 billion on stock buybacks, equal to 62 percent of net income.
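[ed. The payout ratios Lazonick quotes can be sanity-checked with a little arithmetic. The dollar figures below are the ones from the interview; the implied cumulative net income is a back-calculation, not a figure he gives.]

```python
# Apple's shareholder payouts since 2012, as quoted in the interview ($ billions).
dividends = 57.4   # said to be "over 22 percent of net income"
buybacks = 157.9   # said to be "62 percent of net income"

# Back out the implied cumulative net income from the buyback ratio.
net_income = buybacks / 0.62
print(f"Implied net income: ${net_income:.0f}B")                         # ≈ $255B
print(f"Dividend payout ratio: {dividends / net_income:.0%}")            # ≈ 23%
print(f"Total distributed to shareholders: {(dividends + buybacks) / net_income:.0%}")  # ≈ 85%
```

The two quoted ratios are mutually consistent: together they imply roughly 85 cents of every dollar of profit in that period went out the door to shareholders.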

Yet the only time in its history that Apple ever raised funds on the public stock market was in 1980, when it collected $97 million in its initial public offering. How can a corporation return capital to parties that never supplied it with capital? It’s a very misleading concept.

The vast majority of people who hold Apple’s publicly-listed shares have simply bought outstanding shares on the stock market. They have contributed nothing to Apple’s value-creating capabilities. That includes veteran corporate raider Carl Icahn, who raked in $2 billion by holding $3.6 billion in Apple shares for about 32 months, while using his influence to encourage Apple to do $80.3 billion in buybacks in 2014-2015, the largest repurchases ever. Over this period, Apple, the most cash-rich company in history, increased its debt by $47.6 billion to do buybacks so that it would not have to repatriate its offshore profits, sheltered from U.S. corporate taxes.
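[ed. For scale, here is a rough annualization of the Icahn numbers mentioned above — a $2 billion gain on a $3.6 billion stake held about 32 months. The compounding formula is standard; the precision is only as good as the round figures quoted.]

```python
# Rough annualized return on Icahn's Apple position, per the quoted figures.
stake, gain, months = 3.6, 2.0, 32   # $ billions, $ billions, holding period

total_return = gain / stake                        # return over the whole holding period
annualized = (1 + total_return) ** (12 / months) - 1  # compound annual growth rate
print(f"Total return: {total_return:.1%}")         # ≈ 55.6%
print(f"Annualized:   {annualized:.1%}")           # ≈ 18.0% per year
```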

There are many ways in which the company could have returned its profits to employees and taxpayers — the real value creators — that are consistent with an innovative business model. Instead, in doing massive buybacks, Apple’s board (which includes former Vice President Al Gore) has endorsed legalized looting. The SEC bears a lot of blame. It’s supposed to protect investors and make sure financial markets are free of manipulation. But back in 1982, the SEC bought into agency theory under Reagan and came up with a rule that gives corporate executives a “safe harbor” against charges of stock-price manipulation when they do billions of dollars of buybacks for the sole purpose of manipulating their company’s stock price.

LP: But don’t shareholders deserve some of the profits as part owners of the corporation?

WL: Let’s say you buy stock in General Motors. You are just buying a share that is outstanding on the market. You are contributing nothing to the company. And you will only buy the shares because the stock market is highly liquid, enabling you to easily sell some or all of the shares at any moment that you so choose.

In contrast, people who work for General Motors supply skill and effort to generate the company’s innovative products. They are making productive contributions with expectations that, if the innovative strategy is successful, they will share in the gains — a bigger paycheck, employment security, a promotion. In providing their labor services, these employees are the real value creators whose economic futures are at risk.

by Lynn Parramore and William Lazonick, Institute for New Economic Thinking via: Naked Capitalism | Read more:

Sorrento's


Sorrento's pizza (Anchorage). World's best.
photo: markk