Thursday, February 22, 2018

The Poison We Pick

This nation pioneered modern life. Now epic numbers of Americans are killing themselves with opioids to escape it.

How does an opioid make you feel? We tend to avoid this subject in discussing recreational drugs, because no one wants to encourage experimentation, let alone addiction. And it’s easy to believe that weak people take drugs for inexplicable, reckless, or simply immoral reasons. What few are prepared to acknowledge in public is that drugs alter consciousness in specific and distinct ways that seem to make people at least temporarily happy, even if the consequences can be dire. Fewer still are willing to concede that there is a significant difference between these various forms of drug-induced “happiness” — that the draw of crack, say, is vastly different than that of heroin. But unless you understand what users get out of an illicit substance, it’s impossible to understand its appeal, or why an epidemic takes off, or what purpose it is serving in so many people’s lives. And it is significant, it seems to me, that the drugs now conquering America are downers: They are not the means to engage in life more vividly but to seek a respite from its ordeals.

The alkaloids that opioids contain have a large effect on the human brain because they tap into our natural “mu-opioid” receptors. The oxytocin we experience from love or friendship or orgasm is chemically replicated by the molecules derived from the poppy plant. It’s a shortcut — and an instant intensification — of the happiness we might ordinarily experience in a good and fruitful communal life. It ends not just physical pain but psychological, emotional, even existential pain. And it can easily become a lifelong entanglement for anyone it seduces, a love affair in which the passion is more powerful than even the fear of extinction.

Perhaps the best descriptions of the poppy’s appeal come to us from the gifted writers who have embraced and struggled with it. Many of the Romantic luminaries of the early-19th century — including the poets Coleridge, Byron, Shelley, Keats, and Baudelaire, and the novelist Walter Scott — were as infused with opium as the late Beatles were with LSD. And the earliest and in many ways most poignant account of what opium and its derivatives feel like is provided by the classic memoir Confessions of an English Opium-Eater, published in 1821 by the writer Thomas De Quincey.

De Quincey suffered trauma in childhood, losing his sister when he was 6 and his father a year later. Throughout his life, he experienced bouts of acute stomach pain, as well as obvious depression, and at the age of 19 he endured 20 consecutive days of what he called “excruciating rheumatic pains of the head and face.” As his pain drove him mad, he finally went into an apothecary and bought some opium (which was legal at the time, as it was across the West until the war on drugs began a century ago).

An hour after he took it, his physical pain had vanished. But he was no longer even occupied by such mundane concerns. Instead, he was overwhelmed with what he called the “abyss of divine enjoyment” that overcame him: “What an upheaving from its lowest depths, of the inner spirit! … here was the secret of happiness, about which philosophers had disputed for many ages.” The sensation from opium was steadier than alcohol, he reported, and calmer. “I stood at a distance, and aloof from the uproar of life,” he wrote. “Here were the hopes which blossom in the paths of life, reconciled with the peace which is in the grave.” A century later, the French writer Jean Cocteau described the experience in similar ways: “Opium remains unique and the euphoria it induces superior to health. I owe it my perfect hours.”

The metaphors used are often of lightness, of floating: “Rising even as it falls, a feather,” as William Brewer, America’s poet laureate of the opioid crisis, describes it. “And then, within a fog that knows what I’m going to do, before I do — weightlessness.” Unlike cannabis, opium does not make you want to share your experience with others, or make you giggly or hungry or paranoid. It seduces you into solitude and serenity and provokes a profound indifference to food. Unlike cocaine or crack or meth, it doesn’t rev you up or boost your sex drive. It makes you drowsy — somniferum means “sleep-inducing” — and lays waste to the libido. Once the high hits, your head begins to nod and your eyelids close. (...)

One of the more vivid images that Americans have of drug abuse is of a rat in a cage, tapping a cocaine-infused water bottle again and again until the rodent expires. Years later, as recounted in Johann Hari’s epic history of the drug war, Chasing the Scream, a curious scientist replicated the experiment. But this time he added a control group. In one cage sat a rat and a water dispenser serving diluted morphine. In another cage, with another rat and an identical dispenser, he added something else: wheels to run in, colored balls to play with, lots of food to eat, and other rats for the junkie rodent to play or have sex with. Call it rat park. And the rats in rat park consumed just one-fifth of the morphine water of the rat in the cage. One reason for pathological addiction, it turns out, is the environment. If you were trapped in solitary confinement, with only morphine to pass the time, you’d die of your addiction pretty swiftly too. Take away the stimulus of community and all the oxytocin it naturally generates, and an artificial variety of the substance becomes much more compelling.

One way of thinking of postindustrial America is to imagine it as a former rat park, slowly converting into a rat cage. Market capitalism and revolutionary technology in the past couple of decades have transformed our economic and cultural reality, most intensely for those without college degrees. The dignity that many working-class men retained by providing for their families through physical labor has been greatly reduced by automation. Stable family life has collapsed, and the number of children without two parents in the home has risen among the white working and middle classes. The internet has ravaged local retail stores, flattening the uniqueness of many communities. Smartphones have eviscerated those moments of oxytocin-friendly actual human interaction. Meaning — once effortlessly provided by a more unified and often religious culture shared, at least nominally, by others — is harder to find, and the proportion of Americans who identify as “nones,” with no religious affiliation, has risen to record levels. Even as we near peak employment and record-high median household income, a sense of permanent economic insecurity and spiritual emptiness has become widespread. Some of that emptiness was once assuaged by a constantly rising standard of living, generation to generation. But that has now evaporated for most Americans. (...)

It’s been several decades since Daniel Bell wrote The Cultural Contradictions of Capitalism, but his insights have proven prescient. Ever-more-powerful market forces actually undermine the foundations of social stability, wreaking havoc on tradition, religion, and robust civil associations, destroying what conservatives value the most. They create a less human world. They make us less happy. They generate pain.

This was always a worry about the American experiment in capitalist liberal democracy. The pace of change, the ethos of individualism, the relentless dehumanization that capitalism abets, the constant moving and disruption, combined with a relatively small government and the absence of official religion, risked the construction of an overly atomized society, where everyone has to create his or her own meaning, and everyone feels alone. The American project always left an empty center of collective meaning, but for a long time Americans filled it with their own extraordinary work ethic, an unprecedented web of associations and clubs and communal or ethnic ties far surpassing Europe’s, and such a plethora of religious options that almost no one was left without a purpose or some kind of easily available meaning to their lives. Tocqueville marveled at this American exceptionalism as the key to democratic success, but he worried that it might not endure forever.

And it hasn’t. What has happened in the past few decades is an accelerated waning of all these traditional American supports for a meaningful, collective life, and their replacement with various forms of cheap distraction. Addiction — to work, to food, to phones, to TV, to video games, to porn, to news, and to drugs — is all around us. The core habit of bourgeois life — deferred gratification — has lost its grip on the American soul. We seek the instant, easy highs, and it’s hard not to see this as the broader context for the opioid wave. This was not originally a conscious choice for most of those caught up in it: Most were introduced to the poppy’s joys by their own family members and friends, the last link in a chain that included the medical establishment and began with the pharmaceutical companies. It may be best to think of this wave therefore not as a function of miserable people turning to drugs en masse but of people who didn’t realize how miserable they were until they found out what life without misery could be. To return to their previous lives became unthinkable. For so many, it still is. (...)

To see this epidemic as simply a pharmaceutical or chemically addictive problem is to miss something: the despair that currently makes so many want to fly away. Opioids are just one of the ways Americans are trying to cope with an inhuman new world where everything is flat, where communication is virtual, and where those core elements of human happiness — faith, family, community — seem to elude so many. Until we resolve these deeper social, cultural, and psychological problems, until we discover a new meaning or reimagine our old religion or reinvent our way of life, the poppy will flourish.

by Andrew Sullivan, NY Magazine |  Read more:
Image: Joe Darrow
[ed. Finally someone gets it... it's an epidemic of despair.]

Wednesday, February 21, 2018

Let's Get Ready to Rumble: Trademarking Your Catch Phrase

You and your significant other go to the movies. During the coming attractions, you nearly choke on some Raisinettes when you hear something familiar: a series of catchy words spoken by the lead character. That was your catch phrase! You'd even considered printing it up on some t-shirts. Was it stolen? Well, maybe, but then again, maybe not. This article will give you some tips on ensuring that your catch phrase idea stays just that, yours.

A catch phrase is an expression usually popularized through repeated use by a real person or fictional character. Today, catch phrases are increasingly seen as an important component of marketing and promoting a product or service. See if you recognize some of these well-known catch phrases:

"Ancient Chinese secret, huh?" - from a Calgon commercial

"Dy-no-mite!" - Jimmie Walker as J.J. Evans from "Good Times"

"Hasta la vista, baby." - The Terminator

"Show me the money!" - Tom Cruise in "Jerry Maguire"

"Whazzup?" - Budweiser ad campaign

"Where's the Beef?" - Clara Peller in a Wendy's commercial

A catch phrase is essentially a trademark. A trademark is any word, name, slogan, design, or symbol used in commerce to identify a particular product and distinguish it from others. Like copyrights, trademarks are protected as a form of property. Owners of valid trademarks are granted exclusive rights to their use in commerce. The main purpose of trademark protection is to increase the reliability of marketplace identification and thereby help consumers select goods and services. A distinctive trademark quickly identifies a product, and over time the mark may be equated with a particular level of quality.

As with copyrights, legal rights to trademarks arise automatically without governmental formalities. But unlike copyrights, trademark rights don't begin at the moment a word, symbol, or phrase is first scribbled on paper. Rather, trademark rights stem from the actual use of a distinctive mark in commerce.

If you develop a catch phrase, you should register it with the US Patent and Trademark Office (USPTO). You might wonder why this is important, but the benefits of federal registration will hopefully convince you. First, registration creates a legal presumption that the trademark is valid. This protects you in case of an infringement lawsuit. If you’ve registered the mark, the burden of proof shifts to the defendant to show why the registered mark is undeserving of protection. Second, registration is a nationwide notice of the registrant’s claim of ownership. That means someone else using the mark in another part of the country can’t claim territorial ownership rights. Third, federal registration comes with the right to file infringement lawsuits in the federal courts. In case you’re not already convinced, registration simply serves as a deterrent for others who won’t use your mark for fear of a legal battle.

Federal registration only goes so far. The owner of the mark still bears the burden of protecting it. The primary method of protecting a catch phrase is to file an infringement lawsuit. The plaintiff may sue for financial damages, an injunction against further use, or both. The basic test in infringement lawsuits is whether the allegedly infringing phrase is similar enough to create a "likelihood of confusion."

In short, here's how to get the maximum mileage out of your catch phrase: Develop a distinctive one, use it in interstate commerce, and register it with the US Patent and Trademark Office.

by Donald R. Simon, Legal Zoom |  Read more:

The Rise of Virtual Citizenship

“If you believe you are a citizen of the world, you are a citizen of nowhere. You don’t understand what citizenship means,” the British prime minister, Theresa May, declared in October 2016. Not long after, at his first postelection rally, Donald Trump asserted, “There is no global anthem. No global currency. No certificate of global citizenship. We pledge allegiance to one flag and that flag is the American flag.” And in Hungary, Prime Minister Viktor Orbán has increased his national-conservative party’s popularity with statements like “all the terrorists are basically migrants” and “the best migrant is the migrant who does not come.”

Citizenship and its varying legal definition has become one of the key battlegrounds of the 21st century, as nations attempt to stake out their power in a G-Zero, globalized world, one increasingly defined by transnational, borderless trade and liquid, virtual finance. In a climate of pervasive nationalism, jingoism, xenophobia, and ever-building resentment toward those who move, it’s tempting to think that doing so would become more difficult. But alongside the rise of populist, identitarian movements across the globe, identity itself is being virtualized, too. It no longer needs to be tied to place or nation to function in the global marketplace.

Hannah Arendt called citizenship “the right to have rights.” Like any other right, it can be bestowed and withheld by those in power, but in its newer forms it can also be bought, traded, and rewritten. Virtual citizenship is a commodity that can be acquired through the purchase of real estate or financial investments, subscribed to via an online service, or assembled by peer-to-peer digital networks. And as these options become available, they’re also used, like so many technologies, to exclude those who don’t fit in.

In a world that increasingly operates online, geography and physical infrastructure still remain crucial to control and management. Undersea fiber-optic cables trace the legacy of imperial trading routes. Google and Facebook erect data centers in Scandinavia and the Pacific Northwest, close to cheap hydroelectric power and natural cooling. The trade in citizenship itself often manifests locally as architecture. From luxury apartments in the Caribbean and the Mediterranean to data centers in Europe and refugee settlements in the Middle East, a scattered geography of buildings brings a different reality into focus: one in which political decisions and national laws transform physical space into virtual territory.

The sparkling seafront of Limassol, the second-largest city in Cyprus, stretches for several miles along the southwestern coast of the island. In recent years it has become particularly popular among Russian tourists and emigrants, who have settled in the area. Almost 20 percent of the population is now Russian-speaking. Along 28 October Avenue, which borders the seafront, new towers have sprung up, as well as a marina and housing complex, filled with international coffee and restaurant chains. The 19-floor Olympic Residence towers are the tallest residential buildings on the island, along with the Oval building, a 16-floor structure shaped like its name. Soon a crop of new skyscrapers will join them, including three 37- to 39-story towers called Trilogy and the 170-meter One building. Each building’s website features text in English, Russian, and in several cases, Chinese. China’s Juwai property portal lists other, cheaper options, from hillside holiday apartments to sprawling villas. Many are illustrated with computer renderings—they haven’t actually been built yet.

The appeal of Limassol isn’t limited to its excellent climate and proximity to the ocean. The real attraction, as many of the advertisements make clear, is citizenship. The properties are proxies for a far more valuable prize: a golden visa.

Visas are nothing new; they allow foreigners to travel and work within a host nation’s borders for varying lengths of time. But the golden visa is a relatively recent innovation. Pioneered in the Caribbean, golden visas trade citizenship for cash by setting a price on passports. If foreign nationals invest in property above a certain price threshold, they can buy their way into a country—and beyond, once they hold a citizenship and passport.

A luxury holiday home on Saint Kitts and Nevis or Grenada in the West Indies might be useful for those looking to take advantage of those islands’ liberal tax regimes. But a passport acquired through Cyprus’s golden-visa scheme makes the bearer a citizen of the European Union, with all the benefits that accrue therewith. Moreover, there’s no requirement to reside in or even to visit Cyprus. The whole business, including acquisition of suitably priced real estate, can be carried out without ever setting foot on the island. The real estate doesn’t even have to exist yet—it can be completely virtual, just a computer rendering on a website. All for just 2 million euros, the minimum spend for the citizenship by investment.

As a result, Cypriot real-estate websites are filled with investment guides and details on how to apply for a new passport. This is the new era of virtual citizenship, where your papers and your identity—and all the rights that flow from them—owe more to legal frameworks and investment vehicles than any particular patch of ground where you might live. (...)

Juwai, the Chinese portal, casts a wider eye than just Cyprus. Its website hosts a side-by-side comparison of various golden-visa schemes, laying out the costs and benefits of each, from the price of the investment to how long buyers must wait for a new passport to come through. Not all the schemes are created equally. Cyprus’s neighbor Greece has one of the cheapest schemes going, with residency available for just 250,000 euros. But that’s only residency—the right to stay in the country—not local, let alone EU, citizenship, which can take years to obtain and might never be granted. Sometimes the schemes have gone awry, too. Some 400,000 foreign investors in Portugal’s 500,000-euro golden-visa scheme have been left in limbo by bureaucratic collapse, waiting years for a passport which was promised within months. Chinese homeowners have been forced to fly in and out of the country every couple of months in order to maintain short-term visas, despite having paid thousands for property. (...)

The world is in the midst of the greatest movement of people since the end of World War II, and the combination of increasing global inequality and climate change will only increase its pace. Two hundred million people are on the move now, and as many as a billion might become migratory by 2050. Citizenship, the only tool we have for guaranteeing rights and responsibilities in a world of nation-states, is subject to increasing pressure to adapt. Today’s virtual citizenship caters mostly to the wealthy, or the poor. Could tomorrow provide new opportunities for everyone? And if possible, will the results look more like what’s been done for the global elite or for the most disadvantaged?

by James Bridle, The Atlantic |  Read more:
Image: Sean Gallup / Getty

‘The Twilight Zone,’ from A to Z

The planet has been knocked off its elliptical orbit and overheats as it hurtles toward the sun; the night ceases to exist, oil paintings melt, the sidewalks in New York are hot enough to fry an egg on, and the weather forecast is “more of the same, only hotter.” Despite the unbearable day-to-day reality of constant sweat, the total collapse of order and decency, and, above all, the scarcity of water, Norma can’t shake the feeling that one day she’ll wake up and find that this has all been a dream. And she’s right. Because the world isn’t drifting toward the sun at all, it’s drifting away from it, and the paralytic cold has put Norma into a fever dream.

This is “The Midnight Sun,” my favorite episode of The Twilight Zone, and one that has come to seem grimly familiar. I also wake up adrift, in a desperate and unfamiliar reality, wondering if the last year in America has been a dream—I too expect catastrophe, but it’s impossible to know from which direction it will come, whether I am right to trust my senses or if I’m merely sleepwalking while the actual danger becomes ever-more present. One thing I do know is that I’m not alone: since the election of Donald Trump, it’s become commonplace to compare the new normal to living in the Twilight Zone, as Paul Krugman did in a 2017 New York Times op-ed titled “Living in the Trump Zone,” in which he compared the President to the all-powerful child who terrorizes his Ohio hometown in “It’s a Good Life,” policing their thoughts and arbitrarily striking out at the adults. But these comparisons do The Twilight Zone a disservice. The show’s articulate underlying philosophy was never that life is topsy-turvy, things are horribly wrong, and misrule will carry the day—it is instead a belief in a cosmic order, of social justice and a benevolent irony that, in the end, will wake you from your slumber and deliver you unto the truth.

The Twilight Zone has dwelt in the public imagination, since its cancellation in 1964, as a synecdoche for the kind of neat-twist ending exemplified by “To Serve Man” (it’s a cookbook), “The After Hours” (surprise, you’re a mannequin), and “The Eye of the Beholder” (everyone has a pig-face but you). It’s probably impossible to feel the original impact of each show-stopping revelation, as the twist ending has long since been institutionalized, clichéd, and abused in everything from the 1995 film The Usual Suspects to Twilight Zone-style anthology series like Black Mirror. Rewatching these episodes with the benefit of Steven Jay Rubin’s new, 429-page book, The Twilight Zone Encyclopedia, (a bathroom book if ever I saw one), I realized that the punchlines are actually the least reason for the show’s enduring hold over the imagination. That appeal lies, rather, in its creator Rod Serling’s rejoinders to the prevalent anti-Communist panic that gripped the decade: stories of witch-hunting paranoia tend to end badly for everyone, as in “The Monsters Are Due on Maple Street,” in which the population of a town turns on each other in a panic to ferret out the alien among them, or in “Will the Real Martian Please Stand Up?” which relocates the premise to a diner in which the passengers of a bus are temporarily stranded and subject to interrogation by a pair of state troopers.

The show’s most prevalent themes are probably best distilled as “you are not what you took yourself to be,” “you are not where you thought you were,” and “beneath the façade of mundane American society lurks a cavalcade of monsters, clones, and robots.” Serling had served as a paratrooper in the Philippines in 1945 and returned with PTSD; he and his eventual audience were indeed caught between the familiar past and an unknown future. They stood dazed in a no-longer-recognizable world, flooded with strange new technologies, vastly expansionist corporate or federal jurisdictions, and once-unfathomable ideologies. The culture was shifting from New Deal egalitarianism to the exclusionary persecution and vigilantism of McCarthyism, the “southern strategy” of Goldwater and Nixon, and the Cold War-era emphasis on mandatory civilian conformity, reinforced across the board in schools and the media. In “The Obsolete Man,” a totalitarian court tries a crusty, salt-of-the-earth librarian (played by frequent Twilight Zone star Burgess Meredith, blacklisted since the 1950s, who breaks his glasses in “Time Enough At Last” and plays the titular milquetoast in “Mr. Dingle, the Strong”), who has outlived his bookish medium; but his obsolescence is something every US veteran would have recognized given the gulf between the country they defended and the one that had so recently taken root and was beginning to resemble, in its insistence on purity and obedience to social norms, the fascist states they had fought against in the war. From Serling’s opening narration:
You walk into this room at your own risk, because it leads to the future, not a future that will be but one that might be. This is not a new world, it is simply an extension of what began in the old one. It has patterned itself after every dictator who has ever planted the ripping imprint of a boot on the pages of history since the beginning of time. It has refinements, technological advances, and a more sophisticated approach to the destruction of human freedom. But like every one of the superstates that preceded it, it has one iron rule: logic is an enemy and truth is a menace. (...)
And then there’s the remarkable case of Charles Beaumont, the most prolific and celebrated of the show’s writers next to Serling. At the time of his death at thirty-eight in 1967, he physically resembled a man of ninety years old, having abruptly aged into unrecognizable infirmity—due, depending on whom you ask, to a unique combination of Alzheimer’s and Pick’s Disease, or an addiction to Bromo-Seltzer, a shady over-the-counter antacid and hangover cure that was withdrawn from the market in 1975 due to toxicity. Beaumont was credited with twenty-two episodes of The Twilight Zone, including “Living Doll,” “Number 12 Looks Just Like You,” and “The Howling Man,” the last adapted from one of his many short stories (collected by Penguin Classics in 2015, with an afterword by William Shatner). While Serling’s Twilight Zone scripts tended to concentrate on supernatural reversals of social norms or the just deserts of assorted pretenders, reactionaries, and bigots, Beaumont’s topics trafficked in existential despair, returning to themes of futility and isolation. A man on death row is caught in a cyclical dream where the stay of execution always arrives too late (“Shadow Play”); a lonely man can only function inside the fantasies taking place inside a dollhouse (“Miniature”); a dead man quickly tires of Heaven (“A Nice Place to Visit”); and in “Printer’s Devil,” a beleaguered small-town publisher, despairing of the death of print in 1963, more-or-less-knowingly hires the Devil as his new linotype operator (Burgess Meredith again). Charles Beaumont’s cultural contribution might, in other words, be termed the most salient and pitch-black representations of irony-in-action to have graced the small screen when The Twilight Zone began airing on CBS in 1959. More than any writer up to that point, Beaumont discovered an intersection between pulp fare and Sophocles, making sociopolitical morality plays out of dime-rack science fiction, a contribution—shared with Serling—without which contemporary pop culture, with its strong tendency to couch social commentary in a metaphysical vernacular borrowed from comics and monster movies, would be impossible to imagine. (...)

Television idea or not, The Twilight Zone was an American idea, and one whose commitment to the ideals of equanimity, brotherhood, and social activism gave rise to satire at its most pointed and Juvenalian, disguised as a supernatural anthology series. Educated at the Ohio liberal arts college Antioch, Serling recalled in his last interview, before dying during heart surgery in 1975 at the age of fifty, that he was motivated by his disgust at postwar bias and prejudice, which he railed against so virulently that he confessed “to creating daydreams about how I could… bump off some of these pricks.” But writing ultimately covers more ground, and Serling confined his daydreams to television and film (he famously co-wrote Planet of the Apes, another buffet of Cold War anxieties served up as an alternate-reality blockbuster).

It’s not quite the case that The Twilight Zone has been consistently influential since its early 1960s heyday. Instead, the anthology of “weird tales” format comes into vogue every fifteen years or so, with The Twilight Zone providing the obvious benchmark. Serling contributed to and hosted one of the first of these, Night Gallery, but rightly recognized that the new breed of shows abandoned the real spirit of The Twilight Zone in favor of cheap scares and special effects. Of its contemporary heirs, Black Mirror most resembles The Twilight Zone’s perception of technology as a tool for flattening the individual beneath the corrupting gullibility of the masses. But, with the exception of the very best episodes—such as the immediately canonized “San Junipero” and fourth season premiere “USS Callister,” both of which introduce simulated realities where disenfranchised members of society can dwell indefinitely—these current episodes are merely chilling visions of what already is, in which technology is the villain, not people.

by J.W. McCormack, NY Review of Books | Read more:
Image: via

Tuesday, February 20, 2018

The Singular Pursuit of Comrade Bezos

It was explicitly and deliberately a ratchet, designed to effect a one-way passage from scarcity to plenty by way of stepping up output each year, every year, year after year. Nothing else mattered: not profit, not the rate of industrial accidents, not the effect of the factories on the land or the air. The planned economy measured its success in terms of the amount of physical things it produced.
— Francis Spufford,
Red Plenty

But isn’t a business’s goal to turn a profit? Not at Amazon, at least in the traditional sense. Jeff Bezos knows that operating cash flow gives the company the money it needs to invest in all the things that keep it ahead of its competitors, and recover from flops like the Fire Phone. Up and to the right.
— Recode, “Amazon’s Epic 20-Year Run as a Public Company, Explained in Five Charts”


From a financial point of view, Amazon doesn’t behave much like a successful 21st-century company. Amazon has not bought back its own stock since 2012. Amazon has never offered its shareholders a dividend. Unlike its peers Google, Apple, and Facebook, Amazon does not hoard cash. It has only recently started to record small, predictable profits. Instead, whenever it has resources, Amazon invests in capacity, which results in growth at a ridiculous clip. When the company found itself with $13.8 billion lying around, it bought a grocery chain for $13.7 billion. As the Recode story referenced above summarizes in one of the graphs: “It took Amazon 18 years as a public company to catch Walmart in market cap, but only two more years to double it.” More than a profit-seeking corporation, Amazon is behaving like a planned economy.

If there is one story Americans who grew up after the fall of the Berlin Wall know about planned economies, I’d wager it’s the one about Boris Yeltsin in a Texas supermarket.

In 1989, recently elected to the Supreme Soviet, Yeltsin came to America, in part to see Johnson Space Center in Houston. On an unscheduled jaunt, the Soviet delegation visited a local supermarket. Photos from the Houston Chronicle capture the day: Yeltsin, overcome by a display of Jell-O Pudding Pops; Yeltsin inspecting the onions; Yeltsin staring down a full display of shiny produce like a line of enemy soldiers. Planning could never master the countless variables that capitalism calculated using the tireless machine of self-interest. According to the story, the overflowing shelves filled Yeltsin with despair for the Soviet system, turned him into an economic reformer, and spelled the end for state socialism as a global force. We’re taught this lesson in public schools, along with Animal Farm: Planned economies do not work.

It’s almost 30 years later, but if Comrade Yeltsin had visited today’s most-advanced American grocery stores, he might not have felt so bad. Journalist Hayley Peterson summarized her findings in the title of her investigative piece, “‘Seeing Someone Cry at Work Is Becoming Normal’: Employees Say Whole Foods Is Using ‘Scorecards’ to Punish Them.” The scorecard in question measures compliance with the (Amazon subsidiary) Whole Foods OTS, or “on-the-shelf” inventory management. OTS is exhaustive, replacing a previously decentralized system with inch-by-inch centralized standards. Those standards include delivering food from trucks straight to the shelves, skipping the expense of stockrooms. This has resulted in produce displays that couldn’t bring down North Korea. Has Bezos stumbled into the problems with planning?

Although OTS was in play before Amazon purchased Whole Foods last August, stories about enforcement to tears fit with the Bezos ethos and reputation. Amazon is famous for pursuing growth and large-scale efficiencies, even when workers find the experiments torturous and when they don’t make a lot of sense to customers, either. If you receive a tiny item in a giant Amazon box, don’t worry. Your order is just one small piece in an efficiency jigsaw that’s too big and fast for any individual human to comprehend. If we view Amazon as a planned economy rather than just another market player, it all starts to make more sense: We’ll thank Jeff later, when the plan works. And indeed, with our dollars, we have.

In fact, to think of Amazon as a “market player” is a mischaracterization. The world’s biggest store doesn’t use suggested retail pricing; it sets its own. Book authors (to use a personal example) receive a distinctly lower royalty for Amazon sales because the site has the power to demand lower prices from publishers, who in turn pass on the tighter margins to writers. But for consumers, it works! Not only are books significantly cheaper on Amazon, the site also features a giant stock that can be shipped to you within two days, for free with Amazon Prime citizensh…er, membership. All 10 or so bookstores I frequented as a high school and college student have closed, yet our access to books has improved — at least as far as we seem to be able to measure. It’s hard to expect consumers to feel bad enough about that to change our behavior.

Although they attempt to grow in a single direction, planned economies always destroy as well as build. In the 1930s, the Soviet Union compelled the collectivization of kulaks, or prosperous peasants. Small farms were incorporated into a larger collective agricultural system. Depending on who you ask, dekulakization was literal genocide, comparable to the Holocaust, and/or it catapulted what had been a continent-sized expanse of peasants into a modern superpower. Amazon’s decimation of small businesses (bookstores in particular) is a similar sort of collectivization, purging small proprietors or driving them onto Amazon platforms. The process is decentralized and executed by the market rather than the state, but don’t get confused: Whether or not Bezos is banging on his desk, demanding the extermination of independent booksellers — though he probably is — these are top-down decisions to eliminate particular ways of life. (...)

Amazon has succeeded in large part because of the company’s uncommon drive to invest in growth. And today, not only are other companies slow to spend, so are governments. Austerity politics and decades of privatization put Amazon in a place to take over state functions. If localities can’t or won’t invest in jobs, then Bezos can get them to forgo tax dollars (and dignity) to host HQ2. There’s no reason governments couldn’t offer on-demand cloud computing services as a public utility, but instead the feds pay Amazon Web Services to host their sites. And if the government outsources health care for its population to insurers who insist on making profits, well, stay tuned. There’s no near-term natural end to Amazon’s growth, and by next year the company’s annual revenue should surpass the GDP of Vietnam. I don’t see any reason why Amazon won’t start building its own cities in the near future.

America never had to find out whether capitalism could compete with the Soviets plus 21st-century technology. Regardless, the idea that market competition can better set prices than algorithms and planning is now passé. Our economists used to scoff at the Soviets’ market-distorting subsidies; now Uber subsidizes every ride. Compared to the capitalists who are making their money by stripping the copper wiring from the American economy, the Bezos plan is efficient. So, with the exception of small business owners and managers, why wouldn’t we want to turn an increasing amount of our life-world over to Amazon? I have little doubt the company could, from a consumer perspective, improve upon the current public-private mess that is Obamacare, for example. Between the patchwork quilt of public- and private-sector scammers that run America today and “up and to the right,” life in the Amazon with Lex Luthor doesn’t look so bad. At least he has a plan, unlike some people.

From the perspective of the average consumer, it’s hard to beat Amazon. The single-minded focus on efficiency and growth has worked, and delivery convenience is perhaps the one area of American life that has kept up with our past expectations for the future. However, we do not make the passage from cradle to grave as mere average consumers. Take a look at package delivery, for example: Amazon’s latest disruptive announcement is “Shipping with Amazon,” a challenge to the USPS, from which Amazon has been conniving preferential rates. As a government agency bound to serve everyone, the Postal Service has had to accept all sorts of inefficiencies, like free delivery for rural customers or subsidized media distribution to realize freedom of the press. Amazon, on the other hand, is a private company that doesn’t really have to do anything it doesn’t want to do. In aggregate, as average consumers, we should be cheering. Maybe we are. But as members of a national community, I hope we stop to ask if efficiency is all we want from our delivery infrastructure. Lowering costs as far as possible sounds good until you remember that one of those costs is labor. One of those costs is us.

Earlier this month, Amazon was awarded two patents for a wristband system that would track the movement of warehouse employees’ hands in real time. It’s easy to see how this is a gain in efficiency: If the company can optimize employee movements, everything can be done faster and cheaper. It’s also easy to see how, for those workers, this is a significant step down the path into a dystopian hellworld. Amazon is a notoriously brutal, draining place to work, even at the executive levels. The fear used to be that if Amazon could elbow out all its competitors with low prices, it would then jack them up, Martin Shkreli style. That’s not what happened. Instead, Amazon and other monopsonists have used their power to drive wages and the labor share of production down. If you follow the Bezos strategy all the way, it doesn’t end in fully automated luxury communism or even Wall-E. It ends in The Matrix, with workers swaddled in a pod of perfect convenience and perfect exploitation. Central planning in its capitalist form turns people into another cost to be reduced as low as possible.

Just because a plan is efficient doesn’t mean it’s good. Postal Service employees are unionized; they have higher wages, paths for advancement, job stability, negotiated grievance procedures, health benefits, vacation time, etc. Amazon delivery drivers are not and do not. That difference counts as efficiency when we measure by price, and that is, to my mind, a very good argument for not handing the world over to the king of efficiency.

by Malcolm Harris, Medium |  Read more:

Salon to Ad Blockers: Can We Use Your Browser to Mine Cryptocurrency?

Salon.com has a new, cryptocurrency-driven strategy for making money when readers block ads. If you want to read Salon without seeing ads, you can do so—as long as you let the website use your spare computing power to mine some coins.

If you visit Salon with an ad blocker enabled, you might see a pop-up that asks you to disable the ad blocker or "Block ads by allowing Salon to use your unused computing power."

Salon explains what's going on in a new FAQ. "How does Salon make money by using my processing power?" the FAQ says. "We intend to use a small percentage of your spare processing power to contribute to the advancement of technological discovery, evolution, and innovation. For our beta program, we'll start by applying your processing power to help support the evolution and growth of blockchain technology and cryptocurrencies."

While that's a bit vague, a second Salon.com pop-up says that Salon is using Coinhive for "calculations [that] are securely executed in your browser's sandbox." The Coinhive pop-up on Salon.com provides the option to cancel or allow the mining to occur for one browser session. Clicking "more info" brings you to a Coinhive page.

We wrote about Coinhive in October 2017. Coinhive "harnesses the CPUs of millions of PCs to mine the Monero crypto currency. In turn, Coinhive gives participating sites a tiny cut of the relatively small proceeds."
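To make the mechanics concrete, here is a minimal, hypothetical sketch of what an opt-in, per-session Coinhive miner could look like on the publisher's side. The article does not show Salon's actual code, so treat everything below as an assumption: the site key, throttle value, and consent prompt are illustrative placeholders, and the CoinHive.Anonymous / start / stop calls follow Coinhive's publicly documented browser API rather than anything known to be in Salon's implementation.

```ts
// Hypothetical opt-in miner sketch. The global CoinHive object would be provided
// by Coinhive's own script tag on the page; this declaration only describes the
// small slice of its documented API used here.
declare const CoinHive: {
  Anonymous: new (siteKey: string, options?: { throttle?: number }) => {
    start(): void;
    stop(): void;
    isRunning(): boolean;
  };
};

const SITE_KEY = "EXAMPLE-SITE-KEY"; // placeholder, not a real Coinhive key

function startOptInMining(): void {
  // Mine only after the reader explicitly agrees, mirroring the pop-up flow
  // the article describes.
  const consent = window.confirm(
    "Suppress ads by letting this site use your unused computing power?"
  );
  if (!consent) {
    return;
  }

  // throttle: 0.3 asks the miner to leave roughly 30% of CPU time idle.
  const miner = new CoinHive.Anonymous(SITE_KEY, { throttle: 0.3 });
  miner.start();

  // Stop when the reader leaves the page, so consent covers one session only.
  window.addEventListener("beforeunload", () => {
    if (miner.isRunning()) {
      miner.stop();
    }
  });
}

startOptInMining();
```

One design note: the throttle option is what keeps a miner from consuming every available thread. A site that omits it, or sets it near zero, will push CPU usage toward the kind of readings described below.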

It really does use a lot of CPU power

I enabled the mining on Salon.com today in order to see how much computing power it used. In Chrome’s task manager, I got CPU readings of 426.7 percent and higher for a Salon tab.


The Chrome helper’s CPU use shot up to 499 percent on my 2016 MacBook Pro, a highly unusual total on my computer even for the Chrome browser. That’s out of a total of 800%, which accounts for four cores that each run two threads.


The bottom of my laptop started heating up a little, but the computer still worked normally otherwise. With that high Chrome usage, the Mac Activity Monitor said I had about 24 percent of my CPU power still idle. After I disabled Salon’s cryptocurrency mining, my idle CPU power went back up to a more typical 70 to 80 percent.

The computer I used for this experiment has a quad-core, Intel Core i7 Skylake processor. People with different computers will obviously get different results. While Salon's mining might not lock your computer up, I still wouldn't want it running in the background, especially if I were away from a power outlet.

Salon: No risk to security

On Salon, readers aren't forced into cryptocurrency mining because of the site's opt-in system. But in other cases, users have been unaware that Coinhive was being used on their systems. Researchers "from security firm Sucuri warned that at least 500 websites running the WordPress content management system alone had been hacked to run the Coinhive mining scripts," we wrote in the October 2017 article.

Cryptojacking continues to be a problem, as we've detailed in several additional articles, including one yesterday.

Users being caught unaware shouldn't happen at Salon, which makes it clear that readers don't have to opt in to the mining and says that users' security isn't compromised.

"This happens only when you are browsing Salon.com," the site's FAQ says. "Nothing is ever installed on your computer, and Salon never has access to your personal information or files."

Salon notes that ads allow the site to make money from readers without requiring them to pay for subscriptions.

by Jon Brodkin, Ars Technica |  Read more:
Images: Jon Brodkin

Monday, February 19, 2018

Chuck Berry

The Politics of Shame

Donald Trump is a bad president. But that’s not why we loathe him.

Indifference to the environment, the human cost of a tattered social safety net, and the risks attendant to reckless nuclear threats are hardly unique aspects of Trump’s presidency: they’re the American way. It’s certainly alarming that Trump has repealed common-sense environmental regulations, threatened social services, and withdrawn from the Iran deal. But those acts, which would also feature in a hypothetical Ted Cruz presidency, don’t explain the scale of the reaction to Trump. They don’t account for the existence of neologisms like “Trumpocalypse,” or tell us why late-night hosts and satirists are constantly inventing new, creative ways to mock POTUS’s weave.

The feature that makes Trump unique, and the focus of a particular kind of outrage and contempt, is not his policy prescriptions or even his several hundred thousand character failings. God knows plenty of presidents have been horrible people. What sets Trump apart is his shamelessness.

For example, Trump is not the first president or popular public figure to be accused of sexual assault—it’s a crowded field these days. But he was the first to adopt a “takes one to know one” defense—using his political opponent’s husband’s accusers as a human shield to deflect personal responsibility.

Instead of following the prescribed political ritual for making amends after being caught in flagrante, namely, a contrite press conference featuring a stiff if loyal wife, Trump chose to go on the offensive, even insisting that the infamous “pussy tape” must have been a fabrication. Trump established the pattern of “doubling down” early on when he refused to walk back his comment that John McCain’s capture and subsequent torture during the Vietnam War disqualified him from being considered a war hero. It seems attempts to shame Trump only provoke more shameful acts which fail to faze him.

Where other presidents have been cagey, Trump is brazen. He did not invent the Southern Strategy, but he was the first to employ it with so little discretion that the term “dog whistling” now feels too subtle. (Remember, the tiki-marchers shouting “Jew[s] will not replace us” contained among them some “very fine people.”) There is no shortage of vain politicians, but while John Edwards felt compelled to apologize for his $400 haircut, Trump flaunts his saffron pompadour and matching face. Nepotism may be as old as the Borgias, but the boldness with which Trump has appointed family members and their agents to positions of authority still manages to stun. And while nuclear brinksmanship was a defining feature of 20th century presidencies, never before has the “leader of the free world” literally bragged about the size of his big red button and attempted to fat-shame the leader of a rival nuclear power.

Even when it appears as if Trump is on the verge of an apology or admission, he quickly lapses back into shamelessness. When Trump was criticized for lamenting violence on “many sides” following Heather Heyer’s murder in Charlottesville, Trump was pressured by advisers into releasing a statement explicitly condemning neo-Nazis. But he soon walked it back, once again blaming “both sides” and the “alt-left” for being “very violent.” (Again, remember which side featured a white supremacist who killed a woman.)

This impudence, this shamelessness, is essentially Trump’s calling card. And those who object to it have often sought to restore the balance by trying even harder to shame him, or, in the alternative, by trying to shame his followers into acting like reasonable human beings. “Shame on all of us for making Donald Trump a Thing,” wrote conservative writer Pascal-Emmanuel Gobry back in 2015. Throngs of protestors chanted “shame, shame, shame” along Trump’s motorcade route after his “fine people” remark. The Guardian’s Jessica Valenti wrote that shaming is both justified and “necessary” because “there are people right now who should be made to feel uncomfortable” because “what they have done is shameful.”

MSNBC host Joy Ann Reid has also sought to shame Trump voters, for example by tweeting: “Last November, 63 million of you voted to pretty much hand this country over to a few uber wealthy families and the religious far right. Well done.” Washington Monthly contributor David Atkins has echoed this sentiment, tweeting: “Good news white working class! Your taxes will go up, your Medicare will be cut and your kid’s student loans will be more expensive. But at least Don Jr can bring back elephant trunks on his tax deductible private jet, so it’s all good.”

“How could he/they!?” is a popular way to start sentences about either Trump or his supporters. The statistic that 53% of white women voted for Trump (how could they??) is a useful tool both for shaming others and the self-flagellation-cum-virtue-signaling characteristic of some white women who “knew better.” Even suggesting that politicians talk to Trump voters is grounds for ridicule. Forget scarlet letters—nothing short of community expulsion will do. They’re “irredeemable” after all. So why bother “reaching out”?

Believe me, I empathize. Trump’s policies hurt people, and the people who voted for him did so willingly. Given the easy-to-anticipate consequences of their votes, Trump voters do seem like bad people who should be ashamed. We’re often encouraged to engage more civilly with “people who disagree with us,” but the divergent value systems reflected by America’s two major political parties cut to the core of who we are. They are not necessarily mere disagreements, but deep moral schisms, which is why commentators like Valenti insist that a high level of outrage is appropriate to the circumstances. If you’re not outraged, you’re not taking seriously enough the harm done to the immigrant families torn apart by ICE. Mere fact-based criticisms of various policy positions feel inadequate, as if they trivialize the moral issues involved. It seems important to add that various beliefs, themselves, are shameful. No wonder, then, that the shared impulse isn’t just to disagree, but to “drag,” destroy, and decimate.

Given what’s at stake, I understand why shaming feels not only appropriate, but compulsory. It’s an inclination I share and sympathize with.

But in practice, I think it’s a mistake.

by Briahna Joy Gray, Current Affairs |  Read more:
Image: Tyler Rubenfeld

Freestyle Skier's Complex Path Offers Olympic Rorschach Test

The words left Liz Swaney's lips without an ounce of irony. No telling curl of the lips. No wink. Nothing. She meant them. All of them.

"I didn't qualify for finals so I'm really disappointed," the 33-year-old Californian said after coming in last in the 24-woman field during Olympic women's halfpipe qualifying on Monday.

She seemed ... surprised.

Even though her score of 31.40 was more than 40 points behind France's Anais Caradeux, whose 72.80 marked the lowest of the 12 skiers to move on to Tuesday's medal round.

Even though Swaney finished in about the same position in each of the dozen events she competed in across the globe over the last four years in the run-up to the Pyeongchang Games.

Even though her two qualifying runs at Phoenix Snow Park featured little more than Swaney riding up the halfpipe wall before turning around in the air and skiing to the other side. It was a sequence she repeated a handful of times before capping her final trip with a pair of "alley oops", basically inward 180 degree turns more fitting for the local slopes than the world's largest sporting event.

Halfpipe is judged on a 100-point scale. Swaney has yet to break 40 in an FIS-sanctioned competition, not because she regularly wipes out trying to throw difficult tricks but because she doesn't even try them.

Yet she’s here in South Korea anyway as part of the Hungarian delegation, the latest in a series of quixotic pursuits that range from running for governor of California as a 19-year-old student at Berkeley to trying out for the Oakland Raiders cheerleading team to mounting a push to reach the Olympics as a skeleton racer for Venezuela. She only started skiing eight years ago and only got serious about it after the skeleton thing didn’t take.

"I still want to inspire people to get involved with athletics or a new sport or a new challenge at any age in life," she said.

A tale that's hardly new, though Swaney's unusual path offers a Rorschach test of sorts on what the Olympics actually mean.

The games have long trafficked in the soft-focus narrative of plucky dreamers with no shot. Think Eddie “The Eagle” Edwards, the bespectacled British ski jumper; the Jamaican bobsled team; or the relentlessly shirtless Tongan Olympian Pita Taufatofua, who came in 114th out of 116 skiers in the 15-kilometer cross-country race last week.

Korea entered 19-year-old Kyoungeun Kim in women's aerials earlier in the Games, the first skier from the host country to take on the event at the Olympics. Kim came in dead last while going off the smallest of the five "kicker" ramps used at the Games, the back flip she completed in the second round akin to something a teenager might do off a diving board at the neighborhood pool.

Taufatofua and Edwards embraced the uniqueness of their stories, fully allowing they were simply happy to be at the Olympics and nothing more. Kim represented Korea's first foray into the sport. All three of them competed for the countries they were born in.

Swaney's story is more complicated.

Let's get this out of the way early: she did nothing illegal to get here. She racked up the required FIS points to reach the Olympic standard. She went through the necessary hoops to join Team Hungary, the connection coming from her Hungarian maternal grandfather, who she said would have turned 100 on Tuesday. She's spent more than her fair share of money hopscotching continents chasing a dream she says was hatched watching the 1992 Games.

It was not easy and it was not cheap. Yet she kept at it. Keeping at it is kind of her thing. No matter how you try to frame the questions, the answers come back the same. She swears this isn't a publicity stunt. This is real.

"I'm trying to soak in the Olympic experience but also focusing on the halfpipe here and trying to go higher each time and getting more spins in," said Swaney, who wore bib No. 23 and stars-and-stripes goggles not as some sort of statement but basically because they were the least expensive in the athlete's store.

One problem. Swaney doesn't go very high. She doesn't spin very much.

Canadian Cassie Sharpe packed more twists into the first two tricks of her qualifying-topping run than Swaney did all day. In an event making its second Olympic appearance, one focused on progression and pushing the edge, Swaney's tentative, decidedly grounded trips down the pipe play in stark contrast to everyone else.

The English language announcer stayed largely quiet during Swaney's second run because, well, there wasn't much to describe. As Lady Gaga blared over the speakers, the crowd watched in silence for sparse applause at the end. The judges awarded her for doing back-to-back "alley oops," with the American judge even giving her a 33. It was better than her first, but it was nowhere near world class.

Swaney is here because she earned her way in.

Still, it leads to the inevitable question: should she be?

by Will Graves, AP |  Read more:
Image: Getty via Yahoo

Sunday, February 18, 2018


Nicole McCormick Santiago, Blue Room
via:

“Fuck You, I Like Guns.”

America, can we talk? Let’s just cut the shit for once and actually talk about what’s going on without blustering and pretending we’re actually doing a good job at adulting as a country right now. We’re not. We’re really screwing this whole society thing up, and we have to do better. We don’t have a choice. People are dying. At this rate, it’s not if your kids, or mine, are involved in a school shooting, it’s when. One of these happens every 60 hours on average in the US. If you think it can’t affect you, you’re wrong. Dead wrong. So let’s talk.

I’ll start. I’m an Army veteran. I like M-4s, which are, for all practical purposes, AR-15s, just with a few extra features that people almost never use anyway. I’d say at least 70% of my formal weapons training is on that exact rifle, with the other 30% being split between various and sundry machine guns and grenade launchers. My experience is pretty representative of soldiers of my era. Most of us are really good with an M-4, and most of us like it at least reasonably well, because it is an objectively good rifle. I was good with an M-4, really good. I earned the Expert badge every time I went to the range, starting in Basic Training. This isn’t uncommon. I can name dozens of other soldiers/veterans I know personally who can say the exact same thing. This rifle is surprisingly easy to use, completely idiot-proof really, has next to no recoil, comes apart and cleans up like a dream, and is light to carry around. I’m probably more accurate with it than I would be with pretty much any other weapon in existence. I like this rifle a lot. I like marksmanship as a sport. When I was in the military, I enjoyed combining these two things as often as they’d let me.

With all that said, enough is enough. My knee jerk reaction is to consider weapons like the AR-15 no big deal because it is my default setting. It’s where my training lies. It is my normal, because I learned how to fire a rifle IN THE ARMY. You know, while I may only have shot plastic targets on the ranges of Texas, Georgia, and Missouri, that’s not what those weapons were designed for, and those targets weren’t shaped like deer. They were shaped like people. Sometimes we even put little hats on them. You learn to take a gut shot, “center mass”, because it’s a bigger target than the head, and also because if you maim the enemy soldier rather than killing him cleanly, more of his buddies will come out and get him, and you can shoot them, too. He’ll die of those injuries, but it’ll take him a while, giving you the chance to pick off as many of his compadres as you can. That’s how my Drill Sergeant explained it anyway. I’m sure there are many schools of thought on it. The fact is, though, when I went through my marksmanship training in the US Army, I was not learning how to be a competition shooter in the Olympics, or a good hunter. I was being taught how to kill people as efficiently as possible, and that was never a secret.

As an avowed pacifist now, it turns my stomach to even type the above words, but can you refute them? I can’t. Every weapon that a US Army soldier uses has the express purpose of killing human beings. That is what they are made for. The choice rifle for years has been some variant of what civilians are sold as an AR-15. Whether it was an M-4 or an M-16 matters little. The function is the same, and so is the purpose. These are not deer rifles. They are not target rifles. They are people killing rifles. Let’s stop pretending they’re not.

With this in mind, is anybody surprised that nearly every mass shooter in recent US history has used an AR-15 to commit their crime? And why wouldn’t they? High capacity magazine, ease of loading and unloading, almost no recoil, really accurate even without a scope, but numerous scopes available for high precision, great from a distance or up close, easy to carry, and readily available. You can buy one at Wal-Mart, or just about any sports store, and since they’re long guns, I don’t believe you have to be any more than 18 years old with a valid ID. This rifle was made for the modern mass shooter, especially the young one. If he could custom design a weapon to suit his sinister purposes, he couldn’t do a better job than Armalite did with this one already.

This rifle is so deadly and so easy to use that no civilian should be able to get their hands on one. We simply don’t need these things in society at large. I always find it interesting that when I was in the Army, and part of my job was to be incredibly proficient with this exact weapon, I never carried one at any point in garrison other than at the range. Our rifles lived in the arms room, cleaned and oiled, ready for the next range day or deployment. We didn’t carry them around just because we liked them. We didn’t bluster on about barracks defense and our second amendment rights. We tucked our rifles away in the arms room until the next time we needed them, just as it had been done since the Army’s inception. The military police protected us from threats in garrison. They had 9 mm Berettas to carry. They were the only soldiers who carried weapons in garrison. We trusted them to protect us, and they delivered. With notably rare exceptions, this system has worked well. There are fewer shootings on Army posts than in society in general, probably because soldiers are actively discouraged from walking around with rifles, despite being impeccably well trained with them. Perchance, we could have the largely untrained civilian population take a page from that book?

I understand that people want to be able to own guns. That’s ok. We just need to really think about how we’re managing this. Yes, we have to manage it, just as we manage car ownership. People have to get a license to operate a car, and if you operate a car without a license, you’re going to get in trouble for that. We manage all things in society that can pose a danger to other people by their misuse. In addition to cars, we manage drugs, alcohol, exotic animals (there are certain zip codes where you can’t own Serval cats, for example), and fireworks, among other things. We restrict what types of businesses can operate in which zones of the city or county. We have a whole system of permitting for just about any activity a person wants to conduct since those activities could affect others, and we realize, as a society, that we need to try to minimize the risk to other people that comes from the chosen activities of those around them in which they have no say. Gun ownership is the one thing our country collectively refuses to manage, and the result is a lot of dead people.

I can’t drive a Formula One car to work. It would be really cool to be able to do that, and I could probably cut my commute time by a lot. Hey, I’m a good driver, a responsible Formula One owner. You shouldn’t be scared to be on the freeway next to me as I zip around you at 140 MPH, leaving your Mazda in a cloud of dust! Why are you scared? Cars don’t kill people. People kill people. Doesn’t this sound like bullshit? It is bullshit, and everybody knows it. Not one person I know would argue non-ironically that Formula One cars on the freeway are a good idea. Yet, these same people will say it’s totally ok to own the firearm equivalent because, in the words of comedian Jim Jefferies, “fuck you, I like guns”.

Yes, yes, I hear you now. We have a second amendment to the constitution, which must be held sacrosanct over all other amendments. Dude. No. The constitution was made to be a malleable document. It’s intentionally vague. We can enact gun control without infringing on the right to bear arms. You can have your deer rifle. You can have your shotgun that you love to shoot clay pigeons with. You can have your target pistol. Get a license. Get a training course. Recertify at a predetermined interval. You do not need a military grade rifle. You don’t. There’s no excuse.

“But we’re supposed to protect against tyranny! I need the same weapons the military would come at me with!” Dude. You know where I can get an Apache helicopter and a Paladin?! Hook a girl up! Seriously, though, do you really think you’d be able to hold off the government with an individual level weapon? Because you wouldn’t. One grenade, and you’re toast. Don’t have these illusions of standing up to the government, and needing military style rifles for that purpose. You’re not going to stand up to the government with this thing. They’d take you out in about half a second.

Let’s be honest. You just want a cool toy, and for the vast majority of people, that’s all an AR-15 is.

by Anna, EPSAAS |  Read more:
Image: uncredited
[ed. See also: America is Under Attack and the President Doesn't Care... nor do Republicans if it'll help in the next election.]

The Tyranny of Convenience

Convenience is the most underestimated and least understood force in the world today. As a driver of human decisions, it may not offer the illicit thrill of Freud’s unconscious sexual desires or the mathematical elegance of the economist’s incentives. Convenience is boring. But boring is not the same thing as trivial.

In the developed nations of the 21st century, convenience — that is, more efficient and easier ways of doing personal tasks — has emerged as perhaps the most powerful force shaping our individual lives and our economies. This is particularly true in America, where, despite all the paeans to freedom and individuality, one sometimes wonders whether convenience is in fact the supreme value.

As Evan Williams, a co-founder of Twitter, recently put it, “Convenience decides everything.” Convenience seems to make our decisions for us, trumping what we like to imagine are our true preferences. (I prefer to brew my coffee, but Starbucks instant is so convenient I hardly ever do what I “prefer.”) Easy is better, easiest is best.

Convenience has the ability to make other options unthinkable. Once you have used a washing machine, laundering clothes by hand seems irrational, even if it might be cheaper. After you have experienced streaming television, waiting to see a show at a prescribed hour seems silly, even a little undignified. To resist convenience — not to own a cellphone, not to use Google — has come to require a special kind of dedication that is often taken for eccentricity, if not fanaticism.

For all its influence as a shaper of individual decisions, the greater power of convenience may arise from decisions made in aggregate, where it is doing so much to structure the modern economy. Particularly in tech-related industries, the battle for convenience is the battle for industry dominance.

Americans say they prize competition, a proliferation of choices, the little guy. Yet our taste for convenience begets more convenience, through a combination of the economics of scale and the power of habit. The easier it is to use Amazon, the more powerful Amazon becomes — and thus the easier it becomes to use Amazon. Convenience and monopoly seem to be natural bedfellows.

Given the growth of convenience — as an ideal, as a value, as a way of life — it is worth asking what our fixation with it is doing to us and to our country. I don’t want to suggest that convenience is a force for evil. Making things easier isn’t wicked. On the contrary, it often opens up possibilities that once seemed too onerous to contemplate, and it typically makes life less arduous, especially for those most vulnerable to life’s drudgeries.

But we err in presuming convenience is always good, for it has a complex relationship with other ideals that we hold dear. Though understood and promoted as an instrument of liberation, convenience has a dark side. With its promise of smooth, effortless efficiency, it threatens to erase the sort of struggles and challenges that help give meaning to life. Created to free us, it can become a constraint on what we are willing to do, and thus in a subtle way it can enslave us.

It would be perverse to embrace inconvenience as a general rule. But when we let convenience decide everything, we surrender too much.

Convenience as we now know it is a product of the late 19th and early 20th centuries, when labor-saving devices for the home were invented and marketed. Milestones include the invention of the first “convenience foods,” such as canned pork and beans and Quaker Quick Oats; the first electric clothes-washing machines; cleaning products like Old Dutch scouring powder; and other marvels including the electric vacuum cleaner, instant cake mix and the microwave oven.

Convenience was the household version of another late-19th-century idea, industrial efficiency, and its accompanying “scientific management.” It represented the adaptation of the ethos of the factory to domestic life.

However mundane it seems now, convenience, the great liberator of humankind from labor, was a utopian ideal. By saving time and eliminating drudgery, it would create the possibility of leisure. And with leisure would come the possibility of devoting time to learning, hobbies or whatever else might really matter to us. Convenience would make available to the general population the kind of freedom for self-cultivation once available only to the aristocracy. In this way convenience would also be the great leveler.

This idea — convenience as liberation — could be intoxicating. Its headiest depictions are in the science fiction and futurist imaginings of the mid-20th century. From serious magazines like Popular Mechanics and from goofy entertainments like “The Jetsons” we learned that life in the future would be perfectly convenient. Food would be prepared with the push of a button. Moving sidewalks would do away with the annoyance of walking. Clothes would clean themselves or perhaps self-destruct after a day’s wearing. The end of the struggle for existence could at last be contemplated.

The dream of convenience is premised on the nightmare of physical work. But is physical work always a nightmare? Do we really want to be emancipated from all of it? Perhaps our humanity is sometimes expressed in inconvenient actions and time-consuming pursuits. Perhaps this is why, with every advance of convenience, there have always been those who resist it. They resist out of stubbornness, yes (and because they have the luxury to do so), but also because they see a threat to their sense of who they are, to their feeling of control over things that matter to them.

By the late 1960s, the first convenience revolution had begun to sputter. The prospect of total convenience no longer seemed like society’s greatest aspiration. Convenience meant conformity. The counterculture was about people’s need to express themselves, to fulfill their individual potential, to live in harmony with nature rather than constantly seeking to overcome its nuisances. Playing the guitar was not convenient. Neither was growing one’s own vegetables or fixing one’s own motorcycle. But such things were seen to have value nevertheless — or rather, as a result. People were looking for individuality again.

Perhaps it was inevitable, then, that the second wave of convenience technologies — the period we are living in — would co-opt this ideal. It would conveniencize individuality.

You might date the beginning of this period to the advent of the Sony Walkman in 1979. With the Walkman we can see a subtle but fundamental shift in the ideology of convenience. If the first convenience revolution promised to make life and work easier for you, the second promised to make it easier to be you. The new technologies were catalysts of selfhood. They conferred efficiency on self-expression.

Consider the man of the early 1980s, strolling down the street with his Walkman and earphones. He is enclosed in an acoustic environment of his choosing. He is enjoying, out in public, the kind of self-expression he once could experience only in his private den. A new technology is making it easier for him to show who he is, if only to himself. He struts around the world, the star of his own movie.

So alluring is this vision that it has come to dominate our existence. Most of the powerful and important technologies created over the past few decades deliver convenience in the service of personalization and individuality. Think of the VCR, the playlist, the Facebook page, the Instagram account. This kind of convenience is no longer about saving physical labor — many of us don’t do much of that anyway. It is about minimizing the mental resources, the mental exertion, required to choose among the options that express ourselves. Convenience is one-click, one-stop shopping, the seamless experience of “plug and play.” The ideal is personal preference with no effort. (...)

I do not want to deny that making things easier can serve us in important ways, giving us many choices (of restaurants, taxi services, open-source encyclopedias) where we used to have only a few or none. But being a person is only partly about having and exercising choices. It is also about how we face up to situations that are thrust upon us, about overcoming worthy challenges and finishing difficult tasks — the struggles that help make us who we are. What happens to human experience when so many obstacles and impediments and requirements and preparations have been removed?

Today’s cult of convenience fails to acknowledge that difficulty is a constitutive feature of human experience. Convenience is all destination and no journey. But climbing a mountain is different from taking the tram to the top, even if you end up at the same place. We are becoming people who care mainly or only about outcomes. We are at risk of making most of our life experiences a series of trolley rides.

by Tim Wu, NY Times |  Read more:
Image: Hudson Christie
[ed. This reminds me of an earlier post (The Philosophy of the Midlife Crisis) and the concept of "telic" and "atelic" activities: the "distinction between “incomplete” and “complete” activities. Building yourself a house is an incomplete activity, because its end goal—living in the finished house—is not something you can experience while you are building it. Building a house and living in it are fundamentally different things. By contrast, taking a walk in the woods is a complete activity: by walking, you are doing the very thing you wish to do. The first kind of activity is “telic”—that is, directed toward an end, or telos. The second kind is “atelic”: something you do for its own sake."]

Saturday, February 17, 2018

The State of Informed Bewilderment

The question that I’ve been asking myself for a long time is, what kind of framing should we have for the dilemmas posed by the technology we’re living through at the moment? I’m interested in information technology, ranging widely from digital technology and the Internet on one hand to artificial intelligence, both weak and strong, on the other hand. As we live through the changes and the disturbances that this technology brings, we’re in a state of mind that was once admirably characterized by Manuel Castells as "informed bewilderment," which was an expression I liked.

We’re informed because we are intensely curious about what’s going on. We're not short of information about it. We endlessly speculate and investigate it in various ways. Manuel’s point was that we actually don’t understand what it means—that’s what he meant by bewilderment. That’s a very good way of describing where we are. The question I have constantly on my mind is, are there frames that would help us to make sense of this in some way?

One of the frames that I’ve explored for a long time is the idea of trying to take a long view of these things. My feeling is that one of our besetting sins at the moment, in relation for example to digital technology, is what Michael Mann once described as the sociology of the last five minutes. I’m constantly trying to escape from that. I write a newspaper column every week, and I've written a couple of books about this stuff. If you wanted to find a way of describing what I try to do, it is trying to escape from the sociology of the last five minutes.

In relation to the Internet and the changes it has already brought in our society, my feeling is that although we don’t know really where it’s heading because it’s too early in the change, we’ve had one stroke of luck. The stroke of luck was that, as a species, we’ve conducted this experiment once before. We’re living through a transformation of our information environment. This happened once before, and we know quite a lot about it. It was kicked off in 1455 by Johannes Gutenberg and his invention of printing by movable type.

In the centuries that followed, that invention not only transformed humanity’s information environment, it also led to colossal changes in society and the world. You could say that what Gutenberg kicked off was a world in which we were all born. Even now, it’s the world in which most of us were shaped. That’s changing for younger generations, but that’s the case for people like me.

Why is Gutenberg useful? He’s useful because he instills in us a sense of humility. The way I’ve come to explain that is with a thought experiment which I often use in talks and lectures. The thought experiment goes like this:
I want you to imagine that we’re back in Mainz, the small town on the Rhine where Gutenberg's press was established. The date is around 1476 or ’78, and you’re working for the medieval version of Gallup or MORI Pollsters. You’ve got a clipboard in your hand and you’re stopping people and saying, "Excuse me, madam, would you mind if I asked you some questions?" And here’s question four: "On a scale of 1 to 5, where 1 is definitely yes and 5 is definitely no, do you think that the invention of printing by movable type will A) undermine the authority of the Catholic Church, B) trigger and fuel a Protestant Reformation, C) enable the rise of something called modern science, D) enable the creation of entirely undreamed of and unprecedented professions, occupations, industries, and E) change our conception of childhood?"
That’s a thought experiment, and the reason you want to do it is because nobody in Mainz in, say, 1478 had any idea that what Gutenberg had done in his workshop would have these effects, and yet we know now that it had all of those effects and many more. The point of the thought experiment is, as I said, to induce a sense of humility. I chose that day in 1478 because we’re about the same distance into the revolution we’re now living through. And for anybody therefore to claim confidently that they know what it means and where it’s heading, I think that’s foolish. That’s my idea of trying to get some kind of perspective on it. It makes sense to take the long view of the present in which we are enmeshed. (...)

I’m obsessed with the idea of longer views of things. In the area I know, which is information technology, the speed with which stuff appears to change has clearly outdistanced the capacity of our social institutions to adapt. They need longer and they’re not getting it.

A historian will say that’s always been the case, and maybe that’s true. I just don’t know. If you’re a cybernetician looking at this, cybernetics has the idea of a viable system. A viable system is one that can handle the complexity of its environment. For a system to be viable, there are only two strategies: either you reduce the complexity of the environment the system has to deal with, or you increase the system’s own internal variety so that it can match that complexity. Reducing the environment’s complexity, broadly speaking, has been the way we’ve managed it in the past.

For example, mass production—the standardization of objects and production processes—was a way of reducing the infinite variety of human tastes. Henry Ford started it with the Model T by saying, "You can have any color as long as it’s black." As manufacturing technology—the business of making physical things—became more and more sophisticated, then the industrial system became quite good at widening the range of choice available, and therefore coping with greater levels of variety.

How many different models does Mercedes make? I don't know. Every time I see a Mercedes car, it’s got a different number on it. I used to think Mercedes made maybe twenty cars. My hunch is that they make probably several hundred varieties of particular cars. The same is true for Volkswagen, etc. Because manufacturing became so efficient, it was able to widen the range of choice.

Fundamentally, mass production was a way of coping with reducing the variety that the system had to deal with. Universities are the same. The way they coped with the infinite range of things that people might want to learn about was to essentially say, “You can do this course or you can do that course. We have a curriculum. We have a set of options. We have majors and minor subjects.” We then compress the infinite variety that they might have to deal with into much smaller amounts.

Most of our institutions, the ones that still govern our societies and indeed our industries, evolved in an era when the variety of their information environment was much smaller than it is now. Because of the Internet and related technologies, our information environment is orders of magnitude more complex than institutions had to deal with even fifty years ago, certainly seventy years ago. And what that means in effect is that in this new environment, a lot of our institutions are probably not viable in the cybernetic sense. They simply can’t manage the complexity they have to deal with now.

The question for society and for everybody else is, what happens then? How will these institutions evolve? Will they evolve at all? One metaphor that I have used for thinking about this is that of ecosystems. In other words, we now live in an information ecosystem. If you’re a scientist who studies natural ecosystems, then you can rank them in terms of complexity.

For example, at one level you could say that we have moved from an information environment that was a simple ecosystem, rather like a desert, to one that is now much closer to a rainforest. It is characterized by much more diversity, by a much higher density of publishers and free agents, by more interactions between them, and by the speed with which they evolve and change. Most of our social institutions have not evolved to deal with this metaphorical rainforest, in which case we can expect painful changes in institutions over the next fifty to a hundred years as they have to reshape in order to stay viable. Universities are suffering from that already.

by John Naughton, Edge | Read more:
Image: uncredited