Friday, February 23, 2018

YouTube Serves Ads with CPU-Draining Cryptocurrency Miners

YouTube was recently caught displaying ads that covertly leech off visitors' CPUs and electricity to generate digital currency on behalf of anonymous attackers, it was widely reported. (...)

The ads contain JavaScript that mines the digital coin known as Monero. In nine out of 10 cases, the ads will use publicly available JavaScript provided by Coinhive, a cryptocurrency-mining service that's controversial because it allows subscribers to profit by surreptitiously using other people's computers. The remaining 10 percent of the time, the YouTube ads use private mining JavaScript that saves the attackers the 30 percent cut Coinhive takes. Both scripts are programmed to consume 80 percent of a visitor's CPU, leaving just barely enough resources for the machine to function.
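The "80 percent of a visitor's CPU" figure describes a duty-cycle throttle: the script does work for a fixed fraction of each short time slice and sleeps for the remainder, keeping average load near the target. Here is a minimal Python sketch of that general technique; the function name and parameters are hypothetical, and this is not Coinhive's actual code, which is JavaScript running hash computations in the browser.

```python
import time

def run_throttled(task, target_load=0.8, slice_s=0.05, n_slices=4):
    """Duty-cycle throttle: within each time slice, call `task` repeatedly
    for `target_load` of the slice, then sleep for the rest.

    Returns the number of task invocations (a stand-in for hashes computed).
    A real browser miner would make `task` one round of hashing work.
    """
    calls = 0
    work_s = slice_s * target_load      # e.g. 40 ms of work ...
    idle_s = slice_s - work_s           # ... then 10 ms idle per 50 ms slice
    for _ in range(n_slices):
        deadline = time.monotonic() + work_s
        while time.monotonic() < deadline:
            task()
            calls += 1
        time.sleep(idle_s)
    return calls
```

This is also why ad-blockers and AV tools can spot such scripts behaviorally: sustained, metronomic CPU load from a background page is a distinctive signature.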

"YouTube was likely targeted because users are typically on the site for an extended period of time," independent security researcher Troy Mursch told Ars. "This is a prime target for cryptojacking malware, because the longer the users are mining for cryptocurrency the more money is made." Mursch said a campaign from September that used the Showtime website to deliver cryptocurrency-mining ads is another example of attackers targeting a video site. (...)

As the problem of Web-based cryptomining has surged to almost epidemic proportions, a variety of AV programs have started warning of cryptocurrency-mining scripts hosted on websites and giving users the option of blocking the activity. While drive-by cryptocurrency mining is an abuse that drains visitors' electricity and computing resources, there's no indication that it installs ransomware or other types of malware, as long as people don't click on malicious downloads.

by Dan Goodin, Ars Technica |  Read more:
Image: Diego Betto
[ed. I didn't catch this when it first came out, but cryptomining is apparently quite the 'thing' these days thanks to Coinhive.]

Christian Schloe

Boz Scaggs

Inside the OED

In February 2009, a Twitter user called @popelizbet issued an apparently historic challenge to someone called Colin: she asked if he could “mansplain” a concept to her. History has not recorded if he did, indeed, proceed to mansplain. But the lexicographer Bernadette Paton, who excavated this exchange last summer, believed it was the first time anyone had used the word in recorded form. “It’s been deleted since, but we caught it,” Paton told me, with quiet satisfaction.

In her office at Oxford University Press, Paton was drafting a brand new entry for the Oxford English Dictionary. Also in her in-tray when I visited were the millennial-tinged usage of “snowflake”, which she had hunted down to a Christian text from 1983 (“You are a snowflake. There are no two of you alike”), and new shadings of the compound “self-made woman”. Around 30,000 such items are on the OED master list; another 7,000 pile up annually. “Everyone thinks we’re very slow, but it’s actually rather fast,” Paton said. “Though admittedly a colleague did spend a year revising ‘go’”.

Spending 12 months tracing the history of a two-letter word seems dangerously close to folly. But the purpose of a historical dictionary such as the OED is to give such questions the solemnity they deserve. An Oxford lexicographer might need to snoop on Twitter spats from a decade ago; or they might have to piece together a painstaking biography of one of the oldest verbs in the language (the revised entry for “go” traces 537 separate senses over 1,000 years). “Well, we have to get things right,” the dictionary’s current chief editor, Michael Proffitt, told me.

At one level, few things are simpler than a dictionary: a list of the words people use or have used, with an explanation of what those words mean, or have meant. At the level that matters, though – the level that lexicographers fret and obsess about – few things could be more complex. Who used those words, where and when? How do you know? Which words do you include, and on what basis? How do you tease apart this sense from that? And what is “English” anyway?

In the case of a dictionary such as the OED – which claims to provide a “definitive” record of every single word in the language from 1000AD to the present day – the question is even larger: can a living language be comprehensively mapped, surveyed and described? Speaking to lexicographers makes one wary of using the word “literally”, but a definitive dictionary is, literally, impossible. No sooner have you reached the summit of the mountain than it has expanded another hundred feet. Then you realise it’s not even one mountain, but an interlocking series of ranges marching across the Earth. (In the age of “global English”, the metaphor seems apt.)

Even so, the quest to capture “the meaning of everything” – as the writer Simon Winchester described it in his book on the history of the OED – has absorbed generations of lexicographers, from the Victorian worthies who set up a “Committee to collect unregistered words in English” to the OED’s first proper editor, the indefatigable James Murray, who spent 36 years shepherding the first edition towards publication (before it killed him). The dream of the perfect dictionary goes back to the Enlightenment notion that by classifying and regulating language one could – just perhaps – distil the essence of human thought. In 1747, in his “Plan” for the English dictionary that he was about to commence, Samuel Johnson declared he would create nothing less than “a dictionary by which the pronunciation of our language may be fixed, and its attainment facilitated; by which its purity may be preserved, its use ascertained, and its duration lengthened”. English would not be merely listed in alphabetical order; it would be saved for eternity.

Ninety years after the first edition appeared, the OED – a distant, far bulkier descendant of Johnson’s Dictionary – is currently embarked on a third edition, a goliath project that involves overhauling every entry (many of which have not been touched since the late-Victorian era) and adding at least some of those 30,000 missing words, as well as making the dictionary into a fully digital resource. This was originally meant to be completed in 2000, then 2005, then 2010. Since then, OUP has quietly dropped mentions of a date. How far had they got, I asked Proffitt. “About 48%,” he replied.

The dictionary retains a quiet pride in the lexical lengths to which it will – indeed, must – go. Some time in the late 1980s, Proffitt’s predecessor as chief editor, John Simpson, asked the poet Benjamin Zephaniah about the origins of the noun “skanking”. Zephaniah decided that the only way to explain was to come to OED headquarters and do a private, one-on-one performance. Skanking duly went in, defined as “a style of West Indian dancing to reggae music, in which the body bends forward at the waist, and the knees are raised and the hands claw the air in time to the beat”.

The tale touches something profound: in capturing a word, a sliver of lived experience can be observed and defined. If only you were able to catch all the words, perhaps you could define existence.

The first English dictionary-makers had no fantasies about capturing an entire culture. In contrast to languages such as Chinese and ancient Greek, where systematic, dictionary-like works have existed for millennia, the earliest English lexicons didn’t begin to be assembled until the 16th century. They were piecemeal affairs, as befitted the language’s mongrel inheritance – a jumbled stew of old Anglo-Germanic, Norse, Latin and Greek, and Norman French.

The language was perplexing enough, but in the mid-1500s it was getting ever more confusing, as political upheavals and colonial trade brought fresh waves of immigration, and with it a babel of recently “Englished” vocabulary: words such as “alcohol” (Arabic via Latin, c1543) and “abandonment” (French, c1593). Scientific and medical developments added to the chaos. In 1582, the schoolmaster Richard Mulcaster issued a frantic plea for someone to “gather all the wordes which we use in our English tung … into one dictionarie”. Such a book would stabilise spelling, a source of violent disagreement. Also, there would finally be rules for “proper use”.

In 1604, a clergyman named Robert Cawdrey attempted a stopgap solution: a slender book entitled A Table Alphabeticall. Aimed at “Ladies, gentlewomen and other unskillful persons”, it listed approximately 2,500 “hard usuall words”, less than 5% of the lexis in use at the time. Definitions were vague – “diet” is described as “manner of foode” – and there were no illustrative quotations, still less any attempt at etymology. A Table Alphabeticall was so far from being completist that there weren’t even entries for the letter W.

Lexicographers kept trying to do better – and mostly kept failing. A new “word book” edited by John Bullokar appeared in 1616 (5,000 words); another by Henry Cockeram in 1623 (8,000 words and the first to call itself a “dictionary”); yet another by Thomas Blount in 1656 (11,000 words). But no one could seem to capture “all the wordes” in English, still less agree on what those words meant. The language was expanding more rapidly than ever. Where would you even start?

by Andrew Dickson, The Guardian |  Read more:
Image: Alamy

Thursday, February 22, 2018

The Two Words That Will Help Get an Airline Upgrade Over the Phone

"We have never bought an upper-class seat; if ever we’ve flown anywhere up front, we’ve used miles to upgrade from economy. If you want to do that, call reservations and drop the name “revenue management.” The reason is that revenue management’s job is to make sure a flight is profitable, so they’re the ones telling [reservation agents] what they can say; they’re like Flying Club’s boss. Not everyone knows that this department exists, and by mentioning it you reveal yourself as someone who knows how things work and understands how seats are released. Say to the agent: ‘Have revenue management released any first-class seats for miles upgrades yet?’ When they say no, ask them to check or just be put through to revenue management so you can ask when they will release some, as well as how many seats are left. Politely respond like this: ‘You have 20 seats unsold? Why aren’t you releasing them?’ Often by the end of the conversation they say, ‘OK, we’ll release one for you,’ or they might tell you to call back tomorrow. Doing that, we’ve had a pretty much 100 percent success rate."

The Two Words That Will Help Get an Airline Upgrade Over the Phone
Image: Singapore Airlines

The Poison We Pick

This nation pioneered modern life. Now epic numbers of Americans are killing themselves with opioids to escape it.

How does an opioid make you feel? We tend to avoid this subject in discussing recreational drugs, because no one wants to encourage experimentation, let alone addiction. And it’s easy to believe that weak people take drugs for inexplicable, reckless, or simply immoral reasons. What few are prepared to acknowledge in public is that drugs alter consciousness in specific and distinct ways that seem to make people at least temporarily happy, even if the consequences can be dire. Fewer still are willing to concede that there is a significant difference between these various forms of drug-induced “happiness” — that the draw of crack, say, is vastly different than that of heroin. But unless you understand what users get out of an illicit substance, it’s impossible to understand its appeal, or why an epidemic takes off, or what purpose it is serving in so many people’s lives. And it is significant, it seems to me, that the drugs now conquering America are downers: They are not the means to engage in life more vividly but to seek a respite from its ordeals.

The alkaloids that opioids contain have a large effect on the human brain because they tap into our natural “mu-opioid” receptors. The oxytocin we experience from love or friendship or orgasm is chemically replicated by the molecules derived from the poppy plant. It’s a shortcut — and an instant intensification — of the happiness we might ordinarily experience in a good and fruitful communal life. It ends not just physical pain but psychological, emotional, even existential pain. And it can easily become a lifelong entanglement for anyone it seduces, a love affair in which the passion is more powerful than even the fear of extinction.

Perhaps the best descriptions of the poppy’s appeal come to us from the gifted writers who have embraced and struggled with it. Many of the Romantic luminaries of the early-19th century — including the poets Coleridge, Byron, Shelley, Keats, and Baudelaire, and the novelist Walter Scott — were as infused with opium as the late Beatles were with LSD. And the earliest and in many ways most poignant account of what opium and its derivatives feel like is provided by the classic memoir Confessions of an English Opium-Eater, published in 1821 by the writer Thomas De Quincey.

De Quincey suffered trauma in childhood, losing his sister when he was 6 and his father a year later. Throughout his life, he experienced bouts of acute stomach pain, as well as obvious depression, and at the age of 19 he endured 20 consecutive days of what he called “excruciating rheumatic pains of the head and face.” As his pain drove him mad, he finally went into an apothecary and bought some opium (which was legal at the time, as it was across the West until the war on drugs began a century ago).

An hour after he took it, his physical pain had vanished. But he was no longer even occupied by such mundane concerns. Instead, he was overwhelmed with what he called the “abyss of divine enjoyment” that overcame him: “What an upheaving from its lowest depths, of the inner spirit! … here was the secret of happiness, about which philosophers had disputed for many ages.” The sensation from opium was steadier than alcohol, he reported, and calmer. “I stood at a distance, and aloof from the uproar of life,” he wrote. “Here were the hopes which blossom in the paths of life, reconciled with the peace which is in the grave.” A century later, the French writer Jean Cocteau described the experience in similar ways: “Opium remains unique and the euphoria it induces superior to health. I owe it my perfect hours.”

The metaphors used are often of lightness, of floating: “Rising even as it falls, a feather,” as William Brewer, America’s poet laureate of the opioid crisis, describes it. “And then, within a fog that knows what I’m going to do, before I do — weightlessness.” Unlike cannabis, opium does not make you want to share your experience with others, or make you giggly or hungry or paranoid. It seduces you into solitude and serenity and provokes a profound indifference to food. Unlike cocaine or crack or meth, it doesn’t rev you up or boost your sex drive. It makes you drowsy — somniferum means “sleep-inducing” — and lays waste to the libido. Once the high hits, your head begins to nod and your eyelids close. (...)

One of the more vivid images that Americans have of drug abuse is of a rat in a cage, tapping a cocaine-infused water bottle again and again until the rodent expires. Years later, as recounted in Johann Hari’s epic history of the drug war, Chasing the Scream, a curious scientist replicated the experiment. But this time he added a control group. In one cage sat a rat and a water dispenser serving diluted morphine. In another cage, with another rat and an identical dispenser, he added something else: wheels to run in, colored balls to play with, lots of food to eat, and other rats for the junkie rodent to play or have sex with. Call it rat park. And the rats in rat park consumed just one-fifth of the morphine water of the rat in the cage. One reason for pathological addiction, it turns out, is the environment. If you were trapped in solitary confinement, with only morphine to pass the time, you’d die of your addiction pretty swiftly too. Take away the stimulus of community and all the oxytocin it naturally generates, and an artificial variety of the substance becomes much more compelling.

One way of thinking of postindustrial America is to imagine it as a former rat park, slowly converting into a rat cage. Market capitalism and revolutionary technology in the past couple of decades have transformed our economic and cultural reality, most intensely for those without college degrees. The dignity that many working-class men retained by providing for their families through physical labor has been greatly reduced by automation. Stable family life has collapsed, and the number of children without two parents in the home has risen among the white working and middle classes. The internet has ravaged local retail stores, flattening the uniqueness of many communities. Smartphones have eviscerated those moments of oxytocin-friendly actual human interaction. Meaning — once effortlessly provided by a more unified and often religious culture shared, at least nominally, by others — is harder to find, and the proportion of Americans who identify as “nones,” with no religious affiliation, has risen to record levels. Even as we near peak employment and record-high median household income, a sense of permanent economic insecurity and spiritual emptiness has become widespread. Some of that emptiness was once assuaged by a constantly rising standard of living, generation to generation. But that has now evaporated for most Americans. (...)

It’s been several decades since Daniel Bell wrote The Cultural Contradictions of Capitalism, but his insights have proven prescient. Ever-more-powerful market forces actually undermine the foundations of social stability, wreaking havoc on tradition, religion, and robust civil associations, destroying what conservatives value the most. They create a less human world. They make us less happy. They generate pain.

This was always a worry about the American experiment in capitalist liberal democracy. The pace of change, the ethos of individualism, the relentless dehumanization that capitalism abets, the constant moving and disruption, combined with a relatively small government and the absence of official religion, risked the construction of an overly atomized society, where everyone has to create his or her own meaning, and everyone feels alone. The American project always left an empty center of collective meaning, but for a long time Americans filled it with their own extraordinary work ethic, an unprecedented web of associations and clubs and communal or ethnic ties far surpassing Europe’s, and such a plethora of religious options that almost no one was left without a purpose or some kind of easily available meaning to their lives. Tocqueville marveled at this American exceptionalism as the key to democratic success, but he worried that it might not endure forever.

And it hasn’t. What has happened in the past few decades is an accelerated waning of all these traditional American supports for a meaningful, collective life, and their replacement with various forms of cheap distraction. Addiction — to work, to food, to phones, to TV, to video games, to porn, to news, and to drugs — is all around us. The core habit of bourgeois life — deferred gratification — has lost its grip on the American soul. We seek the instant, easy highs, and it’s hard not to see this as the broader context for the opioid wave. This was not originally a conscious choice for most of those caught up in it: Most were introduced to the poppy’s joys by their own family members and friends, the last link in a chain that included the medical establishment and began with the pharmaceutical companies. It may be best to think of this wave therefore not as a function of miserable people turning to drugs en masse but of people who didn’t realize how miserable they were until they found out what life without misery could be. To return to their previous lives became unthinkable. For so many, it still is. (...)

To see this epidemic as simply a pharmaceutical or chemically addictive problem is to miss something: the despair that currently makes so many want to fly away. Opioids are just one of the ways Americans are trying to cope with an inhuman new world where everything is flat, where communication is virtual, and where those core elements of human happiness — faith, family, community — seem to elude so many. Until we resolve these deeper social, cultural, and psychological problems, until we discover a new meaning or reimagine our old religion or reinvent our way of life, the poppy will flourish.

by Andrew Sullivan, NY Magazine |  Read more:
Image: Joe Darrow
[ed. Finally someone gets it right... it's an epidemic of despair.]

Wednesday, February 21, 2018

Let's Get Ready to Rumble: Trademarking Your Catch Phrase

You and your significant other go to the movies. During the coming attractions, you nearly choke on some Raisinets when you hear something familiar: a series of catchy words spoken by the lead character. That was your catch phrase! You'd even considered printing it up on some t-shirts. Was it stolen? Well, maybe, but then again, maybe not. This article will give you some tips on ensuring that your catch phrase idea stays just that: yours.

A catch phrase is an expression usually popularized through repeated use by a real person or fictional character. Today, catch phrases are increasingly seen as an important component of marketing and promoting a product or service. See if you recognize some of these well-known catch phrases:

"Ancient Chinese secret, huh?" - from a Calgon commercial

"Dy-no-mite!" - Jimmie Walker as J.J. Evans from "Good Times"

"Hasta la vista, baby." - The Terminator

"Show me the money!" - Tom Cruise in "Jerry Maguire"

"Whazzup?" - Budweiser ad campaign

"Where's the Beef?" - Clara Peller in a Wendy's commercial

A catch phrase is essentially a trademark. A trademark is any word, name, slogan, design, or symbol used in commerce to identify a particular product and distinguish it from others. Like copyrights, trademarks are protected as a form of property. Owners of valid trademarks are granted exclusive rights to their use in commerce. The main purpose of trademark protection is to increase the reliability of marketplace identification and thereby help consumers select goods and services. A distinctive trademark quickly identifies a product, and over time the mark may be equated with a particular level of quality.

As with copyrights, legal rights to trademarks arise automatically without governmental formalities. But unlike copyrights, trademark rights don't begin at the moment a word, symbol, or phrase is first scribbled on paper. Rather, trademark rights stem from the actual use of a distinctive mark in commerce.

If you develop a catch phrase, you should register it with the US Patent and Trademark Office (USPTO). You might wonder why this is important, but the benefits of federal registration will hopefully convince you. First, registration serves as evidence that the trademark is legally valid, which protects you in an infringement lawsuit: once a mark is registered, the burden of proof shifts to the defendant to show why the registered mark is undeserving of protection. Second, registration provides nationwide notice of the registrant's claim of ownership, so someone else using the mark in another part of the country can't claim territorial ownership rights. Third, federal registration confers the right to file infringement lawsuits in the federal courts. Finally, registration serves as a deterrent: others will hesitate to use your mark for fear of a legal battle.

Federal registration only goes so far. The owner of the mark still bears the burden of protecting it. The primary method of protecting a catch phrase is to file an infringement lawsuit. The plaintiff may sue for financial damages, an injunction against further use, or both. The basic test in infringement lawsuits is whether the allegedly infringing phrase is similar enough to create a "likelihood of confusion."

In short, here's how to get the maximum mileage out of your catch phrase: Develop a distinctive one, use it in interstate commerce, and register it with the US Patent and Trademark Office.

by Donald R. Simon, Legal Zoom |  Read more:

The Rise of Virtual Citizenship

“If you believe you are a citizen of the world, you are a citizen of nowhere. You don’t understand what citizenship means,” the British prime minister, Theresa May, declared in October 2016. Not long after, at his first postelection rally, Donald Trump asserted, “There is no global anthem. No global currency. No certificate of global citizenship. We pledge allegiance to one flag and that flag is the American flag.” And in Hungary, Prime Minister Viktor Orbán has increased his national-conservative party’s popularity with statements like “all the terrorists are basically migrants” and “the best migrant is the migrant who does not come.”

Citizenship and its varying legal definition has become one of the key battlegrounds of the 21st century, as nations attempt to stake out their power in a G-Zero, globalized world, one increasingly defined by transnational, borderless trade and liquid, virtual finance. In a climate of pervasive nationalism, jingoism, xenophobia, and ever-building resentment toward those who move, it’s tempting to think that doing so would become more difficult. But alongside the rise of populist, identitarian movements across the globe, identity itself is being virtualized, too. It no longer needs to be tied to place or nation to function in the global marketplace.

Hannah Arendt called citizenship “the right to have rights.” Like any other right, it can be bestowed and withheld by those in power, but in its newer forms it can also be bought, traded, and rewritten. Virtual citizenship is a commodity that can be acquired through the purchase of real estate or financial investments, subscribed to via an online service, or assembled by peer-to-peer digital networks. And as these options become available, they’re also used, like so many technologies, to exclude those who don’t fit in.

In a world that increasingly operates online, geography and physical infrastructure still remain crucial to control and management. Undersea fiber-optic cables trace the legacy of imperial trading routes. Google and Facebook erect data centers in Scandinavia and the Pacific Northwest, close to cheap hydroelectric power and natural cooling. The trade in citizenship itself often manifests locally as architecture. From luxury apartments in the Caribbean and the Mediterranean to data centers in Europe and refugee settlements in the Middle East, a scattered geography of buildings brings a different reality into focus: one in which political decisions and national laws transform physical space into virtual territory.

The sparkling seafront of Limassol, the second-largest city in Cyprus, stretches for several miles along the southwestern coast of the island. In recent years it has become particularly popular among Russian tourists and emigrants, who have settled in the area. Almost 20 percent of the population is now Russian-speaking. Along 28 October Avenue, which borders the seafront, new towers have sprung up, as well as a marina and housing complex, filled with international coffee and restaurant chains. The 19-floor Olympic Residence towers are the tallest residential buildings on the island, along with the Oval building, a 16-floor structure shaped like its name. Soon a crop of new skyscrapers will join them, including three 37- to 39-story towers called Trilogy and the 170-meter One building. Each building’s website features text in English, Russian, and in several cases, Chinese. China’s Juwai property portal lists other, cheaper options, from hillside holiday apartments to sprawling villas. Many are illustrated with computer renderings—they haven’t actually been built yet.

The appeal of Limassol isn’t limited to its excellent climate and proximity to the ocean. The real attraction, as many of the advertisements make clear, is citizenship. The properties are proxies for a far more valuable prize: a golden visa.

Visas are nothing new; they allow foreigners to travel and work within a host nation’s borders for varying lengths of time. But the golden visa is a relatively recent innovation. Pioneered in the Caribbean, golden visas trade citizenship for cash by setting a price on passports. If foreign nationals invest in property above a certain price threshold, they can buy their way into a country—and beyond, once they hold a citizenship and passport.

A luxury holiday home on Saint Kitts and Nevis or Grenada in the West Indies might be useful for those looking to take advantage of those islands’ liberal tax regimes. But a passport acquired through Cyprus’s golden-visa scheme makes the bearer a citizen of the European Union, with all the benefits that accrue therewith. Moreover, there’s no requirement to reside in or even to visit Cyprus. The whole business, including acquisition of suitably priced real estate, can be carried out without ever setting foot on the island. The real estate doesn’t even have to exist yet—it can be completely virtual, just a computer rendering on a website. All for just 2 million euros, the minimum spend for the citizenship by investment.

As a result, Cypriot real-estate websites are filled with investment guides and details on how to apply for a new passport. This is the new era of virtual citizenship, where your papers and your identity—and all the rights that flow from them—owe more to legal frameworks and investment vehicles than any particular patch of ground where you might live. (...)

Juwai, the Chinese portal, casts a wider eye than just Cyprus. Its website hosts a side-by-side comparison of various golden-visa schemes, laying out the costs and benefits of each, from the price of the investment to how long buyers must wait for a new passport to come through. Not all the schemes are created equally. Cyprus’s neighbor Greece has one of the cheapest schemes going, with residency available for just 250,000 euros. But that’s only residency—the right to stay in the country—not local, let alone EU, citizenship, which can take years to obtain and might never be granted. Sometimes the schemes have gone awry, too. Some 400,000 foreign investors in Portugal’s 500,000-euro golden-visa scheme have been left in limbo by bureaucratic collapse, waiting years for a passport which was promised within months. Chinese homeowners have been forced to fly in and out of the country every couple of months in order to maintain short-term visas, despite having paid thousands for property. (...)

The world is in the midst of the greatest movement of people since the end of World War II, and the combination of increasing global inequality and climate change will only increase its pace. Two hundred million people are on the move now, and as many as a billion might become migratory by 2050. Citizenship, the only tool we have for guaranteeing rights and responsibilities in a world of nation-states, is subject to increasing pressure to adapt. Today’s virtual citizenship caters mostly to the wealthy or the poor. Could tomorrow provide new opportunities for everyone? And if possible, will the results look more like what’s been done for the global elite or for the most disadvantaged?

by James Bridle, The Atlantic |  Read more:
Image: Sean Gallup / Getty

‘The Twilight Zone,’ from A to Z

The planet has been knocked off its elliptical orbit and overheats as it hurtles toward the sun; the night ceases to exist, oil paintings melt, the sidewalks in New York are hot enough to fry an egg on, and the weather forecast is “more of the same, only hotter.” Despite the unbearable day-to-day reality of constant sweat, the total collapse of order and decency, and, above all, the scarcity of water, Norma can’t shake the feeling that one day she’ll wake up and find that this has all been a dream. And she’s right. Because the world isn’t drifting toward the sun at all, it’s drifting away from it, and the paralytic cold has put Norma into a fever dream.

This is “The Midnight Sun,” my favorite episode of The Twilight Zone, and one that has come to seem grimly familiar. I also wake up adrift, in a desperate and unfamiliar reality, wondering if the last year in America has been a dream—I too expect catastrophe, but it’s impossible to know from which direction it will come, whether I am right to trust my senses or if I’m merely sleepwalking while the actual danger becomes ever more present. One thing I do know is that I’m not alone: since the election of Donald Trump, it’s become commonplace to compare the new normal to living in the Twilight Zone, as Paul Krugman did in a 2017 New York Times op-ed titled “Living in the Trump Zone,” in which he compared the President to the all-powerful child who terrorizes his Ohio hometown in “It’s a Good Life,” policing the adults’ thoughts and arbitrarily striking out at them. But these comparisons do The Twilight Zone a disservice. The show’s articulate underlying philosophy was never that life is topsy-turvy, things are horribly wrong, and misrule will carry the day—it is instead a belief in a cosmic order, of social justice and a benevolent irony that, in the end, will wake you from your slumber and deliver you unto the truth.

The Twilight Zone has dwelt in the public imagination, since its cancellation in 1964, as a synecdoche for the kind of neat-twist ending exemplified by “To Serve Man” (it’s a cookbook), “The After Hours” (surprise, you’re a mannequin), and “The Eye of the Beholder” (everyone has a pig-face but you). It’s probably impossible to feel the original impact of each show-stopping revelation, as the twist ending has long since been institutionalized, clichéd, and abused in everything from the 1995 film The Usual Suspects to Twilight Zone-style anthology series like Black Mirror. Rewatching these episodes with the benefit of Steven Jay Rubin’s new, 429-page book, The Twilight Zone Encyclopedia (a bathroom book if ever I saw one), I realized that the punchlines are actually the least of the reasons for the show’s enduring hold over the imagination. That appeal lies, rather, in its creator Rod Serling’s rejoinders to the prevalent anti-Communist panic that gripped the decade: stories of witch-hunting paranoia tend to end badly for everyone, as in “The Monsters Are Due on Maple Street,” in which the population of a town turns on each other in a panic to ferret out the alien among them, or in “Will the Real Martian Please Stand Up?” which relocates the premise to a diner in which the passengers of a bus are temporarily stranded and subject to interrogation by a pair of state troopers.

The show’s most prevalent themes are probably best distilled as “you are not what you took yourself to be,” “you are not where you thought you were,” and “beneath the façade of mundane American society lurks a cavalcade of monsters, clones, and robots.” Serling had served as a paratrooper in the Philippines in 1945 and returned with PTSD; he and his eventual audience were indeed caught between the familiar past and an unknown future. They stood dazed in a no-longer-recognizable world, flooded with strange new technologies, vastly expansionist corporate or federal jurisdictions, and once-unfathomable ideologies. The culture was shifting from New Deal egalitarianism to the exclusionary persecution and vigilantism of McCarthyism, the “southern strategy” of Goldwater and Nixon, and the Cold War-era emphasis on mandatory civilian conformity, reinforced across the board in schools and the media. In “The Obsolete Man,” a totalitarian court tries a crusty, salt-of-the-earth librarian (played by frequent Twilight Zone star Burgess Meredith, blacklisted since the 1950s, who breaks his glasses in “Time Enough At Last” and plays the titular milquetoast in “Mr. Dingle, the Strong”), who has outlived his bookish medium. But his obsolescence is something every US veteran would have recognized, given the gulf between the country they defended and the one that had so recently taken root and was beginning to resemble, in its insistence on purity and obedience to social norms, the fascist states they had fought against in the war. From Serling’s opening narration:
You walk into this room at your own risk, because it leads to the future, not a future that will be but one that might be. This is not a new world, it is simply an extension of what began in the old one. It has patterned itself after every dictator who has ever planted the ripping imprint of a boot on the pages of history since the beginning of time. It has refinements, technological advances, and a more sophisticated approach to the destruction of human freedom. But like every one of the superstates that preceded it, it has one iron rule: logic is an enemy and truth is a menace. (...)
And then there’s the remarkable case of Charles Beaumont, the most prolific and celebrated of the show’s writers next to Serling. At the time of his death at thirty-eight in 1967, he physically resembled a man of ninety, having abruptly aged into unrecognizable infirmity—due, depending on whom you ask, to a unique combination of Alzheimer’s and Pick’s Disease, or an addiction to Bromo-Seltzer, a shady over-the-counter antacid and hangover cure that was withdrawn from the market in 1975 due to toxicity. Beaumont was credited with twenty-two episodes of The Twilight Zone, including “Living Doll,” “Number 12 Looks Just Like You,” and “The Howling Man,” the last adapted from one of his many short stories (collected by Penguin Classics in 2015, with an afterword by William Shatner). While Serling’s Twilight Zone scripts tended to concentrate on supernatural reversals of social norms or the just deserts of assorted pretenders, reactionaries, and bigots, Beaumont’s trafficked in existential despair, returning to themes of futility and isolation. A man on death row is caught in a cyclical dream where the stay of execution always arrives too late (“Shadow Play”); a lonely man can only function inside the fantasies taking place inside a dollhouse (“Miniature”); a dead man quickly tires of Heaven (“A Nice Place to Visit”); and in “Printer’s Devil,” a beleaguered small-town publisher, despairing of the death of print in 1963, more-or-less-knowingly hires the Devil as his new linotype operator (Burgess Meredith again). Charles Beaumont’s cultural contribution might, in other words, be termed the most salient and pitch-black representations of irony-in-action to have graced the small screen when The Twilight Zone began airing on CBS in 1959.
More than any writer up to that point, Beaumont discovered an intersection between pulp fare and Sophocles, making sociopolitical morality plays out of dime-rack science fiction, a contribution—shared with Serling—without which contemporary pop culture, with its strong tendency to couch social commentary in a metaphysical vernacular borrowed from comics and monster movies, would be impossible to imagine. (...)

Television idea or not, The Twilight Zone was an American idea, and one whose commitment to the ideals of equanimity, brotherhood, and social activism gave rise to satire at its most pointed and Juvenalian, disguised as a supernatural anthology series. Educated at the Ohio liberal arts college Antioch, Serling recalled in his last interview, before dying during heart surgery in 1975 at the age of fifty, that he was motivated by his disgust at postwar bias and prejudice, which he railed against so virulently that he confessed “to creating daydreams about how I could… bump off some of these pricks.” But writing ultimately covers more ground, and Serling confined his daydreams to television and film (he famously co-wrote Planet of the Apes, another buffet of Cold War anxieties served up as an alternate-reality blockbuster).

It’s not quite the case that The Twilight Zone has been consistently influential since its early 1960s heyday. Instead, the anthology of “weird tales” format comes into vogue every fifteen years or so, with The Twilight Zone providing the obvious benchmark. Serling contributed to and hosted one of the first of these, Night Gallery, but rightly recognized that the new breed of shows abandoned the real spirit of The Twilight Zone in favor of cheap scares and special effects. Of its contemporary heirs, Black Mirror most resembles The Twilight Zone’s perception of technology as a tool for flattening the individual beneath the corrupting gullibility of the masses. But, with the exception of the very best episodes—such as the immediately canonized “San Junipero” and fourth season premiere “USS Callister,” both of which introduce simulated realities where disenfranchised members of society can dwell indefinitely—these current episodes are merely chilling visions of what already is, in which technology is the villain, not people.

by J.W. McCormack, NY Review of Books | Read more:
Image: via

Tuesday, February 20, 2018

The Singular Pursuit of Comrade Bezos

It was explicitly and deliberately a ratchet, designed to effect a one-way passage from scarcity to plenty by way of stepping up output each year, every year, year after year. Nothing else mattered: not profit, not the rate of industrial accidents, not the effect of the factories on the land or the air. The planned economy measured its success in terms of the amount of physical things it produced.
— Francis Spufford, Red Plenty

But isn’t a business’s goal to turn a profit? Not at Amazon, at least in the traditional sense. Jeff Bezos knows that operating cash flow gives the company the money it needs to invest in all the things that keep it ahead of its competitors, and recover from flops like the Fire Phone. Up and to the right.
— Recode, “Amazon’s Epic 20-Year Run as a Public Company, Explained in Five Charts”

From a financial point of view, Amazon doesn’t behave much like a successful 21st-century company. Amazon has not bought back its own stock since 2012. Amazon has never offered its shareholders a dividend. Unlike its peers Google, Apple, and Facebook, Amazon does not hoard cash. It has only recently started to record small, predictable profits. Instead, whenever it has resources, Amazon invests in capacity, which results in growth at a ridiculous clip. When the company found itself with $13.8 billion lying around, it bought a grocery chain for $13.7 billion. As the Recode story referenced above summarizes in one of the graphs: “It took Amazon 18 years as a public company to catch Walmart in market cap, but only two more years to double it.” More than a profit-seeking corporation, Amazon is behaving like a planned economy.

If there is one story Americans who grew up after the fall of the Berlin Wall know about planned economies, I’d wager it’s the one about Boris Yeltsin in a Texas supermarket.

In 1989, recently elected to the Supreme Soviet, Yeltsin came to America, in part to see Johnson Space Center in Houston. On an unscheduled jaunt, the Soviet delegation visited a local supermarket. Photos from the Houston Chronicle capture the day: Yeltsin, overcome by a display of Jell-O Pudding Pops; Yeltsin inspecting the onions; Yeltsin staring down a full display of shiny produce like a line of enemy soldiers. Planning could never master the countless variables that capitalism calculated using the tireless machine of self-interest. According to the story, the overflowing shelves filled Yeltsin with despair for the Soviet system, turned him into an economic reformer, and spelled the end for state socialism as a global force. We’re taught this lesson in public schools, along with Animal Farm: Planned economies do not work.

It’s almost 30 years later, but if Comrade Yeltsin had visited today’s most-advanced American grocery stores, he might not have felt so bad. Journalist Hayley Peterson summarized her findings in the title of her investigative piece, “‘Seeing Someone Cry at Work Is Becoming Normal’: Employees Say Whole Foods Is Using ‘Scorecards’ to Punish Them.” The scorecard in question measures compliance with the (Amazon subsidiary) Whole Foods OTS, or “on-the-shelf” inventory management. OTS is exhaustive, replacing a previously decentralized system with inch-by-inch centralized standards. Those standards include delivering food from trucks straight to the shelves, skipping the expense of stockrooms. This has resulted in produce displays that couldn’t bring down North Korea. Has Bezos stumbled into the problems with planning?

Although OTS was in play before Amazon purchased Whole Foods last August, stories about enforcement to tears fit with the Bezos ethos and reputation. Amazon is famous for pursuing growth and large-scale efficiencies, even when workers find the experiments torturous and when they don’t make a lot of sense to customers, either. If you receive a tiny item in a giant Amazon box, don’t worry. Your order is just one small piece in an efficiency jigsaw that’s too big and fast for any individual human to comprehend. If we view Amazon as a planned economy rather than just another market player, it all starts to make more sense: We’ll thank Jeff later, when the plan works. And indeed, with our dollars, we have.

In fact, to think of Amazon as a “market player” is a mischaracterization. The world’s biggest store doesn’t use suggested retail pricing; it sets its own. Book authors (to use a personal example) receive a distinctly lower royalty for Amazon sales because the site has the power to demand lower prices from publishers, who in turn pass on the tighter margins to writers. But for consumers, it works! Not only are books significantly cheaper on Amazon, the site also features a giant stock that can be shipped to you within two days, for free with Amazon Prime citizensh…er, membership. All 10 or so bookstores I frequented as a high school and college student have closed, yet our access to books has improved — at least as far as we seem to be able to measure. It’s hard to expect consumers to feel bad enough about that to change our behavior.

Although they attempt to grow in a single direction, planned economies always destroy as well as build. In the 1930s, the Soviet Union compelled the collectivization of kulaks, or prosperous peasants. Small farms were incorporated into a larger collective agricultural system. Depending on who you ask, dekulakization was literal genocide, comparable to the Holocaust, and/or it catapulted what had been a continent-sized expanse of peasants into a modern superpower. Amazon’s decimation of small businesses (bookstores in particular) is a similar sort of collectivization, purging small proprietors or driving them onto Amazon platforms. The process is decentralized and executed by the market rather than the state, but don’t get confused: Whether or not Bezos is banging on his desk, demanding the extermination of independent booksellers — though he probably is — these are top-down decisions to eliminate particular ways of life. (...)

Amazon has succeeded in large part because of the company’s uncommon drive to invest in growth. And today, not only are other companies slow to spend, so are governments. Austerity politics and decades of privatization put Amazon in a place to take over state functions. If localities can’t or won’t invest in jobs, then Bezos can get them to forgo tax dollars (and dignity) to host HQ2. There’s no reason governments couldn’t offer on-demand cloud computing services as a public utility, but instead the feds pay Amazon Web Services to host their sites. And if the government outsources health care for its population to insurers who insist on making profits, well, stay tuned. There’s no near-term natural end to Amazon’s growth, and by next year the company’s annual revenue should surpass the GDP of Vietnam. I don’t see any reason why Amazon won’t start building its own cities in the near future.

America never had to find out whether capitalism could compete with the Soviets plus 21st-century technology. Regardless, the idea that market competition can better set prices than algorithms and planning is now passé. Our economists used to scoff at the Soviets’ market-distorting subsidies; now Uber subsidizes every ride. Compared to the capitalists who are making their money by stripping the copper wiring from the American economy, the Bezos plan is efficient. So, with the exception of small business owners and managers, why wouldn’t we want to turn an increasing amount of our life-world over to Amazon? I have little doubt the company could, from a consumer perspective, improve upon the current public-private mess that is Obamacare, for example. Between the patchwork quilt of public- and private-sector scammers that run America today and “up and to the right,” life in the Amazon with Lex Luthor doesn’t look so bad. At least he has a plan, unlike some people.

From the perspective of the average consumer, it’s hard to beat Amazon. The single-minded focus on efficiency and growth has worked, and delivery convenience is perhaps the one area of American life that has kept up with our past expectations for the future. However, we do not make the passage from cradle to grave as mere average consumers. Take a look at package delivery, for example: Amazon’s latest disruptive announcement is “Shipping with Amazon,” a challenge to the USPS, from which Amazon has been conniving preferential rates. As a government agency bound to serve everyone, the Postal Service has had to accept all sorts of inefficiencies, like free delivery for rural customers or subsidized media distribution to realize freedom of the press. Amazon, on the other hand, is a private company that doesn’t really have to do anything it doesn’t want to do. In aggregate, as average consumers, we should be cheering. Maybe we are. But as members of a national community, I hope we stop to ask if efficiency is all we want from our delivery infrastructure. Lowering costs as far as possible sounds good until you remember that one of those costs is labor. One of those costs is us.

Earlier this month, Amazon was awarded two patents for a wristband system that would track the movement of warehouse employees’ hands in real time. It’s easy to see how this is a gain in efficiency: If the company can optimize employee movements, everything can be done faster and cheaper. It’s also easy to see how, for those workers, this is a significant step down the path into a dystopian hellworld. Amazon is a notoriously brutal, draining place to work, even at the executive levels. The fear used to be that if Amazon could elbow out all its competitors with low prices, it would then jack them up, Martin Shkreli style. That’s not what happened. Instead, Amazon and other monopsonists have used their power to drive wages and the labor share of production down. If you follow the Bezos strategy all the way, it doesn’t end in fully automated luxury communism or even Wall-E. It ends in The Matrix, with workers swaddled in a pod of perfect convenience and perfect exploitation. Central planning in its capitalist form turns people into another cost to be reduced as low as possible.

Just because a plan is efficient doesn’t mean it’s good. Postal Service employees are unionized; they have higher wages, paths for advancement, job stability, negotiated grievance procedures, health benefits, vacation time, etc. Amazon delivery drivers are not and do not. That difference counts as efficiency when we measure by price, and that is, to my mind, a very good argument for not handing the world over to the king of efficiency.

by Malcolm Harris, Medium |  Read more:

Salon to Ad Blockers: Can We Use Your Browser to Mine Cryptocurrency?

Salon has a new, cryptocurrency-driven strategy for making money when readers block ads. If you want to read Salon without seeing ads, you can do so—as long as you let the website use your spare computing power to mine some coins.

If you visit Salon with an ad blocker enabled, you might see a pop-up that asks you to disable the ad blocker or "Block ads by allowing Salon to use your unused computing power."

Salon explains what's going on in a new FAQ. "How does Salon make money by using my processing power?" the FAQ says. "We intend to use a small percentage of your spare processing power to contribute to the advancement of technological discovery, evolution, and innovation. For our beta program, we'll start by applying your processing power to help support the evolution and growth of blockchain technology and cryptocurrencies."

While that's a bit vague, a second pop-up says that Salon is using Coinhive for "calculations [that] are securely executed in your browser's sandbox." The Coinhive pop-up on Salon provides the option to cancel or allow the mining to occur for one browser session. Clicking "more info" brings you to a Coinhive page.

We wrote about Coinhive in October 2017. Coinhive "harnesses the CPUs of millions of PCs to mine the Monero crypto currency. In turn, Coinhive gives participating sites a tiny cut of the relatively small proceeds."
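The economics of that "tiny cut" are simple to sketch. Assuming the roughly 30 percent commission Coinhive was reported to take at the time (a figure from contemporaneous coverage, not stated in this article), a site's payout on mined Monero works out as:

```python
# Back-of-the-envelope split of in-browser mining proceeds.
# The 30% Coinhive commission is an assumption drawn from
# contemporaneous reporting on the service, not an exact rate.
COINHIVE_CUT = 0.30

def site_share(mined_xmr: float, cut: float = COINHIVE_CUT) -> float:
    """Return the portion of mined Monero paid out to the hosting site."""
    return mined_xmr * (1 - cut)

# Browser hash rates are tiny, so even an optimistic total of
# 0.001 XMR mined across many visitors pays the site very little.
print(round(site_share(0.001), 6))  # 0.0007
```

The illustrative 0.001 XMR figure is hypothetical; the point is that the per-visitor proceeds are small, which is why long session times (YouTube, Showtime, Salon) matter so much to this model.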

It really does use a lot of CPU power

I enabled the mining on Salon today in order to see how much computing power it used. In Chrome's task manager, I got CPU readings of 426.7 and higher for a Salon tab.

The Chrome helper's CPU use shot up to 499 on my 2016 MacBook Pro, a highly unusual total on my computer even for the Chrome browser. That's out of a total of 800%, which accounts for four cores that each run two threads.

The bottom of my laptop started heating up a little, but the computer still worked normally otherwise. With that high Chrome usage, the Mac Activity Monitor said about 24 percent of my CPU power was still idle. After I disabled Salon's cryptocurrency mining, my idle CPU power went back up to a more typical 70 to 80 percent.
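Those task-manager numbers are easy to misread, because Chrome reports CPU as a percentage of a single logical core. A quick sketch, using the 4-core, 8-thread machine described above, maps the 499 reading onto overall utilization:

```python
# Chrome's task manager measures CPU relative to one logical core,
# so a 4-core machine with two threads per core tops out at 800 (%).
cores, threads_per_core = 4, 2
capacity = cores * threads_per_core * 100  # 800

chrome_helper = 499  # reading observed while Salon's miner ran
share = chrome_helper / capacity
print(f"Chrome helper alone: {share:.0%} of total CPU")  # 62%

# Activity Monitor's ~24% idle figure is consistent with that:
# mining plus normal background load leaves about a quarter free.
idle = 24
print(f"Overall: {100 - idle}% busy")
```

So a single mining tab was consuming well over half of the machine's total CPU capacity on its own, which squares with the laptop heating up and the battery concern mentioned below.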

The computer I used for this experiment has a quad-core, Intel Core i7 Skylake processor. People with different computers will obviously get different results. While Salon's mining might not lock your computer up, I still wouldn't want it running in the background, especially if I were away from a power outlet.

Salon: No risk to security

On Salon, readers aren't forced into cryptocurrency mining because of the site's opt-in system. But in other cases, users have been unaware that Coinhive was being used on their systems. Researchers "from security firm Sucuri warned that at least 500 websites running the WordPress content management system alone had been hacked to run the Coinhive mining scripts," we wrote in the October 2017 article.

Cryptojacking continues to be a problem, as we've detailed in several additional articles, including one yesterday.

Users being caught unaware shouldn't happen at Salon, which makes it clear that readers don't have to opt in to the mining and says that users' security isn't compromised.

"This happens only when you are browsing," the site's FAQ says. "Nothing is ever installed on your computer, and Salon never has access to your personal information or files."

Salon notes that ads allow the site to make money from readers without requiring them to pay for subscriptions.

by Jon Brodkin, Ars Technica |  Read more:
Images: Jon Brodkin