Monday, January 8, 2018

Who Cares About Inequality?

Lloyd Blankfein is worried about inequality. The CEO of Goldman Sachs—that American Almighty, who swindled the economy and walked off scot-free—sees new “divisions” in the country. “Too much,” Blankfein lamented in 2014, “has gone to too few people.”

Charles Koch is worried, too. Another great American plutocrat—shepherd of an empire that rakes in $115 billion and spits out $200 million in campaign contributions each year—decried in 2015 the “welfare for the rich” and the formation of a “permanent underclass.” “We’re headed for a two-tiered society,” Koch warned.

Their observations join a chorus of anti-inequality advocacy among the global elite. The World Bank called inequality a “powerful threat to global progress.” The International Monetary Fund claimed it was “not a recipe for stability and sustainability”—threat-level red for the IMF. And the World Economic Forum, gathered at Davos last year, described inequality as the single greatest global threat.

It is a stunning consensus. In Zuccotti Park, the cry of the 99% was an indictment. To acknowledge the existence of the super-rich was to incite class warfare. Not so today. Ted Cruz, whom the Kochs have described as a ‘hero’, railed against an economy where wealthy Americans “have gotten fat and happy.” He did so on Fox News.

What the hell is happening here? Why do so many rich people care so much about inequality? And why now?

The timing of the elite embrace of the anti-inequality agenda presents a puzzle precisely because it is so long overdue.

For decades, political economists have struggled to understand why inequality has remained uncontested all this time. Their workhorse game theoretic model, developed in the early 1980s by Allan Meltzer and Scott Richard, predicts that democracies respond to an increase in inequality with an increase in top-rate taxation—a rational response of the so-called ‘median voter.’
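
For readers who want the mechanics, here is a stylized, textbook-style rendering of that median-voter logic. It is a deliberate simplification added for illustration, with a quadratic deadweight-loss term standing in for Meltzer and Richard's full labor-supply setup. A voter with median income $y_m$ chooses a flat tax rate $\tau$ that funds a lump-sum transfer out of mean income $\bar{y}$:

$$ \max_{\tau}\;(1-\tau)\,y_m + \tau\Big(1-\tfrac{\tau}{2}\Big)\bar{y} \quad\Longrightarrow\quad \tau^{*} = \frac{\bar{y}-y_m}{\bar{y}} $$

The preferred rate $\tau^{*}$ rises as mean income pulls away from median income, that is, as inequality grows. This is exactly the prediction that, as the next paragraph notes, fails to hold in practice.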

And yet, the relationship simply does not hold in the real world. On the contrary, in the United States, we find its literal inverse: amid record high inequality, one of the largest tax cuts in history. This inverted relationship is known as the Robin Hood Paradox.

One explanation of this paradox is the invisibility of the super-rich. On the one hand, they hide in their enclaves: the hills, the Hamptons, Dubai, the Bahamas. In the olden days, the poor were forced to bear witness to royal riches, standing roadside as the chariot moved through town. Today, they live behind high walls in gated communities and private islands. Their wealth is obscured from view, stashed offshore and away from the tax collector. This is wealth as exclusion.

On the other, they hide among us. As Rachel Sherman has recently argued, conspicuous consumption is out of fashion, displaced by an encroaching “moral stigma of privilege” that won’t let the wealthy just live. Not long ago, the rich felt comfortable riding down broad boulevards in stretch limousines and fur coats. Today, they remove price tags from their groceries and complain about making ends meet. This is wealth as assimilation.

The result is a general misconception about the scale of inequality in America. According to one recent study, Americans tend to think that the ratio of CEO compensation to average income is 30-to-1. The actual figure is closer to 350-to-1.

Yet this is only a partial explanation of the Robin Hood Paradox. It is an appealing theory, but I find it doubtful that any public revelation of elite lifestyles would drive these elites to call for reform. It would seem a difficult case to make after the country elected to its highest office a man who lives in a golden penthouse of a skyscraper bearing his own name in the middle of the most expensive part of America’s most expensive city.

“I love all people,” President Trump promised at a rally last June. “But for these posts”—the posts in his cabinet—“I just don’t want a poor person.” The crowd cheered loudly. The state of play of the American pitchfork is determined in large part by this very worldview—and the three myths about the rich and poor that sustain it.

The first is the myth of the undeserving poor. American attitudes to inequality are deeply informed by our conception of the poor as lazy. In Why Americans Hate Welfare, Martin Gilens examines the contrast between Americans’ broad support for social spending and narrow support for actually existing welfare programs. The explanation, Gilens argues, is that Americans view the poor as scroungers—a view forged by racial representations of welfare recipients in our media.

In contrast—and this is the second myth—Americans believe in the possibility of their own upward mobility. Even if they are not rich today, they will be rich tomorrow. And even if they are not rich tomorrow, their children will be rich the next day. In a recent survey experiment, respondents overestimated social mobility in the United States by over 20%. It turns out that the overestimation is quite easy to provoke: researchers simply had to remind the participants of their own ‘talents’ in order to boost their perceptions of class mobility. Such a carrot of wealth accumulation has been shown to exert a downward pressure on Americans’ preferences for top-rate taxation.

But the third myth, and perhaps most important, concerns the wealthy. For many years, this was called trickle-down economics. Inequality was unthreatening because of our faith that the wealth at the top would—some way or another—reach the bottom. The economic science was questionable, but cultural memories lingered around a model of paternalistic capitalism that suggested its truth. The old titans of industry laid railroads, made cars, extracted oil. Company towns sprouted across the country, where good capitalists took care of good workers.

But the myth of trickling wealth has become difficult to sustain. Over the last half-century, productivity has soared, but average wages among American workers have grown by just 0.2% each year, while incomes at the very top grew 138%. Only half of Republicans still believe that trimming taxes for the rich leads to greater wealth for the general population. Only 13% of Democrats do.

Declining faith in trickle-down economics, however, does not necessarily imply declining reverence for the wealthy. 43% of Americans today still believe that the rich are more intelligent than the average American, compared to just 8% who believe they are less. 42% of Americans still believe that the rich are more hardworking than the average, compared to just 24% who believe they are less.

It would seem, therefore, that the trickle-down myth has been displaced by another, perhaps more obstinate myth of the 1% innovator.

The 1% innovator is a visionary: with his billions, he dreams up new and exciting ideas for the twenty-first century. Steve Jobs was one; Elon Musk is another. Their money is not idle—it is fodder for that imagination. As the public sector commitment to futurist innovation has waned—as NASA, for example, has shrunk and shriveled—his role has become even more important. Who else will take us to Mars?

The reality, of course, is that our capitalists are anything but innovative. They’re not even paternal. In fact, they are not really capitalists at all. They are mostly rentiers: rather than generate wealth, they simply extract it from the economy. Consider the rapid rise in real estate investment among the super-rich. Since the financial crash, a toxic mix of historically low interest rates and sluggish growth has encouraged international investors to turn toward the property market, which promises to deliver steady if moderate returns. Among the Forbes 400 “self-made” billionaires, real estate ranks third. Investments and technology—two other rentier industries—rank first and second, respectively.

But the myth of the 1% innovator is fundamental to the politics of inequality, because it suspends public demands for wealth taxation. If the innovators are hard at work, and they need all that capital to design and bring to life the consumer goodies that we enjoy, then we should hold off on serious tax reform and hear them out. Or worse: we should cheer on their wealth accumulation, waiting for the next, more expensive rabbit to be pulled from the hat. The revolt from below can be postponed until tomorrow or the next day.

Altogether, the enduring strength of these myths only serves to deepen the puzzle of elite anti-inequality advocacy. Why the sudden change of heart? Why not keep promoting the myths and playing down the scale of the “two-tiered society” that Charles Koch today decries?

The unfortunate answer, I believe, is that inequality has simply become bad economics.

by David Adler, Current Affairs | Read more:
Image: uncredited

Sunday, January 7, 2018


Rafael Araujo
via:

Dude, You Broke the Future!

Abstract: We're living in yesterday's future, and it's nothing like the speculations of our authors and film/TV producers. As a working science fiction novelist, I take a professional interest in how we get predictions about the future wrong, and why, so that I can avoid repeating the same mistakes. Science fiction is written by people embedded within a society with expectations and political assumptions that bias us towards looking at the shiny surface of new technologies rather than asking how human beings will use them, and to taking narratives of progress at face value rather than asking what hidden agenda they serve.

In this talk, author Charles Stross will give a rambling, discursive, and angry tour of what went wrong with the 21st century, why we didn't see it coming, where we can expect it to go next, and a few suggestions for what to do about it if we don't like it.


Good morning. I'm Charlie Stross, and it's my job to tell lies for money. Or rather, I write science fiction, much of it about our near future, which has in recent years become ridiculously hard to predict.

Our species, Homo Sapiens Sapiens, is roughly three hundred thousand years old. (Recent discoveries pushed back the date of our earliest remains that far; we may be even older.) For all but the last three centuries of that span, predicting the future was easy: natural disasters aside, everyday life in fifty years' time would resemble everyday life fifty years ago.

Let that sink in for a moment: for 99.9% of human existence, the future was static. Then something happened, and the future began to change, increasingly rapidly, until we get to the present day when things are moving so fast that it's barely possible to anticipate trends from month to month.

As an eminent computer scientist once remarked, computer science is no more about computers than astronomy is about building telescopes. The same can be said of my field of work, written science fiction. Scifi is seldom about science—and even more rarely about predicting the future. But sometimes we dabble in futurism, and lately it's gotten very difficult.

How to predict the near future

When I write a near-future work of fiction, one set, say, a decade hence, there used to be a recipe that worked eerily well. Simply put, 90% of the next decade's stuff is already here today. Buildings are designed to last many years. Automobiles have a design life of about a decade, so half the cars on the road will probably still be around in 2027. People ... there will be new faces, aged ten and under, and some older people will have died, but most adults will still be around, albeit older and grayer. This is the 90% of the near future that's already here.

After the already-here 90%, another 9% of the future a decade hence used to be easily predictable. You look at trends dictated by physical limits, such as Moore's Law, and you look at Intel's road map, and you use a bit of creative extrapolation, and you won't go too far wrong. If I predict that in 2027 LTE cellular phones will be everywhere, 5G will be available for high bandwidth applications, and fallback to satellite data service will be available at a price, you won't laugh at me. It's not like I'm predicting that airliners will fly slower and Nazis will take over the United States, is it?

And therein lies the problem: it's the 1% of unknown unknowns that throws off all calculations. As it happens, airliners today are slower than they were in the 1970s, and don't get me started about Nazis. Nobody in 2007 was expecting a Nazi revival in 2017, right? (Only this time round Germans get to be the good guys.)

My recipe for fiction set ten years in the future used to be 90% already-here, 9% not-here-yet but predictable, and 1% who-ordered-that. But unfortunately the ratios have changed. I think we're now down to maybe 80% already-here—climate change takes a huge toll on infrastructure—then 15% not-here-yet but predictable, and a whopping 5% of utterly unpredictable deep craziness.

Ruling out the singularity

Some of you might assume that, as the author of books like "Singularity Sky" and "Accelerando", I attribute this to an impending technological singularity, to our development of self-improving artificial intelligence and mind uploading and the whole wish-list of transhumanist aspirations promoted by the likes of Ray Kurzweil. Unfortunately this isn't the case. I think transhumanism is a warmed-over Christian heresy. While its adherents tend to be vehement atheists, they can't quite escape from the history that gave rise to our current western civilization. Many of you are familiar with design patterns, an approach to software engineering that focusses on abstraction and simplification in order to promote reusable code. When you look at the AI singularity as a narrative, and identify the numerous places in the story where the phrase "... and then a miracle happens" occurs, it becomes apparent pretty quickly that they've reinvented Christianity.

Indeed, the wellsprings of today's transhumanists draw on a long, rich history of Russian Cosmist philosophy exemplified by the Russian Orthodox theologian Nikolai Fyodorovich Fyodorov, by way of his disciple Konstantin Tsiolkovsky, whose derivation of the rocket equation makes him essentially the father of modern spaceflight. And once you start probing the nether regions of transhumanist thought and run into concepts like Roko's Basilisk—by the way, any of you who didn't know about the Basilisk before are now doomed to an eternity in AI hell—you realize they've mangled it to match some of the nastiest ideas in Presbyterian Protestantism.

If it walks like a duck and quacks like a duck, it's probably a duck. And if it looks like a religion it's probably a religion. I don't see much evidence for human-like, self-directed artificial intelligences coming along any time now, and a fair bit of evidence that nobody except some freaks in university cognitive science departments even wants it. What we're getting, instead, is self-optimizing tools that defy human comprehension but are not, in fact, any more like our kind of intelligence than a Boeing 737 is like a seagull. So I'm going to wash my hands of the singularity as an explanatory model without further ado—I'm one of those vehement atheists too—and try and come up with a better model for what's happening to us.

Towards a better model for the future

As my fellow SF author Ken MacLeod likes to say, the secret weapon of science fiction is history. History, loosely speaking, is the written record of what and how people did things in past times—times that have slipped out of our personal memories. We science fiction writers tend to treat history as a giant toy chest to raid whenever we feel like telling a story. With a little bit of history it's really easy to whip up an entertaining yarn about a galactic empire that mirrors the development and decline of the Hapsburg Empire, or to re-spin the October Revolution as a tale of how Mars got its independence.

But history is useful for so much more than that.

It turns out that our personal memories don't span very much time at all. I'm 53, and I barely remember the 1960s. I only remember the 1970s with the eyes of a 6-16 year old. My father, who died last year aged 93, just about remembered the 1930s. Only those of my father's generation are able to directly remember the Great Depression and compare it to the 2007/08 global financial crisis. But westerners tend to pay little attention to cautionary tales told by ninety-somethings. We modern, change-obsessed humans tend to repeat our biggest social mistakes when they slip out of living memory, which means they recur on a time scale of seventy to a hundred years.

So if our personal memories are useless, it's time for us to look for a better cognitive toolkit.

History gives us the perspective to see what went wrong in the past, and to look for patterns, and check whether those patterns apply to the present and near future. And looking in particular at the history of the past 200-400 years—the age of increasingly rapid change—one glaringly obvious deviation from the norm of the preceding three thousand centuries is the development of Artificial Intelligence, which happened no earlier than 1553 and no later than 1844.

I'm talking about the very old, very slow AIs we call corporations, of course. What lessons from the history of the company can we draw that tell us about the likely behaviour of the type of artificial intelligence we are all interested in today?

Old, slow AI

Let me crib from Wikipedia for a moment:

In the late 18th century, Stewart Kyd, the author of the first treatise on corporate law in English, defined a corporation as:
a collection of many individuals united into one body, under a special denomination, having perpetual succession under an artificial form, and vested, by policy of the law, with the capacity of acting, in several respects, as an individual, particularly of taking and granting property, of contracting obligations, and of suing and being sued, of enjoying privileges and immunities in common, and of exercising a variety of political rights, more or less extensive, according to the design of its institution, or the powers conferred upon it, either at the time of its creation, or at any subsequent period of its existence.
—A Treatise on the Law of Corporations, Stewart Kyd (1793-1794)

In 1844, the British government passed the Joint Stock Companies Act, which created a register of companies and allowed any legal person, for a fee, to register a company, which existed as a separate legal person. Subsequently, the law was extended to limit the liability of individual shareholders in event of business failure, and both Germany and the United States added their own unique extensions to what we see today as the doctrine of corporate personhood.

(Of course, there were plenty of other things happening between the sixteenth and twenty-first centuries that changed the shape of the world we live in. I've skipped changes in agricultural productivity due to energy economics, which finally broke the Malthusian trap our predecessors lived in. This in turn broke the long term cap on economic growth of around 0.1% per year in the absence of famine, plagues, and wars depopulating territories and making way for colonial invaders. I've skipped the germ theory of diseases, and the development of trade empires in the age of sail and gunpowder that were made possible by advances in accurate time-measurement. I've skipped the rise and—hopefully—decline of the pernicious theory of scientific racism that underpinned western colonialism and the slave trade. I've skipped the rise of feminism, the ideological position that women are human beings rather than property, and the decline of patriarchy. I've skipped the whole of the Enlightenment and the age of revolutions! But this is a technocentric congress, so I want to frame this talk in terms of AI, which we all like to think we understand.)

Here's the thing about corporations: they're clearly artificial, but legally they're people. They have goals, and operate in pursuit of these goals. And they have a natural life cycle. In the 1950s, a typical US corporation on the S&P 500 index had a lifespan of 60 years, but today it's down to less than 20 years.

Corporations are cannibals; they consume one another. They are also hive superorganisms, like bees or ants. For their first century and a half they relied entirely on human employees for their internal operation, although they are automating their business processes increasingly rapidly this century. Each human is only retained so long as they can perform their assigned tasks, and can be replaced with another human, much as the cells in our own bodies are functionally interchangeable (and a group of cells can, in extremis, often be replaced by a prosthesis). To some extent corporations can be trained to service the personal desires of their chief executives, but even CEOs can be dispensed with if their activities damage the corporation, as Harvey Weinstein found out a couple of months ago.

Finally, our legal environment today has been tailored for the convenience of corporate persons, rather than human persons, to the point where our governments now mimic corporations in many of their internal structures.

by Charlie Stross, Charlie's Diary |  Read more:
Image: via 

Roy Lichtenstein
via:

Kokee Lodge
photo: markk

The US Democratic Party After The Election Of Donald Trump

In your view, what is the historic position of the Democrats in the US political system and where do they currently stand?

The Democrats have undergone an evolution over their course. It’s the oldest political party in the United States and, just to sum up the late 20th century very briefly, it was the party of the New Deal, of the New Frontier, John F Kennedy, the Great Society of Lyndon Johnson. Over the most recent 30-year period, it has become somewhat different from that: a party of third-way centrism with what I think we identify in Europe as a moderately neo-liberal agenda but, in the United States, strongly associated with the financial sector.

Now it’s facing a crisis of that particular policy orientation, which is largely discredited and does not have a broad popular base. This is the meaning of the Sanders campaign, and the strong appeal of that campaign in 2016 to younger voters suggests that the future of the Democratic Party, so far as its popular appeal is concerned, lies in a different direction, one that really encompasses substantially more dramatic proposals for change and reform and renovation.

In coming to the structure of a SWOT analysis, where would you identify the strengths and weaknesses of the Democrats today?

The strengths are evident in the fact that the party retains a strong position on the two coasts and the weaknesses are evident in the fact that it doesn’t have a strong position practically anywhere else. The polarisation works very much to the disadvantage of the Democratic Party because the US constitutional system gives extra weight to small states, to rural areas, and the control of those states also means that the Republican Party has gained control of the House of Representatives.

The Democratic Party has failed to maintain a national base of political organisation and has become a party that is largely responsive to a reasonably affluent, socially progressive, professional class and that is not a winning constituency in US national elections. That’s not to say that they might not win some given the alternative at any given time but the position is by no means strong structurally or organisationally.

When it comes to the opportunities and threats that the party is facing, a threat is obviously what happened in the last election with the rise of Donald Trump. How would you frame this in the context of the Democratic Party? Going forward, where do you think there are opportunities?


Up until this most recent election, the Democrats had won the presidential contest in a series of Midwestern and upper Midwestern states on a consistent basis since the 1980s. If one looked at Michigan and Wisconsin and Pennsylvania, Ohio a little less so but Minnesota, certainly, this was known as the Blue Wall. It was a set of states the Democrats felt they had a structurally sound position in.

It was clear, particularly since the global crisis in 2007-2009 and the recession that followed, that that position had eroded because it was rooted in manufacturing jobs and organised labour, and those jobs were disappearing after the crisis at an accelerated rate, a process that was concentrated in those states. Trump saw this and took advantage of it.

The Clinton campaign, which was deeply rooted in the bi-coastal elites that dominated the Democratic Party, failed to see it adequately, failed to take steps that might counter it, failed to appeal to those constituencies and, in fact, treated them with a certain amount of distance if not disdain. It was something that could easily be interpreted as disdain in the way in which they scheduled their campaign.

She never went to Wisconsin, for example, and in certain comments that she made and the way in which she identified the core constituencies of her campaign, she really did not reach out to these communities. Trump, as he said himself, saw the anger and took advantage of it and that was the story of the election.

Hillary Clinton did win the popular vote by a very substantial margin, mainly because she had an overwhelming advantage in the state of California, but that was 4 million extra votes that made no difference to the outcome whereas, in these upper Midwestern states, a few tens of thousands of votes were decisive and it was Trump who was able to walk away with the electoral votes of those states.

Obviously, the threat or the challenge of populism, especially right-wing populism, is not unique to the United States. If you broaden the discussion a little bit, what would you recommend? How should progressive parties in the US and beyond react to the challenge that right-wing populism poses?

I dislike the term populism as a general purpose pejorative in politics because it tends to be used by members of the professional classes to describe political appeals to, let’s say, working class constituencies. Populism in the United States in the late 19th century was a farmer-labour movement. It was a movement of debtors against creditors and of easy money and silver advocates against gold advocates and that was the essence of it.

I find a lot to identify with in that tradition and so I’m not inclined to say dismissively that one should be opposed to populism. The Democratic Party’s problem is that it had a core in the New Deal liberal period that was rooted in the organised labour movement – the working class and trade unions. That has been structurally weakened by the deindustrialisation of large parts of the American economy and the party has failed to maintain a popular base.

It could have developed and maintained that base but, in many ways, chose not to do so. Why not? Because if one really invests power in a working class constituency, you have to give serious consideration to what people in that constituency want. It’s obvious that that would be in contradiction with the Democratic Party’s commitment in the ‘90s and noughties to free trade agreements, to use the most flagrant example.

It would require a much more, let’s say, real-world employment policy. It would require a responsiveness to the housing and foreclosure crisis after the recession that simply was not there. What happened in the period following the great financial crisis was particularly infuriating because everybody could see that the class of big bankers was bailed out and protected whereas ordinary homeowners, particularly people who had been in neighbourhoods that were victimised by subprime loans, suffered aggressive foreclosure.

There was a fury that was building and it was building on a justified basis that the party had not been responsive to a series of really, I think, clearly understood community needs and demands.

You mentioned the constituencies, the working class. One of the discussions that we had in other episodes of this series was: is there still a coherent working class and what does that mean? For instance, if you compare the socio-economic position of, say, skilled workers who now have a pretty good wage with that of, say, cleaners somewhere, is there still some kind of working class identity or is this actually fraying?

It’s certainly the case that working class is a shorthand, which has a certain dated quality to it, for sure, but it’s also the case that, since the mid-1970s in the US, the industrial working class represented by powerful trade unions has diminished dramatically, in particular in the regions of the country which constituted the manufacturing belt that was built up from, let’s say, the 1900s into the 1950s.

There has been a terrific change in the economic structure of the country and it has diminished the membership, power and influence of the trade unions. No question about that. The concept of working class now does span a bifurcated community… There’s certainly still manufacturing activity and some of it is really quite well paid and it’s certainly better to be a manufacturing worker than to be in the low-wage services sector.

Figuring out how to appeal broadly to those constituencies and to constituencies that lie on a lower level of income than the established professional classes is the challenge. That challenge was met, pretty effectively, by the Sanders campaign in 2016. What Bernie Sanders was proposing was the $15 minimum wage and universal health insurance and debt-free access to higher education plus progressive income taxes and a structural reform of the banking sector.

Those things stitch together some strongly felt needs particularly amongst younger people and that was, I think, why the Sanders campaign took off. People grasped that this was not an unlimited laundry list of ideas. It was a select and focused set, which Sanders advanced and repeated in a very disciplined way over the course of the campaign and so it was young people who rallied to that campaign. That does suggest that there is a policy agenda that could form the basis for the Democratic Party of the future.

by James K. Galbraith, Social Europe |  Read more:
Image: uncredited
[ed. This and other links at Politics 101.]

Fitz and the Tantrums / Ed Sheeran / Lia Kim x May J Lee Choreography



Repost

The Secret Lives of Students Who Mine Cryptocurrency in Their Dorm Rooms

Mark was a sophomore at MIT in Cambridge, Massachusetts, when he began mining cryptocurrencies more or less by accident.

In November 2016, he stumbled on NiceHash, an online marketplace for individuals to mine cryptocurrency for willing buyers. His desktop computer, boosted with a graphics card, was enough to get started. Thinking he might make some money, Mark, who asked not to use his last name, downloaded the platform’s mining software and began mining for random buyers in exchange for payments in bitcoin. Within a few weeks, he had earned back the $120 cost of his graphics card, as well as enough to buy another for $200.

From using NiceHash, he switched to mining ether, then the most popular bitcoin alternative. To increase his computational power, he scrounged up several unwanted desktop computers from a professor who “seemed to think that they were awful and totally trash.” When equipped with the right graphics cards, the “trash” computers worked fine.

Each time Mark mined enough ether to cover the cost, he bought a new graphics card, trading leftover currency into bitcoin for safekeeping. By March 2017, he was running seven computers, mining ether around the clock from his dorm room. By September his profits totaled one bitcoin—worth roughly $4,500 at the time. Now, four months later, after bitcoin’s wild run and the diversification of his cryptocoin portfolio, Mark estimates he has $20,000 in digital cash. “It just kind of blew up,” he says.

Exploiting a crucial competitive advantage and motivated by profit and a desire to learn the technology, students around the world are launching cryptocurrency mining operations right from their dorm rooms. In a typical mining operation, electricity consumption accounts for the highest fraction of operational costs, which is why the largest bitcoin mines are based in China. But within Mark’s dorm room, MIT foots the bill. That gives him and other student miners the ability to earn higher profit margins than most other individual miners.
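
A rough back-of-the-envelope sketch in Python shows why free electricity matters so much; the wattage, power price, and monthly revenue below are assumed, illustrative figures, not numbers reported in the article.

    # Illustrative mining margin: one GPU running around the clock.
    watts = 200                                   # assumed card draw
    kwh_per_month = watts / 1000 * 24 * 30        # ~144 kWh
    power_bill = kwh_per_month * 0.12             # assumed $0.12/kWh grid rate, ~$17
    revenue = 60.0                                # hypothetical monthly ether mined
    print(f"margin at home: ${revenue - power_bill:.2f}, in a dorm: ${revenue:.2f}")

With the power bill shifted onto the university, the entire (hypothetical) $60 is margin, which is the "crucial competitive advantage" described above.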

In the months since meeting Mark, I’ve interviewed seven other miners from the US, Canada, and Singapore who ran or currently run dorm room cryptomining operations, and I’ve learned of many more who do the same. Initially, almost every student began mining because it was fun, cost-free, and even profitable. As their operations grew, so did their interest in cryptocurrency and in blockchain, the underlying technology. Mining, in other words, was an unexpected gateway into discovering a technology that many predict will dramatically transform our lives.  (...)

A dorm room operation

Years before meeting Mark, when I was a junior at MIT, I had heard rumors of my peers mining bitcoin. After its value exploded, and along with it, the necessary computational and electrical power to mine it, I assumed that dorm room mining was no longer viable. What I hadn’t considered was the option of mining alternate cryptocurrencies, including ethereum, which can and do thrive as small-scale operations.

When mining for cryptocurrency, computational power, along with low power costs, is king. Miners around the world compete to solve math problems for a chance to earn digital coins. The more computational power you have, the greater your chances of getting returns.

To profitably mine bitcoin today, you need an application-specific integrated circuit, or ASIC—specialized hardware designed for bitcoin-mining efficiency. An ASIC can have 100,000 times more computational power than a standard desktop computer equipped with a few graphics cards. But ASICs are expensive—the most productive ones easily cost several thousand dollars—and they suck power. And if bitcoin prices aren’t high enough for mining revenue to cover the cost of electricity, the pricey hardware cannot be repurposed for any other function.

In contrast, alternate currencies like ethereum are “ASIC-resistant,” because ASICs designed to mine ether don’t exist. That means ether can be profitably mined with just a personal computer. Rather than rely solely on a computer’s core processor (colloquially called a “CPU”), however, miners pair it with graphics cards (“GPUs”) to increase the available computational power. Whereas CPUs are designed to solve one problem at a time, GPUs are designed to simultaneously solve hundreds. The latter dramatically raises the chances of getting coins.
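
To make the "solve math problems" description concrete, here is a toy proof-of-work loop in Python. It is only a sketch of the general hash-lottery idea, not Ethereum's actual Ethash algorithm (which is deliberately memory-hard, the property that keeps GPUs competitive and ASICs out); the block data and difficulty value are arbitrary assumptions.

    import hashlib
    import itertools

    def mine(block_data, difficulty):
        # Try nonces until the SHA-256 digest starts with `difficulty` zero hex digits.
        target = "0" * difficulty
        for nonce in itertools.count():
            digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
            if digest.startswith(target):
                return nonce, digest

    nonce, digest = mine("example block header", 5)
    print(nonce, digest)

More hashes per second means more lottery tickets per second, which is why miners stack GPUs (or, for bitcoin, ASICs) rather than rely on a CPU alone.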

by Karen Hao, Quartz |  Read more:
Image: rebcenter-moscow/Pixabay

William-Adolphe Bouguereau, The song of angels (1881)
via:

Of All the Blogs in the World, He Walks Into Mine

A man born to an Orthodox Jewish family in Toronto and schooled at a Yeshiva and a Japanese-American man raised on the island of Oahu, Hawaii, were married in the rare books section of the Strand Bookstore in Greenwich Village before a crowd of 200 people, against a backdrop of an arch of gold balloons that were connected to each other like intertwined units of a necklace chain or the link emoji, in a ceremony led by a Buddhist that included an operatic performance by one friend, the reading of an original poem based on the tweets of Yoko Ono by another, and a lip-synced rendition of Whitney Houston’s “I Will Always Love You” by a drag queen dressed in a white fringe jumper and a long veil.

The grooms met on the internet. But this isn’t a story about people who swiped right.

Adam J. Kurtz, 29, and Mitchell Kuga, 30, first connected Dec. 1, 2012, five years to the day before their wedding.

It was just before 5 p.m. and Mr. Kurtz, living in the Williamsburg section of Brooklyn, ordered a pizza. As one does, when one is 24 and living amid a generation of creative people whose every utterance and experience might be thought of as content, Mr. Kurtz filmed and posted to Tumblr a 10-minute video showing him awaiting the delivery.

Among those who liked the video was a stranger Mr. Kurtz had already admired from afar. It was a guy named Mitchell who didn’t reveal his last name on his Tumblr account, just his photographic eye for Brooklyn street scenes and, on occasion, his face. Mr. Kurtz had developed a bit of a social-media crush on him. “I would think, ‘He’s not even sharing his whole life, that is so smart and impressive,’” Mr. Kurtz said. (...)

When they met, they both were relatively new to New York. Mr. Kuga had moved to the city from Oahu in 2010, after having studied magazine journalism at Syracuse University, from which he graduated in 2009. He is a freelance journalist who has written for Next Magazine and for Gothamist, including an article about Spam (the food product, not the digital menace).

Mr. Kurtz graduated from the University of Maryland, Baltimore County in 2009 and moved to New York in 2012 to work as a graphic artist. He was always creative and enjoyed making crafts with bits and bobs of paper he had saved, ticket stubs and back-of-the-envelope doodles.

He began to build a large social media following, particularly on Instagram, of those who enjoyed his wry humor in celebrating paper culture through digital media, as well as the witty items he began to sell online (like little heart-shaped Valentine’s Day candies that say, “RT 4 YES, FAV 4 NO” and “REBLOG ME”).

by Katherine Rosman, NY Times |  Read more:
Image: Rebecca Smeyne
[ed. Gay, straight, sideways... this just hurts my brain.]

Saturday, January 6, 2018

The Real Future of Work

In 2013, Diana Borland and 129 of her colleagues filed into an auditorium at the University of Pittsburgh Medical Center. Borland had worked there for the past 13 years as a medical transcriptionist, typing up doctors’ audio recordings into written reports. The hospital occasionally held meetings in the auditorium, so it seemed like any other morning.

The news she heard came as a shock: A UPMC representative stood in front of the group and told them their jobs were being outsourced to a contractor in Massachusetts. The representative told them it wouldn’t be a big change, since the contractor, Nuance Communications, would rehire them all for the exact same position and the same hourly pay. There would just be a different name on their paychecks.

Borland soon learned that this wasn’t quite true. Nuance would pay her the same hourly rate—but for only the first three months. After that, she’d be paid according to her production, 6 cents for each line she transcribed. If she and her co-workers passed up the new offer, they couldn’t collect unemployment insurance, so Borland took the deal. But after the three-month transition period, her pay fell off a cliff. As a UPMC employee, she had earned $19 per hour, enough to support a solidly middle-class life. Her first paycheck at the per-line rate worked out to just $6.36 per hour—below the minimum wage.

“I thought they made a mistake,” she said. “But when I asked the company, they said, ‘That’s your paycheck.’”

Borland quit not long after. At the time, she was 48, with four kids ranging in age from 9 to 24. She referred to herself as retired and didn’t hold a job for the next two years. Her husband, a medical technician, told her that “you need to be well for your kids and me.” But early retirement didn’t work out. The family struggled financially. Two years ago, when the rival Allegheny General Hospital recruited her for a transcriptionist position, she took the job. To this day, she remains furious about UPMC’s treatment of her and her colleagues.

“The bottom line was UPMC was going to do what they were going to do,” she said. “They don’t care about what anybody thinks or how it affects any family.” UPMC, reached by email, said the outsourcing was a way to save the transcriptionists’ jobs as the demand for transcriptionists fell.

It worked out for her former employer: In the four years since the outsourcing, UPMC’s net income has more than doubled.

What happened to Borland and her co-workers may not be as dramatic as being replaced by a robot, or having your job exported to a customer service center in Bangalore. But it is part of a shift that may be even more historic and important—and has been largely ignored by lawmakers in Washington. Over the past two decades, the U.S. labor market has undergone a quiet transformation, as companies increasingly forgo full-time employees and fill positions with independent contractors, on-call workers or temps—what economists have called “alternative work arrangements” or the “contingent workforce.” Most Americans still work in traditional jobs, but these new arrangements are growing—and the pace appears to be picking up. From 2005 to 2015, according to the best available estimate, the number of people in alternative work arrangements grew by 9 million and now represents roughly 16 percent of all U.S. workers, while the number of traditional employees declined by 400,000. A perhaps more striking way to put it is that during those 10 years, all net job growth in the American economy has been in contingent jobs.

Around Washington, politicians often talk about this shift in terms of the so-called gig economy. But those startling numbers have little to do with the rise of Uber, TaskRabbit and other “disruptive” new-economy startups. Such firms actually make up a small share of the contingent workforce. The shift that came for Borland is part of something much deeper and longer, touching everything from janitors and housekeepers to lawyers and professors.

“This problem is not new,” said Senator Sherrod Brown of Ohio, one of the few lawmakers who has proposed a comprehensive plan on federal labor law reform. “But it’s being talked about as if it’s new.”

The repercussions go far beyond the wages and hours of individuals. In America, more than any other developed country, jobs are the basis for a whole suite of social guarantees meant to ensure a stable life. Workplace protections like the minimum wage and overtime, as well as key benefits like health insurance and pensions, are built on the basic assumption of a full-time job with an employer. As that relationship crumbles, millions of hardworking Americans find themselves ejected from that implicit pact. For many employees, their new status as “independent contractor” gives them no guarantee of earning the minimum wage or health insurance. For Borland, a new full-time job left her in the same chair but without a livable income.

In Washington, especially on Capitol Hill, there’s not much talk about this shift in the labor market, much less movement toward solutions. Lawmakers attend conference after conference on the “Future of Work” at which Republicans praise new companies like Uber and TaskRabbit for giving workers more flexibility in their jobs, and Democrats argue that those companies are simply finding new ways to skirt federal labor law. They all warn about automation and worry that robots could replace humans in the workplace. But there’s actually not much evidence that the future of work is going to be jobless. Instead, it’s likely to look like a new labor market in which millions of Americans have lost their job security and most of the benefits that accompanied work in the 20th century, with nothing to replace them.

by Danny Vinik, Politico |  Read more:
Image: Chris Gash

Jackson Pollock
via:

Lawrence Wheeler
via:

Mick and Keith
via:

What “Affordable Housing” Really Means

When people — specifically market urbanists versus regulation fans — argue about housing affordability on the internet it seems to me that the two groups are using the concept of "affordable" in different ways.
  1. In one usage, the goal of improving affordability is to make it possible for more people to share in the economic dynamism of a growing, high-income city like Seattle.
  2. In the other usage, the goal of improving affordability is to reduce (or slow the rise of) average rents in an economically dynamic, high-income city like Seattle.
These are both things that a reasonable person could be interested in. But since they are different things, different policies will impact them.

The first definition is what market urbanists are talking about. I live in a neighborhood of Washington, DC, that's walkable to much of the central business district, has good transit assets, and though predominantly poor in the very recent past has now become expensive (i.e., it's gentrifying).

If the city changed the zoning to allow for denser construction, the number of housing units available in the neighborhood would increase and thus (essentially by definition) the number of people who are able to afford to live there would go up.

What's not entirely clear is whether a development boom would reduce prices in the neighborhood. I think it's pretty clear that on some scale, "more supply equals lower prices" is true. The extra residents don't materialize out of thin air, after all, so there must be somewhere that demand is eased as a result of the increased development.

But skeptics are correct to note that the actual geography of the price impact is going to depend on a huge array of factors and there are no guarantees here. In particular, there's no guarantee that incumbent low-income residents will be more able to stay in place under a high-development regime than a low-development one.

To accomplish the goals of (2), you really do need regulation — either traditional rent control or some newfangled inclusionary zoning or what have you.

But — critically — (2) doesn't accomplish (1). If you're concerned that we are locking millions of Americans out of economic opportunity by making it impossible for thriving, high-wage metro areas to grow their housing stock rapidly, then simply reducing the pace of rent increases in those areas won't do anything to help. Indeed, there's some possibility that it might hurt by further constraining overall housing supply.

by Matthew Yglesias, Vox | Read more:
Image: Shutterstock

Our Cloud-Centric Future Depends on Open Source Chips

When a Doctor First Handed Me Opioids

On a sunny September morning in 2012, my wife and I returned to our apartment from walking our eldest daughter to her first day of kindergarten. When we entered our home, in the Washington, DC, suburb of Greenbelt, Maryland, I immediately felt that something was off. My Xbox 360 and Playstation 3 were missing.

My wife ran to the bedroom, where drawers were open, clothing haphazardly strewn about. It was less than a minute before a wave of terror washed over me: My work backpack was gone. Inside that bag were notebooks and my ID for getting into work at NBC News Radio, where I was an editor. But the most important item in my life was in that bag: my prescription bottle of Oxycodone tablets.

“I can’t believe this happened to us,” my wife said.

“They took my pills,” I said.

We repeated those lines to each other over and over, my wife slowly growing annoyed with me. Why didn’t I feel the same sense of violation? Why wasn’t I more upset about the break-in? Oh, but I was. Because they took my pills. The game consoles, the few dollars and cheap jewelry they stole would all be replaced. But my pills! They took my fucking pills!

We had to call the police. Not because of the break-in but, rather, so I could have a police report to show my doctor. That was all I could think about. My pills.

How did I become this person? How did I get to a place where the most important thing in my life was a round, white pill of opiate pleasure?
***
Before 2010, I only had taken opiates a few times. In 2007, I went to the emergency room in my hometown of Cleveland, Ohio, because I could not stop vomiting from abdominal pain. Upon my discharge, I was given 15 Percocets, 5 milligrams each. I took them as prescribed, noticed that they made me feel happy, and never gave them another thought.

After I took a reporter job in Orlando, I began to get sick more frequently, requiring several visits to the ER for abdominal pain and vomiting. In September of 2008 I was diagnosed with Crohn’s, an inflammatory bowel disease, and put on a powerful chemotherapy drug called Remicade to quell the symptoms. My primary care doctor, knowing I was in pain, prescribed me Percocet every month. I took them as needed, or whenever I needed a pick-me-up at work. I shared a few with a coworker from time to time. We’d take them, and 20 minutes later, start giggling at each other. I never totally ran out—never took them that often. I never needed an early refill.

In March of 2010, I was hired as the news director of a radio station in Madison, Wisconsin. Before we moved, my doctor in Orlando wrote me a Percocet script for 90 pills to bridge the gap until my new insurance kicked in in Wisconsin—approximately three months’ worth. I went through them in four weeks. I spent about a week feeling like I had the flu and then recovered, never once realizing that I was experiencing opiate withdrawal for the first time. Soon after, I set up my primary and GI care with my new insurance, and went back to my one-to-two-pills-per-day Percocet prescription, along with a continuation of my Remicade treatment.

Two months later, while my wife and daughter were visiting family in Cleveland, I developed concerning symptoms. My joints were swollen, I couldn’t bend my elbows, I was dizzy. I went to the ER, where for two days the doctors performed all sorts of tests as my symptoms worsened. Eventually, the rheumatologist diagnosed me with drug-induced Lupus from the Remicade. I was prescribed 60 Percocets upon leaving the hospital.

When I went back to my GI doc four weeks later for a refill, he told me he was uncomfortable prescribing pain medication, so he referred me to a pain clinic. I told the physician there how I would get cramps, sharp pains that would sometimes lead to vomiting. Did it hurt when I drove over bumps, or when bending over? Yes, sometimes. I left with a prescription for Oxycodone, with instructions to take one pill every three to four hours. My initial script was for 120 pills. I felt like I hit the jackpot.

by Anonymous, Mother Jones |  Read more:
Image: PeopleImages/Getty

Friday, January 5, 2018

Nine-Enders

You’re Most Likely to Do Something Extreme Right Before You Turn 30... or 40, or 50, or 60...

Red Hong Yi ran her first marathon when she was 29 years old. Jeremy Medding ran his when he was 39. Cindy Bishop ran her first marathon at age 49, Andy Morozovsky at age 59.

All four of them were what the social psychologists Adam Alter and Hal Hershfield call “nine-enders,” people in the last year of a life decade. They each pushed themselves to do something at ages 29, 39, 49, and 59 that they didn’t do, didn’t even consider, at ages 28, 38, 48, and 58—and didn’t do again when they turned 30, 40, 50, or 60.

Of all the axioms describing how life works, few are sturdier than this: Timing is everything. Our lives present a never-ending stream of “when” decisions—when to schedule a class, change careers, get serious about a person or a project, or train for a grueling footrace. Yet most of our choices emanate from a steamy bog of intuition and guesswork. Timing, we believe, is an art.

In fact, timing is a science. For example, researchers have shown that time of day explains about 20 percent of the variance in human performance on cognitive tasks. Anesthesia errors in hospitals are four times more likely at 3 p.m. than at 9 a.m. Schoolchildren who take standardized tests in the afternoon score considerably lower than those who take the same tests in the morning; researchers have found that for every hour after 8 a.m. that Danish public-school students take a test, the effect on their scores is equivalent to missing two weeks of school.

Other researchers have found that we use “temporal landmarks” to wipe away previous bad behavior and make a fresh start, which is why you’re more likely to go to the gym in the month following your birthday than the month before.  (...)

For example, to run a marathon, participants must register with race organizers and include their age. Alter and Hershfield found that nine-enders are overrepresented among first-time marathoners by a whopping 48 percent. Across the entire lifespan, the age at which people were most likely to run their first marathon was 29. Twenty-nine-year-olds were about twice as likely to run a marathon as 28-year-olds or 30-year-olds.

Meanwhile, first-time marathon participation declines in the early 40s but spikes dramatically at age 49. Someone who’s 49 is about three times more likely to run a marathon than someone who’s just a year older.

What’s more, nearing the end of a decade seems to quicken a runner’s pace—or at least motivates them to train harder. People who had run multiple marathons posted better times at ages 29 and 39 than during the two years before or after those ages.

The energizing effect of the end of a decade doesn’t make logical sense to the marathon-running scientist Morozovsky. “Keeping track of our age? The Earth doesn’t care. But people do, because we have short lives. We keep track to see how we’re doing,” he told me. “I wanted to accomplish this physical challenge before I hit 60. I just did.” For Yi, the artist, the sight of that chronological mile marker roused her motivation. “As I was approaching the big three-o, I had to really achieve something in my 29th year,” she said. “I didn’t want that last year just to slip by.”

However, flipping life’s odometer to a nine doesn’t always trigger healthy behavior. Alter and Hershfield also discovered that “the suicide rate was higher among nine-enders than among people whose ages ended in any other digit.” So, apparently, was the propensity of men to cheat on their wives. On the extramarital-affair website Ashley Madison, nearly one in eight men were 29, 39, 49, or 59, about 18 percent higher than chance would predict.

“People are more apt to evaluate their lives as a chronological decade ends than they are at other times,” Alter and Hershfield explain. “Nine-enders are particularly preoccupied with aging and meaningfulness, which is linked to a rise in behaviors that suggest a search for or crisis of meaning.”

by Daniel H. Pink, The Atlantic |  Read more:
Image: Mike Segar, Reuters

Politics 101

Thursday, January 4, 2018