Tuesday, January 9, 2018

Fifty Psychological and Psychiatric Terms to Avoid

Abstract

The goal of this article is to promote clear thinking and clear writing among students and teachers of psychological science by curbing terminological misinformation and confusion. To this end, we present a provisional list of 50 commonly used terms in psychology, psychiatry, and allied fields that should be avoided, or at most used sparingly and with explicit caveats. We provide corrective information for students, instructors, and researchers regarding these terms, which we organize for expository purposes into five categories: inaccurate or misleading terms, frequently misused terms, ambiguous terms, oxymorons, and pleonasms. For each term, we (a) explain why it is problematic, (b) delineate one or more examples of its misuse, and (c) when pertinent, offer recommendations for preferable terms. By being more judicious in their use of terminology, psychologists and psychiatrists can foster clearer thinking in their students and the field at large regarding mental phenomena. (...)

Inaccurate or Misleading Terms

(1) A gene for. The news media is awash in reports of identifying “genes for” a myriad of phenotypes, including personality traits, mental illnesses, homosexuality, and political attitudes (Sapolsky, 1997). For example, in 2010, The Telegraph (2010) trumpeted the headline, “‘Liberal gene’ discovered by scientists.” Nevertheless, because genes code for proteins, there are no “genes for” phenotypes per se, including behavioral phenotypes (Falk, 2014). Moreover, genome-wide association studies of major psychiatric disorders, such as schizophrenia and bipolar disorder, suggest that there are probably few or no genes of major effect (Kendler, 2005). In this respect, these disorders are unlike single-gene medical disorders, such as Huntington’s disease or cystic fibrosis. The same conclusion probably holds for all personality traits (De Moor et al., 2012).

Not surprisingly, early claims that the monoamine oxidase-A (MAO-A) gene is a “warrior gene” (McDermott et al., 2009) have not withstood scrutiny. This polymorphism appears to be only modestly associated with risk for aggression, and it has been reported to be associated with conditions that are not tied to a markedly heightened risk of aggression, such as major depression, panic disorder, and autism spectrum disorder (Buckholtz and Meyer-Lindenberg, 2013; Ficks and Waldman, 2014). The evidence for a “God gene,” which supposedly predisposes people to mystical or spiritual experiences, is arguably even less impressive (Shermer, 2015) and no more compelling than that for a “God spot” in the brain (see “God spot”). Incidentally, the term “gene” should not be confused with the term “allele”; genes are stretches of DNA that code for a given morphological or behavioral characteristic, whereas alleles are differing versions of a specific polymorphism in a gene (Pashley, 1994).

(2) Antidepressant medication. Medications such as tricyclics, selective serotonin reuptake inhibitors, and selective serotonin and norepinephrine reuptake inhibitors, are routinely called “antidepressants.” Yet there is little evidence that these medications are more efficacious for treating (or preventing relapse for) mood disorders than for several other conditions, such as anxiety-related disorders (e.g., panic disorder, obsessive-compulsive disorder; Donovan et al., 2010) or bulimia nervosa (Tortorella et al., 2014). Hence, their specificity to depression is doubtful, and their name derives more from historical precedence—the initial evidence for their efficacy stemmed from research on depression (France et al., 2007)—than from scientific evidence. Moreover, some authors argue that these medications are considerably less efficacious than commonly claimed, and are beneficial for only severe, but not mild or moderate, depression, rendering the label of “antidepressant” potentially misleading (Antonuccio and Healy, 2012; but see Kramer, 2011, for an alternative view).

(3) Autism epidemic. Enormous effort has been expended to uncover the sources of the “autism epidemic” (e.g., King, 2011), the supposed massive increase in the incidence and prevalence of autism, now termed autism spectrum disorder, over the past 25 years. The causal factors posited to be implicated in this “epidemic” have included vaccines, television viewing, dietary allergies, antibiotics, and viruses.

Nevertheless, there is meager evidence that this purported epidemic reflects a genuine increase in the rates of autism per se as opposed to an increase in autism diagnoses stemming from several biases and artifacts, including heightened societal awareness of the features of autism (“detection bias”), growing incentives for school districts to report autism diagnoses, and a lowering of the diagnostic thresholds for autism across successive editions of the Diagnostic and Statistical Manual of Mental Disorders (Gernsbacher et al., 2005; Lilienfeld and Arkowitz, 2007). Indeed, data indicate that when the diagnostic criteria for autism were held constant, the rates of this disorder remained essentially constant between 1990 and 2010 (Baxter et al., 2015). If the rates of autism are increasing, the increase would appear to be slight at best, hardly justifying the widespread claim of an “epidemic.”

(4) Brain region X lights up. Many authors in the popular and academic literatures use such phrases as “brain area X lit up following manipulation Y” (e.g., Morin, 2011). This phrase is unfortunate for several reasons. First, the bright red and orange colors seen on functional brain imaging scans are superimposed by researchers to reflect regions of higher brain activation. Nevertheless, they may engender a perception of “illumination” in viewers. Second, the activations represented by these colors do not reflect neural activity per se; they reflect oxygen uptake by neurons and are at best indirect proxies of brain activity. Even then, this linkage may sometimes be unclear or perhaps absent (Ekstrom, 2010). Third, in almost all cases, the activations observed on brain scans are the products of subtraction of one experimental condition from another. Hence, they typically do not reflect the raw levels of neural activation in response to an experimental manipulation. For this reason, referring to a brain region that displays little or no activation in response to an experimental manipulation as a “dead zone” (e.g., Lamont, 2008) is similarly misleading. Fourth, depending on the neurotransmitters released and the brain areas in which they are released, the regions that are “activated” in a brain scan may actually be being inhibited rather than excited (Satel and Lilienfeld, 2013). Hence, from a functional perspective, these areas may be being “lit down” rather than “lit up.”

(5) Brainwashing. This term, which originated during the Korean War (Hunter, 1951) but which is still invoked uncritically from time to time in the academic literature (e.g., Ventegodt et al., 2009; Kluft, 2011), implies that powerful individuals wishing to persuade others can capitalize on a unique armamentarium of coercive procedures to change their long-term attitudes. Nevertheless, the attitude-change techniques used by so-called “brainwashers” are no different than standard persuasive methods identified by social psychologists, such as encouraging commitment to goals, manufacturing source credibility, forging an illusion of group consensus, and vivid testimonials (Zimbardo, 1997). Furthermore, there are ample reasons to doubt whether “brainwashing” permanently alters beliefs (Melton, 1999). For example, during the Korean War, only a small minority of the 3500 American political prisoners subjected to intense indoctrination techniques by Chinese captors generated false confessions. Moreover, an even smaller number (probably under 1%) displayed any signs of adherence to Communist ideologies following their return to the US, and even these were individuals who returned to Communist subcultures (Spanos, 1996).

(6) Bystander apathy. The classic work of Darley and Latane (e.g., Darley and Latane, 1968; Latane and Rodin, 1969) underscored the counterintuitive point that when it comes to emergencies, there is rarely “safety in numbers.” As this and subsequent research demonstrated, the more people present at an emergency, the lower the likelihood that the victim will receive help. In early research, this phenomenon was called “bystander apathy” (Latane and Darley, 1969), a term that endures in many academic articles (e.g., Abbate et al., 2013). Nevertheless, research demonstrates that most bystanders are far from apathetic in emergencies (Glassman and Hadad, 2008). To the contrary, they are typically quite concerned about the victim, but are psychologically “frozen” by well-established psychological processes, such as pluralistic ignorance, diffusion of responsibility, and sheer fears of appearing foolish.

(7) Chemical imbalance. Thanks in part to the success of direct-to-consumer marketing campaigns by drug companies, the notion that major depression and allied disorders are caused by a “chemical imbalance” of neurotransmitters, such as serotonin and norepinephrine, has become a virtual truism in the eyes of the public (France et al., 2007; Deacon and Baird, 2009). This phrase even crops up in some academic sources; for example, one author wrote that one overarching framework for conceptualizing mental illness is a “biophysical model that posits a chemical imbalance” (Wheeler, 2011, p. 151). Nevertheless, the evidence for the chemical imbalance model is at best slim (Lacasse and Leo, 2005; Leo and Lacasse, 2008). One prominent psychiatrist even dubbed it an urban legend (Pies, 2011). There is no known “optimal” level of neurotransmitters in the brain, so it is unclear what would constitute an “imbalance.” Nor is there evidence for an optimal ratio among different neurotransmitter levels. Moreover, although serotonin reuptake inhibitors, such as fluoxetine (Prozac) and sertraline (Zoloft), appear to alleviate the symptoms of severe depression, there is evidence that at least one serotonin reuptake enhancer, namely tianeptine (Stablon), is also efficacious for depression (Akiki, 2014). The fact that two efficacious classes of medications exert opposing effects on serotonin levels raises questions concerning a simplistic chemical imbalance model.

by Scott O. Lilienfeld, Katheryn C. Sauvigné, Steven Jay Lynn, Robin L. Cautin, Robert D. Latzman, and Irwin D. Waldman, Frontiers in Psychology |  Read more:
Image: Frontiers in Psychology

Retail Investors Now True Believers with Record Exposure

As far as the stock market is concerned, it took a while – in fact, it took eight years – but retail investors are finally all in, bristling with enthusiasm. TD Ameritrade’s Investor Movement Index rose to 8.59 in December, a new record. TDA’s clients were net buyers for the 11th month in a row, one of the longest buying streaks, and ended up with more exposure to the stock market than ever before in the history of the index.

This came after a blistering November, when the index had jumped 15%, “its largest single-month increase ever,” as TDA reported at the time, to 8.53, also a record:


Note how retail investors had been to varying degrees among the naysayers from the end of the Financial Crisis till the end of 2016, before they suddenly became true believers in February 2017.

“I don’t think the investors who are engaging regularly are doing so in a dangerous fashion,” said TDA Chief Market Strategist JJ Kinahan in an interview. But he added, clients at the beginning of 2017 were “up to their knees in it and then up to their thighs, and now up to their chests.”

The implication is that they could get in a little deeper before they’d drown.

“As the year went on, people got more confident,” he said. And despite major geopolitical issues, “the market was never tested at all” last year. There was this “buy-the-dip mentality” every time the market dipped 1% or 2%.

But one of his “bigger fears” this year is this very buy-the-dip mentality, he said. People buy when the market goes down 1% or 2%, and “it goes down 5%, then it goes down 8% — and they turn into sellers, and then they get an exponential move to the downside.”

In addition to some of the big names in the US – Amazon, Microsoft, Bank of America, etc. – TDA’s clients were “believers” in Chinese online retail and were big buyers of Alibaba and Tencent. But they were sellers of dividend stocks AT&T and Verizon as the yield on two-year Treasuries rose to nearly 2%, offering a risk-free alternative at comparable yields.

And he added, with an eye out for this year: “It’s hard to believe that the market can go up unchallenged.”

This enthusiasm by retail investors confirms the surge in margin debt – a measure of stock market leverage and risk – which has been jumping from record to record, and hit a new high of $581 billion, up 16% from a year earlier.

And as MarketWatch reported, “cash balances for Charles Schwab clients reached their lowest level on record in the third quarter, according to Morgan Stanley, which wrote that retail investors ‘can’t stay away’ from stocks,” while the stock allocation index by the American Association of Individual Investors “jumped to 72%, its highest level since 2000…” as “retail investors – according to a Deutsche Bank analysis of consumer sentiment data – view the current environment as ‘the best time ever to invest in the market.’”

by Wolf Richter, Wolf Street |  Read more:
Image: TD Ameritrade
[ed. What could go wrong?]

Your Next Obsession: Retro Japanese Video Game Art


I am obsessed with something new in the world of design. Well, actually, something quite old. Specifically, late 90s and early 2000s Japanese video game art. And also, video game ads. And also, photos of old video game hardware. I am knee-deep in gaming nostalgia.

A lot of the art I’ve become fascinated with is a particular aesthetic born around the fourth generation of video gaming (spanning from the 16-bit boom of the PC Engine / TurboGrafx-16 and Sega Genesis, through to the original PlayStation, Sega Saturn, and the Dreamcast). One which blends hand-drawn art and lettering, dramatic typography, highly technical layouts, and colorful, sometimes cartoonish patterns.

Design, like fashion, moves in cycles, and we’re starting to see a new wave of Japanese game art in pop design. You can see it in the Richard Turley-led Wieden + Kennedy rebranding of the Formula One logo / design language (heavy, heavy shades of Wipeout) or in the varied styles of Australian artist Jonathan Zawada.

Cory Schmitz — a designer who’s worked on projects like the Oculus Rift rebranding and logo design for the game Shadow of the Colossus — has been assembling many of the best examples of the original era on his Tumblr, QuickQuick. I reached out to him to ask about what he was drawn to in this particular style: “As a designer this stuff is super inspirational because it’s so different from current design trends. A lot of unexpected colors, type, and compositions. And I really like the weird sense of nostalgia I get from stuff I haven’t necessarily seen before.” It’s Cory’s curation you’ll see a lot of in the card stack here.

As we move away from the Web 2.0 / Apple mandate of clean, orderly, sterile design, into a more playful, experimental, artistic phase (hello Dropbox redesign), this particular style of art feels like an obvious meeting point born out of a desire for orderly information delivery and a more primal need for some degree of controlled chaos. Mostly, though, it just looks really fucking cool.

by Joshua Topolsky, The Outline | Read more:
Image: Ian Anderson, Designers Republic

Monday, January 8, 2018


Tom Gauld
via:

Image: Angela Weiss/AFP via Getty
via:
[ed. My dream girl.]

Fight Me, Psychologists: Birth Order Effects Exist and Are Very Strong

“Birth order” refers to whether a child is the oldest, second-oldest, youngest, etc. in their family. For a while, pop psychologists created a whole industry around telling people how their birth order affected their personality: oldest children are more conservative, youngest children are more creative, etc.

Then people got around to actually studying it and couldn’t find any of that. Wikipedia’s birth order article says:
Claims that birth order affects human psychology are prevalent in family literature, but studies find such effects to be vanishingly small….the largest multi-study research suggests zero or near-zero effects. Birth-order theory has the characteristics of a zombie theory, as despite disconfirmation, it continues to have a strong presence in pop psychology and popular culture.
I ought to be totally in favor of getting this debunked. After all, the replication crisis in psychology highlights the need to remain skeptical of poorly-supported theories. And some of the seminal work disproving birth order was done by Judith Rich Harris, an intellectual hero of mine who profoundly shaped my worldview with her book The Nurture Assumption.

So I regret to have to inform you that birth order effects are totally a real thing.

I first started thinking this at transhumanist meetups, when it would occasionally come up that everyone there was an oldest child. The pattern was noticeable enough that I included questions about birth order on the latest SSC survey. This blog deals with a lot of issues around transhumanism, futurology, rationality, et cetera, so I thought it would attract the same kind of people.

7,248 people gave me enough information to calculate their birth order, but I am very paranoid, because previous studies have gone wrong by failing to account for family size. That is, people of certain economic classes/religions/races/whatever tend to have larger family sizes, and if you’re in a large family, you’re more likely to be a later-born child. In order to be absolutely sure I wasn’t making this mistake, I concentrated on within-family-size analyses. For example, there were 2965 respondents with exactly one sibling…

…and a full 2118 of those were the older of the two. That’s 71.4%. p ≤ 0.00000001. (...)
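To see just how lopsided that split is, here is a minimal sanity check (a sketch, not necessarily the exact analysis behind the quoted p-value), assuming the natural null hypothesis that among respondents with exactly one sibling, being the older or the younger is a fair coin flip:

from scipy.stats import binom

n, k = 2965, 2118                      # two-sibling respondents; number who were the older
print(k / n)                           # ~0.714
p_one_sided = binom.sf(k - 1, n, 0.5)  # P(X >= k) under the 50/50 null
print(2 * p_one_sided)                 # two-sided p-value, far below 1e-8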

So what is going on here?

It’s unlikely that age alone is driving these results. In sibships of two, older siblings on average were only about one year older than younger siblings. That can’t explain why one group reads this blog so much more often than the other.

And all of the traditional pop psychology claims about birth order don’t seem to hold up. I didn’t find any effect on anything that could be reasonably considered conservatism or rebelliousness.

But there is at least one reputable study that did find a few personality differences. This is Rohrer et al (2015), which examined a battery of personality traits and found birth order effects only on IQ and Openness to Experience, both very small.

I was only partly able to replicate this work. Rohrer et al found that eldest siblings had an advantage of about 1.5 IQ points. My study found the same: 1.3 to 1.7 IQ points depending on family size – but because of the sample size this did not achieve significance. (...)

The Openness results were clearer. Eldest children had significantly higher Openness (73rd %ile vs. 69th %ile, p = 0.001). Like Rohrer, I found no difference in any of the other Big Five traits.

Because I only had one blunt measure of Openness, I couldn’t do as detailed an analysis as Rohrer’s team. But they went on to subdivide Openness into two subcomponents, Intellect and Imagination, and found birth order only affected Intellect. They sort of blew Intellect off as just “self-estimated IQ”, but I don’t think this is right. Looking at it more broadly, it seems to be a measure of intellectual curiosity – for example, one of the questions they asked was, “I am someone who is eager for knowledge”. Educational Testing Service describes it as “liking complex problems”, and its opposite as “avoiding philosophical discussion”.

This seems promising. If older siblings were more likely to enjoy complex philosophical discussion, that would help explain why they are so much more likely to read a blog about science and current events. Unfortunately, the scale is completely wrong. Rohrer et al’s effects are tiny – going from a firstborn to a secondborn has an effect size of 0.1 SD on Intellect. In order to contain 71.6% firstborns, this blog would have to select for people above the 99.99999999th percentile in Intellect. There are only 0.8 people at that level in the world, so no existing group is that heavily selected.

I think the most likely explanation is that tests for Openness have limited validity, which makes the correlation look smaller than it really is. If being an eldest sibling increases true underlying Openness by a lot, but your score on psychometric tests for Openness only correlates modestly with true underlying Openness, that would look like being an eldest sibling only increasing test-measured-Openness a little bit.

(cf. Riemann and Kandler (2010), which finds that the heritability of Openness shoots way up if you do a better job assessing it)

If we suppose that birth order has a moderate effect size on intellectual curiosity of 0.5 SD, that would imply that science blogs select for people in the top 3% or so of intellectual curiosity, a much more reasonable number. Positing higher (but still within the range of plausibility) effect sizes would decrease the necessary filtering even further.
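As a rough way to play with these numbers, here is a sketch of that selection argument under simplifying assumptions (not the post’s exact calculation): the trait is normal with SD 1 in both groups, firstborn and laterborn means differ by d, base rates are equal as in two-child families, and “readers” are everyone above a cutoff. The outputs depend entirely on those assumptions, so treat them as illustrative:

from scipy.stats import norm
from scipy.optimize import brentq

def firstborn_share(t, d):
    # expected share of firstborns among people whose trait value exceeds t
    return norm.sf(t - d) / (norm.sf(t - d) + norm.sf(t))

def required_cutoff(share, d):
    # cutoff t (in SD units) that would produce a given firstborn share
    return brentq(lambda t: firstborn_share(t, d) - share, -10, 30)

for d in (0.1, 0.5):                   # Rohrer-sized effect vs. a "moderate" one
    t = required_cutoff(0.716, d)      # the observed firstborn share
    frac_selected = 0.5 * (norm.sf(t - d) + norm.sf(t))
    print(d, round(t, 2), frac_selected)   # cutoff in SD units; fraction of people above it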

If this is right, it suggests Rohrer et al undersold their conclusion. Their bottom line was something like “birth order effects may exist for a few traits, but are too small to matter”. I agree they may only exist for a few traits, but they can be strong enough to skew ratios in some heavily-selected communities like this one.

When I asked around about this, a couple of people brought up further evidence. Liam Clegg pointed out that philosophy professor Michael Sandel asks his students to raise their hand if they’re the oldest in their family, and usually gets about 80% of the class. And Julia Rohrer herself was kind enough to add her voice and say that:
I’m not up to fight you because I think you might be onto something real here. Just to throw in my own anecdotal data: The topic of birth order effect comes up quite frequently when I chat with people in academic contexts, and more often than not (~80% of the time), the other person turns out to be firstborn. Of course, this could be biased by firstborns being more comfortable bringing up the topic given that they’re supposedly smarter, and it’s only anecdotes. Nonetheless, it sometimes makes me wonder whether we are missing something about the whole birth order story.
But why would eldest siblings have more intellectual curiosity? There are many good just-so stories, like parents having more time to read to them as children. But these demand strong effects of parenting on children’s later life outcomes, of exactly the sort that behavioral genetic studies consistently find not to exist. An alternate hypothesis could bring in weird immune stuff, like that thing where people with more older brothers are more likely to be gay because of maternal immunoreactivity to the Y chromosome (which my survey replicates, by the way). But this is a huge stretch and I don’t even know if people are sure this explains the homosexuality results, let alone the birth order ones.

If mainstream psychology becomes convinced this effect exists, I hope they’ll start doing the necessary next steps. This would involve seeing if biological siblings matter more or less than adopted siblings, whether there’s a difference between paternal and maternal half-siblings, how sibling age gaps work into this, and whether only children are more like oldests or youngests. Their reward would be finding some variable affecting children’s inherent intellectual curiosity – one that might offer opportunities for intervention.

by Scott Alexander, Slate Star Codex |  Read more:
Image: Emily
[ed. I participated in this survey. Also a firstborn in my family.]

Who Cares About Inequality?

Lloyd Blankfein is worried about inequality. The CEO of Goldman Sachs—that American Almighty, who swindled the economy and walked off scot-free—sees new “divisions” in the country. “Too much,” Blankfein lamented in 2014, “has gone to too few people.”

Charles Koch is worried, too. Another great American plutocrat—shepherd of an empire that rakes in $115 billion and spits out $200 million in campaign contributions each year—decried in 2015 the “welfare for the rich” and the formation of a “permanent underclass.” “We’re headed for a two-tiered society,” Koch warned.

Their observations join a chorus of anti-inequality advocacy among the global elite. The World Bank called inequality a “powerful threat to global progress.” The International Monetary Fund claimed it was “not a recipe for stability and sustainability”—threat-level red for the IMF. And the World Economic Forum, gathered together at Davos last year, described inequality as the single greatest global threat.

It is a stunning consensus. In Zuccotti Park, the cry of the 99% was an indictment. To acknowledge the existence of the super-rich was to incite class warfare. Not so today. Ted Cruz, whom the Kochs have described as a ‘hero’, railed against an economy where wealthy Americans “have gotten fat and happy.” He did so on Fox News.

What the hell is happening here? Why do so many rich people care so much about inequality? And why now?

The timing of the elite embrace of the anti-inequality agenda presents a puzzle precisely because it is so long overdue.

For decades, political economists have struggled to understand why inequality has remained uncontested all this time. Their workhorse game theoretic model, developed in the early 1980s by Allan Meltzer and Scott Richard, predicts that democracies respond to an increase in inequality with an increase in top-rate taxation—a rational response of the so-called ‘median voter.’
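For readers unfamiliar with the model, here is a stylized version of its logic (a simplification, not Meltzer and Richard’s exact specification). With a flat tax $t$ rebated as a lump sum and a convex deadweight loss $\ell(t)$, a voter with income $y_i$ consumes

$$ c_i(t) = (1 - t)\,y_i + \bigl(t - \ell(t)\bigr)\,\bar{y}, $$

and the first-order condition $\ell'(t) = (\bar{y} - y_i)/\bar{y}$ pins down that voter’s preferred tax rate. The decisive voter is the one with median income $y_m$, so the chosen tax rate rises as the gap between mean income $\bar{y}$ and median income $y_m$ widens, i.e. as inequality grows.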

And yet, the relationship simply does not hold in the real world. On the contrary, in the United States, we find its literal inverse: amid record high inequality, one of the largest tax cuts in history. This inverted relationship is known as the Robin Hood Paradox.

One explanation of this paradox is the invisibility of the super-rich. On the one hand, they hide in their enclaves: the hills, the Hamptons, Dubai, the Bahamas. In the olden days, the poor were forced to bear witness to royal riches, standing roadside as the chariot moved through town. Today, they live behind high walls in gated communities and private islands. Their wealth is obscured from view, stashed offshore and away from the tax collector. This is wealth as exclusion.

On the other, they hide among us. As Rachel Sherman has recently argued, conspicuous consumption is out of fashion, displaced by an encroaching “moral stigma of privilege” that won’t let the wealthy just live. Not long ago, the rich felt comfortable riding down broad boulevards in stretch limousines and fur coats. Today, they remove price tags from their groceries and complain about making ends meet. This is wealth as assimilation.

The result is a general misconception about the scale of inequality in America. According to one recent study, Americans tend to think that the ratio of CEO compensation to average income is 30-to-1. The actual figure is 350-to-1.

Yet this is only a partial explanation of the Robin Hood Paradox. It is an appealing theory, but I find it doubtful that any public revelation of elite lifestyles would drive these elites to call for reform. It would seem a difficult case to make after the country elected to its highest office a man who lives in a golden penthouse of a skyscraper bearing his own name in the middle of the most expensive part of America’s most expensive city.

“I love all people,” President Trump promised at a rally last June. “But for these posts”—the posts in his cabinet—“I just don’t want a poor person.” The crowd cheered loudly. The state of play of the American pitchfork is determined in large part by this very worldview—and the three myths about the rich and poor that sustain it.

The first is the myth of the undeserving poor. American attitudes to inequality are deeply informed by our conception of the poor as lazy. In Why Americans Hate Welfare, Martin Gilens examines the contrast between Americans’ broad support for social spending and narrow support for actually existing welfare programs. The explanation, Gilens argues, is that Americans view the poor as scroungers—a view forged by racial representations of welfare recipients in our media.

In contrast—and this is the second myth—Americans believe in the possibility of their own upward mobility. Even if they are not rich today, they will be rich tomorrow. And even if they are not rich tomorrow, their children will be rich the next day. In a recent survey experiment, respondents overestimated social mobility in the United States by over 20%. It turns out that the overestimation is quite easy to provoke: researchers simply had to remind the participants of their own ‘talents’ in order to boost their perceptions of class mobility. Such a carrot of wealth accumulation has been shown to exert a downward pressure on Americans’ preferences for top-rate taxation.

But the third myth, and perhaps most important, concerns the wealthy. For many years, this was called trickle-down economics. Inequality was unthreatening because of our faith that the wealth at the top would—some way or another—reach the bottom. The economic science was questionable, but cultural memories lingered around a model of paternalistic capitalism that suggested its truth. The old titans of industry laid railroads, made cars, extracted oil. Company towns sprouted across the country, where good capitalists took care of good workers.

But the myth of trickling wealth has become difficult to sustain. Over the last half-century, while productivity has soared, average wages among American workers have grown by just 0.2% each year—while those at the very top grew 138%. Only half of Republicans still believe that trimming taxes for the rich leads to greater wealth for the general population. Only 13% of Democrats do.

Declining faith in trickle-down economics, however, does not necessarily imply declining reverence for the wealthy. 43% of Americans today still believe that the rich are more intelligent than the average American, compared to just 8% that believe they are less. 42% of Americans still believe that the rich are more hardworking than the average, compared to just 24% that believe they are less.

It would seem, therefore, that the trickle-down myth has been displaced by another, perhaps more obstinate myth of the 1% innovator.

The 1% innovator is a visionary: with his billions, he dreams up new and exciting ideas for the twenty-first century. Steve Jobs was one; Elon Musk is another. Their money is not idle—it is fodder for that imagination. As the public sector commitment to futurist innovation has waned—as NASA, for example, has shrunk and shriveled—his role has become even more important. Who else will take us to Mars?

The reality, of course, is that our capitalists are anything but innovative. They’re not even paternal. In fact, they are not really capitalists at all. They are mostly rentiers: rather than generate wealth, they simply extract it from the economy. Consider the rapid rise in real estate investment among the super-rich. Since the financial crash, a toxic mix of historically low interest rates and sluggish growth has encouraged international investors to turn toward the property market, which promises to deliver steady if moderate returns. Among the Forbes 400 “self-made” billionaires, real estate ranks third. Investments and technology—two other rentier industries—rank first and second, respectively.

But the myth of the 1% innovator is fundamental to the politics of inequality, because it suspends public demands for wealth taxation. If the innovators are hard at work, and they need all that capital to design and bring to life the consumer goodies that we enjoy, then we should hold off on serious tax reform and hear them out. Or worse: we should cheer on their wealth accumulation, waiting for the next, more expensive rabbit to be pulled from the hat. The revolt from below can be postponed until tomorrow or the next day.

All together, the enduring strength of these myths only serves to deepen the puzzle of elite anti-inequality advocacy. Why the sudden change of heart? Why not keep promoting the myths and playing down the scale of the “two-tiered society” that Charles Koch today decries?

The unfortunate answer, I believe, is that inequality has simply become bad economics.

by David Adler, Current Affairs | Read more:
Image: uncredited

Sunday, January 7, 2018


Rafael Araujo
via:

Dude, You Broke the Future!

Abstract: We're living in yesterday's future, and it's nothing like the speculations of our authors and film/TV producers. As a working science fiction novelist, I take a professional interest in how we get predictions about the future wrong, and why, so that I can avoid repeating the same mistakes. Science fiction is written by people embedded within a society with expectations and political assumptions that bias us towards looking at the shiny surface of new technologies rather than asking how human beings will use them, and to taking narratives of progress at face value rather than asking what hidden agenda they serve.

In this talk, author Charles Stross will give a rambling, discursive, and angry tour of what went wrong with the 21st century, why we didn't see it coming, where we can expect it to go next, and a few suggestions for what to do about it if we don't like it.


Good morning. I'm Charlie Stross, and it's my job to tell lies for money. Or rather, I write science fiction, much of it about our near future, which has in recent years become ridiculously hard to predict.

Our species, Homo Sapiens Sapiens, is roughly three hundred thousand years old. (Recent discoveries pushed back the date of our earliest remains that far; we may be even older.) For all but the last three centuries of that span, predicting the future was easy: natural disasters aside, everyday life in fifty years’ time would resemble everyday life fifty years ago.

Let that sink in for a moment: for 99.9% of human existence, the future was static. Then something happened, and the future began to change, increasingly rapidly, until we get to the present day when things are moving so fast that it's barely possible to anticipate trends from month to month.

As an eminent computer scientist once remarked, computer science is no more about computers than astronomy is about building telescopes. The same can be said of my field of work, written science fiction. Scifi is seldom about science—and even more rarely about predicting the future. But sometimes we dabble in futurism, and lately it's gotten very difficult.

How to predict the near future

When I write a near-future work of fiction, one set, say, a decade hence, there used to be a recipe that worked eerily well. Simply put, 90% of the next decade's stuff is already here today. Buildings are designed to last many years. Automobiles have a design life of about a decade, so half the cars on the road will probably still be around in 2027. People ... there will be new faces, aged ten and under, and some older people will have died, but most adults will still be around, albeit older and grayer. This is the 90% of the near future that's already here.

After the already-here 90%, another 9% of the future a decade hence used to be easily predictable. You look at trends dictated by physical limits, such as Moore's Law, and you look at Intel's road map, and you use a bit of creative extrapolation, and you won't go too far wrong. If I predict that in 2027 LTE cellular phones will be everywhere, 5G will be available for high bandwidth applications, and fallback to satellite data service will be available at a price, you won't laugh at me. It's not like I'm predicting that airliners will fly slower and Nazis will take over the United States, is it?

And therein lies the problem: it's the 1% of unknown unknowns that throws off all calculations. As it happens, airliners today are slower than they were in the 1970s, and don't get me started about Nazis. Nobody in 2007 was expecting a Nazi revival in 2017, right? (Only this time round Germans get to be the good guys.)

My recipe for fiction set ten years in the future used to be 90% already-here, 9% not-here-yet but predictable, and 1% who-ordered-that. But unfortunately the ratios have changed. I think we're now down to maybe 80% already-here—climate change takes a huge toll on infrastructure—then 15% not-here-yet but predictable, and a whopping 5% of utterly unpredictable deep craziness.

Ruling out the singularity

Some of you might assume that, as the author of books like "Singularity Sky" and "Accelerando", I attribute this to an impending technological singularity, to our development of self-improving artificial intelligence and mind uploading and the whole wish-list of transhumanist aspirations promoted by the likes of Ray Kurzweil. Unfortunately this isn't the case. I think transhumanism is a warmed-over Christian heresy. While its adherents tend to be vehement atheists, they can't quite escape from the history that gave rise to our current western civilization. Many of you are familiar with design patterns, an approach to software engineering that focusses on abstraction and simplification in order to promote reusable code. When you look at the AI singularity as a narrative, and identify the numerous places in the story where the phrase "... and then a miracle happens" occurs, it becomes apparent pretty quickly that they've reinvented Christianity.

Indeed, the wellsprings of today's transhumanists draw on a long, rich history of Russian Cosmist philosophy exemplified by the Russian Orthodox theologian Nikolai Fyodorovich Fedorov, by way of his disciple Konstantin Tsiolkovsky, whose derivation of the rocket equation makes him essentially the father of modern spaceflight. And once you start probing the nether regions of transhumanist thought and run into concepts like Roko's Basilisk—by the way, any of you who didn't know about the Basilisk before are now doomed to an eternity in AI hell—you realize they've mangled it to match some of the nastiest ideas in Presbyterian Protestantism.

If it walks like a duck and quacks like a duck, it's probably a duck. And if it looks like a religion it's probably a religion. I don't see much evidence for human-like, self-directed artificial intelligences coming along any time now, and a fair bit of evidence that nobody except some freaks in university cognitive science departments even want it. What we're getting, instead, is self-optimizing tools that defy human comprehension but are not, in fact, any more like our kind of intelligence than a Boeing 737 is like a seagull. So I'm going to wash my hands of the singularity as an explanatory model without further ado—I'm one of those vehement atheists too—and try and come up with a better model for what's happening to us.

Towards a better model for the future

As my fellow SF author Ken MacLeod likes to say, the secret weapon of science fiction is history. History, loosely speaking, is the written record of what and how people did things in past times—times that have slipped out of our personal memories. We science fiction writers tend to treat history as a giant toy chest to raid whenever we feel like telling a story. With a little bit of history it's really easy to whip up an entertaining yarn about a galactic empire that mirrors the development and decline of the Hapsburg Empire, or to re-spin the October Revolution as a tale of how Mars got its independence.

But history is useful for so much more than that.

It turns out that our personal memories don't span very much time at all. I'm 53, and I barely remember the 1960s. I only remember the 1970s with the eyes of a 6-16 year old. My father, who died last year aged 93, just about remembered the 1930s. Only those of my father's generation are able to directly remember the great depression and compare it to the 2007/08 global financial crisis directly. But westerners tend to pay little attention to cautionary tales told by ninety-somethings. We modern, change-obsessed humans tend to repeat our biggest social mistakes when they slip out of living memory, which means they recur on a time scale of seventy to a hundred years.

So if our personal memories are useless, it's time for us to look for a better cognitive toolkit.

History gives us the perspective to see what went wrong in the past, and to look for patterns, and check whether those patterns apply to the present and near future. And looking in particular at the history of the past 200-400 years—the age of increasingly rapid change—one glaringly obvious deviation from the norm of the preceding three thousand centuries is the development of Artificial Intelligence, which happened no earlier than 1553 and no later than 1844.

I'm talking about the very old, very slow AIs we call corporations, of course. What lessons from the history of the company can we draw that tell us about the likely behaviour of the type of artificial intelligence we are all interested in today?

Old, slow AI

Let me crib from Wikipedia for a moment:

In the late 18th century, Stewart Kyd, the author of the first treatise on corporate law in English, defined a corporation as:
a collection of many individuals united into one body, under a special denomination, having perpetual succession under an artificial form, and vested, by policy of the law, with the capacity of acting, in several respects, as an individual, particularly of taking and granting property, of contracting obligations, and of suing and being sued, of enjoying privileges and immunities in common, and of exercising a variety of political rights, more or less extensive, according to the design of its institution, or the powers conferred upon it, either at the time of its creation, or at any subsequent period of its existence.
—A Treatise on the Law of Corporations, Stewart Kyd (1793-1794)

In 1844, the British government passed the Joint Stock Companies Act, which created a register of companies and allowed any legal person, for a fee, to register a company, which existed as a separate legal person. Subsequently, the law was extended to limit the liability of individual shareholders in event of business failure, and both Germany and the United States added their own unique extensions to what we see today as the doctrine of corporate personhood.

(Of course, there were plenty of other things happening between the sixteenth and twenty-first centuries that changed the shape of the world we live in. I've skipped changes in agricultural productivity due to energy economics, which finally broke the Malthusian trap our predecessors lived in. This in turn broke the long term cap on economic growth of around 0.1% per year in the absence of famine, plagues, and wars depopulating territories and making way for colonial invaders. I've skipped the germ theory of diseases, and the development of trade empires in the age of sail and gunpowder that were made possible by advances in accurate time-measurement. I've skipped the rise and—hopefully—decline of the pernicious theory of scientific racism that underpinned western colonialism and the slave trade. I've skipped the rise of feminism, the ideological position that women are human beings rather than property, and the decline of patriarchy. I've skipped the whole of the Enlightenment and the age of revolutions! But this is a technocentric congress, so I want to frame this talk in terms of AI, which we all like to think we understand.)

Here's the thing about corporations: they're clearly artificial, but legally they're people. They have goals, and operate in pursuit of these goals. And they have a natural life cycle. In the 1950s, a typical US corporation on the S&P 500 index had a lifespan of 60 years, but today it's down to less than 20 years.

Corporations are cannibals; they consume one another. They are also hive superorganisms, like bees or ants. For their first century and a half they relied entirely on human employees for their internal operation, although they are automating their business processes increasingly rapidly this century. Each human is only retained so long as they can perform their assigned tasks, and can be replaced with another human, much as the cells in our own bodies are functionally interchangeable (and a group of cells can, in extremis, often be replaced by a prosthesis). To some extent corporations can be trained to service the personal desires of their chief executives, but even CEOs can be dispensed with if their activities damage the corporation, as Harvey Weinstein found out a couple of months ago.

Finally, our legal environment today has been tailored for the convenience of corporate persons, rather than human persons, to the point where our governments now mimic corporations in many of their internal structures.

by Charlie Stross, Charlie's Diary |  Read more:
Image: via 

Roy Lichtenstein
via:

Kokee Lodge
photo: markk

The US Democratic Party After The Election Of Donald Trump

In your view, what is the historic position of the Democrats in the US political system and where do they currently stand?

The Democrats have undergone an evolution over the course of their history. It’s the oldest political party in the United States and, just to summarize the late 20th century very briefly, it was the party of the New Deal, of the New Frontier of John F Kennedy, of the Great Society of Lyndon Johnson. Over the most recent 30-year period, it has become somewhat different from that: a party of third-way centrism with what I think we identify in Europe as a moderately neo-liberal agenda but, in the United States, strongly associated with the financial sector.

Now it’s facing a crisis of that particular policy orientation, which is largely discredited and does not have a broad popular base. This is the meaning of the Sanders campaign and the strong appeal of that campaign in 2016 to younger voters suggests that the future of the Democratic Party, so far as its popular appeal is concerned, lies in a different direction, one that really encompasses substantially more dramatic proposals for change and reform and renovation.

In coming to the structure of a SWOT analysis, where would you identify the strengths and weaknesses of the Democrats today?

The strengths are evident in the fact that the party retains a strong position on the two coasts and the weaknesses are evident in the fact that it doesn’t have a strong position practically anywhere else. The polarisation works very much to the disadvantage of the Democratic Party because the US constitutional system gives extra weight to small states, to rural areas, and the control of those states also means that the Republican Party has gained control of the House of Representatives.

The Democratic Party has failed to maintain a national base of political organisation and has become a party that is largely responsive to a reasonably affluent, socially progressive, professional class and that is not a winning constituency in US national elections. That’s not to say that they might not win some given the alternative at any given time but the position is by no means strong structurally or organisationally.

When it comes to the opportunities and threats that the party is facing, a threat is obviously what happened in the last election with the rise of Donald Trump. How would you frame this in the context of the Democratic Party? Going forward, where do you think there are opportunities?


Up until this most recent election, the Democrats had won the presidential contest in a series of Midwestern and upper Midwestern states on a consistent basis since the 1980s. If one looked at Michigan and Wisconsin and Pennsylvania, Ohio a little less so but Minnesota, certainly, this was known as the Blue Wall. It was a set of states the Democrats felt they had a structurally sound position in.

It was clear, particularly since the global crisis in 2007-2009 and the recession that followed, that that position had eroded because it was rooted in manufacturing jobs and organised labour and those jobs were disappearing after the crisis at an accelerated rate and this process was concentrated in those states. Trump saw this and took advantage of it.

The Clinton campaign, which was deeply rooted in the bi-coastal elites that dominated the Democratic Party, failed to see it adequately, failed to take steps that might counter it, failed to appeal to those constituencies and, in fact, treated them with a certain amount of distance if not disdain. It was something that could easily be interpreted as disdain in the way in which they scheduled their campaign.

She never went to Wisconsin, for example, and in certain comments that she made and the way in which she identified the core constituencies of her campaign, she really did not reach out to these communities. Trump, as he said himself, saw the anger and took advantage of it and that was the story of the election.

Hillary Clinton did win the popular vote by a very substantial margin, mainly because she had an overwhelming advantage in the state of California, but that was 4 million extra votes that made no difference to the outcome whereas, in these upper Midwestern states, a few tens of thousands of votes were decisive and it was Trump who was able to walk away with the electoral votes of those states.

Obviously, the threat or the challenge of populism, especially right-wing populism, is not unique to the United States. If you broaden the discussion a little bit, what would you recommend? How should progressive parties in the US and beyond react to the challenge that right-wing populism poses?

I dislike the term populism as a general purpose pejorative in politics because it tends to be used by members of the professional classes to describe political appeals to, let’s say, working class constituencies. Populism in the United States in the late 19th century was a farmer-labour movement. It was a movement of debtors against creditors and of easy money and silver advocates against gold advocates and that was the essence of it.

I find a lot to identify with in that tradition and so I’m not inclined to say dismissively that one should be opposed to populism. The Democratic Party’s problem is that it had a core in the New Deal liberal period that was rooted in the organised labour movement – the working class and trade unions. That has been structurally weakened by the deindustrialisation of large parts of the American economy and the party has failed to maintain a popular base.

It could have developed and maintained that base but, in many ways, chose not to do so. Why not? Because if one really invests power in a working class constituency, you have to give serious consideration to what people in that constituency want. It’s obvious that that would be in contradiction with the Democratic Party’s commitment in the ‘90s and noughties to free trade agreements, to use the most flagrant example.

It would require a much more, let’s say, real-world employment policy. It would require a responsiveness that was not there to the housing and foreclosure crisis after the recession. What happened in the period following the great financial crisis was particularly infuriating because everybody could see that the class of big bankers was bailed out and protected whereas people who were ordinary homeowners, particularly people who had been in neighbourhoods that were victimised with subprime loans, suffered aggressive foreclosure.

There was a fury that was building and it was building on a justified basis that the party had not been responsive to a series of really, I think, clearly understood community needs and demands.

You mentioned the constituencies, the working class, one of the discussions that we had in other episodes of this series was: is there still a coherent working class and what does that mean? For instance, if you compare the socio-economic position of, say, skilled workers who now have a pretty good wage compared to, say, cleaners somewhere, is there still some kind of working class identity or is this actually fraying?

There’s certainly the case that working class is a shorthand, which has a certain dated quality to it, for sure, but it’s certainly the case that, since the mid-1970s in the US, the industrial working class represented by powerful trade unions has diminished dramatically and, in particular, in the regions of the country which constituted the manufacturing belt that was built up from, let’s say, the 1900s into the 1950s.

There has been a terrific change in the economic structure of the country and it has diminished the membership, power and influence of the trade unions. No question about that. The concept of working class now does span a bifurcated community… There’s certainly still manufacturing activity and some of it is really quite well paid and it’s certainly better to be a manufacturing worker than to be in the low-wage services sector.

Figuring out how to appeal broadly to those constituencies and to constituencies that lie on a lower level of income than the established professional classes is the challenge. That challenge was met, pretty effectively, by the Sanders campaign in 2016. What Bernie Sanders was proposing was the $15 minimum wage and universal health insurance and debt-free access to higher education plus progressive income taxes and a structural reform of the banking sector.

Those things stitch together some strongly felt needs particularly amongst younger people and that was, I think, why the Sanders campaign took off. People grasped that this was not an unlimited laundry list of ideas. It was a select and focused set, which Sanders advanced and repeated in a very disciplined way over the course of the campaign and so it was young people who rallied to that campaign. That does suggest that there is a policy agenda that could form the basis for the Democratic Party of the future.

by James K. Galbraith, Social Europe |  Read more:
Image: uncredited
[ed. This and other links at Politics 101.]

Fitz and the Tantrums / Ed Sheeran / Lia Kim x May J Lee Choreography



Repost

The Secret Lives of Students Who Mine Cryptocurrency in Their Dorm Rooms

Mark was a sophomore at MIT in Cambridge, Massachusetts, when he began mining cryptocurrencies more or less by accident.

In November 2016, he stumbled on NiceHash, an online marketplace for individuals to mine cryptocurrency for willing buyers. His desktop computer, boosted with a graphics card, was enough to get started. Thinking he might make some money, Mark, who asked not to use his last name, downloaded the platform’s mining software and began mining for random buyers in exchange for payments in bitcoin. Within a few weeks, he had earned back the $120 cost of his graphics card, as well as enough to buy another for $200.

From using NiceHash, he switched to mining ether, then the most popular bitcoin alternative. To increase his computational power, he scrounged up several unwanted desktop computers from a professor who “seemed to think that they were awful and totally trash.” When equipped with the right graphics cards, the “trash” computers worked fine.

Each time Mark mined enough ether to cover the cost, he bought a new graphics card, trading leftover currency into bitcoin for safekeeping. By March 2017, he was running seven computers, mining ether around the clock from his dorm room. By September his profits totaled one bitcoin—worth roughly $4,500 at the time. Now, four months later, after bitcoin’s wild run and the diversification of his cryptocoin portfolio, Mark estimates he has $20,000 in digital cash. “It just kind of blew up,” he says.

Exploiting a crucial competitive advantage and motivated by profit and a desire to learn the technology, students around the world are launching cryptocurrency mining operations right from their dorm rooms. In a typical mining operation, electricity consumption accounts for the highest fraction of operational costs, which is why the largest bitcoin mines are based in China. But within Mark’s dorm room, MIT foots the bill. That gives him and other student miners the ability to earn higher profit margins than most other individual miners.

In the months since meeting Mark, I’ve interviewed seven other miners from the US, Canada, and Singapore who ran or currently run dorm room cryptomining operations, and I’ve learned of many more who do the same. Initially, almost every student began mining because it was fun, cost-free, and even profitable. As their operations grew, so did their interest in cryptocurrency and in blockchain, the underlying technology. Mining, in other words, was an unexpected gateway into discovering a technology that many predict will dramatically transform our lives.  (...)

A dorm room operation

Years before meeting Mark, when I was a junior at MIT, I had heard rumors of my peers mining bitcoin. After its value exploded, and with it the computational and electrical power needed to mine it, I assumed that dorm room mining was no longer viable. What I hadn’t considered was the option of mining alternate cryptocurrencies, including ethereum, which can and do thrive as small-scale operations.

When mining for cryptocurrency, computational power, along with low power costs, is king. Miners around the world compete to solve math problems for a chance to earn digital coins. The more computational power you have, the greater your chances of getting returns.
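To make the "math problem" concrete, here is a minimal Python sketch of a proof-of-work style search. It is only an illustration: real networks use chain-specific hash functions and compare hashes against a numeric difficulty target, and the function and parameter names below are invented for the example.

import hashlib

def mine(block_data: str, difficulty_zeros: int = 5) -> int:
    """Toy proof-of-work: find a nonce whose hash starts with enough zeros."""
    prefix = "0" * difficulty_zeros
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}:{nonce}".encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce  # the first miner to find a valid nonce claims the reward
        nonce += 1

print(mine("example block"))

The only way to find a valid nonce faster, on average, is to try more hashes per second, which is why raw computational power decides who tends to win.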

To profitably mine bitcoin today, you need an application-specific integrated circuit, or ASIC—specialized hardware designed for bitcoin-mining efficiency. An ASIC can have 100,000 times more computational power than a standard desktop computer equipped with a few graphics cards. But ASICs are expensive—the most productive ones easily cost several thousand dollars—and they suck power. And if bitcoin prices aren’t high enough for mining revenue to cover the electricity bill, the pricey hardware cannot be repurposed for any other function.
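The economics reduce to a simple comparison, sketched below in Python. Every number is made up purely for illustration; the yield, price, power draw, and electricity rate are assumptions, not figures from the article.

# Back-of-the-envelope mining profitability; every number here is illustrative.
coins_per_day = 0.0005          # assumed daily mining yield
coin_price_usd = 4500.0         # assumed market price
power_draw_kw = 1.2             # assumed power draw of the rig, in kilowatts
electricity_usd_per_kwh = 0.12  # assumed retail rate; effectively $0 in a dorm room

revenue = coins_per_day * coin_price_usd
power_cost = power_draw_kw * 24 * electricity_usd_per_kwh

print(f"daily revenue:    ${revenue:.2f}")
print(f"daily power bill: ${power_cost:.2f}")
print(f"daily margin:     ${revenue - power_cost:.2f}")

With numbers like these the margin is negative at retail electricity rates, but set the rate to zero, as in a dorm room where the school pays the bill, and everything above the hardware cost is profit.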

In contrast, alternate currencies like ethereum are “ASIC-resistant,” because ASICs designed to mine ether don’t exist. That means ether can be profitably mined with just a personal computer. Rather than rely solely on a computer’s core processor (commonly called a “CPU”), however, miners pair it with graphics cards (“GPUs”) to increase the available computational power. Whereas CPUs are designed to solve one problem at a time, GPUs are designed to simultaneously solve hundreds. The latter dramatically raises the chances of getting coins.
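Since a miner’s expected share of rewards is roughly proportional to their share of the network’s total hash rate, the CPU-versus-GPU gap translates directly into coins. The sketch below uses invented hash rates and issuance figures just to show the scaling.

# Expected rewards scale with your fraction of total network hash power.
# All of these rates are invented for illustration, not hardware benchmarks.
def expected_coins_per_day(my_hashrate, network_hashrate, coins_issued_per_day):
    return coins_issued_per_day * my_hashrate / network_hashrate

network_hashrate = 150e12   # assumed total network hash rate, hashes/second
coins_per_day = 20_000      # assumed coins issued per day

print(expected_coins_per_day(500e3, network_hashrate, coins_per_day))   # lone CPU: ~0.00007 coins/day
print(expected_coins_per_day(180e6, network_hashrate, coins_per_day))   # multi-GPU rig: ~0.024 coins/day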

by Karen Hao, Quartz |  Read more:
Image: rebcenter-moscow/Pixabay

William-Adolphe Bouguereau, Song of the Angels (1881)
via:

Of All the Blogs in the World, He Walks Into Mine

A man born to an Orthodox Jewish family in Toronto and schooled at a Yeshiva and a Japanese-American man raised on the island of Oahu, Hawaii, were married in the rare books section of the Strand Bookstore in Greenwich Village before a crowd of 200 people, against a backdrop of an arch of gold balloons that were connected to each other like intertwined units of a necklace chain or the link emoji, in a ceremony led by a Buddhist that included an operatic performance by one friend, the reading of an original poem based on the tweets of Yoko Ono by another, and a lip-synced rendition of Whitney Houston’s “I Will Always Love You” by a drag queen dressed in a white fringe jumper and a long veil.

The grooms met on the internet. But this isn’t a story about people who swiped right.

Adam J. Kurtz, 29, and Mitchell Kuga, 30, first connected Dec. 1, 2012, five years to the day before their wedding.

It was just before 5 p.m. and Mr. Kurtz, living in the Williamsburg section of Brooklyn, ordered a pizza. As one does, when one is 24 and living amid a generation of creative people whose every utterance and experience might be thought of as content, Mr. Kurtz filmed and posted to Tumblr a 10-minute video showing him awaiting the delivery.

Among those who liked the video was a stranger Mr. Kurtz had already admired from afar. It was a guy named Mitchell who didn’t reveal his last name on his Tumblr account, just his photographic eye for Brooklyn street scenes and, on occasion, his face. Mr. Kurtz had developed a bit of a social-media crush on him. “I would think, ‘He’s not even sharing his whole life, that is so smart and impressive,’” Mr. Kurtz said. (...)

When they met, they both were relatively new to New York. Mr. Kuga had moved to the city from Oahu in 2010, after having studied magazine journalism at Syracuse University, from which he graduated in 2009. He is a freelance journalist who has written for Next Magazine and for Gothamist, including an article about Spam (the food product, not the digital menace).

Mr. Kurtz graduated from the University of Maryland, Baltimore County in 2009 and moved to New York in 2012 to work as a graphic artist. He was always creative and enjoyed making crafts with bits and bobs of paper he had saved, ticket stubs and back-of-the-envelope doodles.

He began to build a large social media following, particularly on Instagram, of those who enjoyed his wry humor in celebrating paper culture through digital media, as well as the witty items he began to sell online (like little heart-shaped Valentine’s Day candies that say, “RT 4 YES, FAV 4 NO” and “REBLOG ME”).

by Katherine Rosman, NY Times |  Read more:
Image: Rebecca Smeyne
[ed. Gay, straight, sideways... this just hurts my brain.]

Saturday, January 6, 2018

The Real Future of Work

In 2013, Diana Borland and 129 of her colleagues filed into an auditorium at the University of Pittsburgh Medical Center. Borland had worked there for the past 13 years as a medical transcriptionist, typing up doctors’ audio recordings into written reports. The hospital occasionally held meetings in the auditorium, so it seemed like any other morning.

The news she heard came as a shock: A UPMC representative stood in front of the group and told them their jobs were being outsourced to a contractor in Massachusetts. The representative told them it wouldn’t be a big change, since the contractor, Nuance Communications, would rehire them all for the exact same position and the same hourly pay. There would just be a different name on their paychecks.

Borland soon learned that this wasn’t quite true. Nuance would pay her the same hourly rate—but for only the first three months. After that, she’d be paid according to her production, 6 cents for each line she transcribed. If she and her co-workers passed up the new offer, they couldn’t collect unemployment insurance, so Borland took the deal. But after the three-month transition period, her pay fell off a cliff. As a UPMC employee, she had earned $19 per hour, enough to support a solidly middle-class life. Her first paycheck at the per-line rate worked out to just $6.36 per hour—below the minimum wage.
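The scale of that cliff follows directly from the figures in the piece. The lines-per-hour numbers below are inferred from the stated rates; her actual typing pace isn’t reported.

per_line_rate = 0.06        # dollars per transcribed line (from the article)
old_hourly = 19.00          # former UPMC hourly wage (from the article)
first_check_hourly = 6.36   # effective hourly pay on the per-line rate (from the article)

implied_output = first_check_hourly / per_line_rate    # lines/hour she was actually producing
needed_to_match = old_hourly / per_line_rate           # lines/hour needed to match her old wage

print(f"implied output:         {implied_output:.0f} lines per hour")
print(f"needed to match $19/hr: {needed_to_match:.0f} lines per hour")

In other words, she would have had to roughly triple her output, to more than five lines a minute for every working hour, just to stand still.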

“I thought they made a mistake,” she said. “But when I asked the company, they said, ‘That’s your paycheck.’”

Borland quit not long after. At the time, she was 48, with four kids ranging in age from 9 to 24. She referred to herself as retired and didn’t hold a job for the next two years. Her husband, a medical technician, told her that “you need to be well for your kids and me.” But early retirement didn’t work out. The family struggled financially. Two years ago, when the rival Allegheny General Hospital recruited her for a transcriptionist position, she took the job. To this day, she remains furious about UPMC’s treatment of her and her colleagues.

“The bottom line was UPMC was going to do what they were going to do,” she said. “They don’t care about what anybody thinks or how it affects any family.” UPMC, reached by email, said the outsourcing was a way to save the transcriptionists’ jobs as the demand for transcriptionists fell.

It worked out for her former employer: In the four years since the outsourcing, UPMC’s net income has more than doubled.

What happened to Borland and her co-workers may not be as dramatic as being replaced by a robot, or having your job exported to a customer service center in Bangalore. But it is part of a shift that may be even more historic and important—and has been largely ignored by lawmakers in Washington. Over the past two decades, the U.S. labor market has undergone a quiet transformation, as companies increasingly forgo full-time employees and fill positions with independent contractors, on-call workers or temps—what economists have called “alternative work arrangements” or the “contingent workforce.” Most Americans still work in traditional jobs, but these new arrangements are growing—and the pace appears to be picking up. From 2005 to 2015, according to the best available estimate, the number of people in alternative work arrangements grew by 9 million and now represents roughly 16 percent of all U.S. workers, while the number of traditional employees declined by 400,000. A perhaps more striking way to put it is that during those 10 years, all net job growth in the American economy has been in contingent jobs.
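The “all net job growth” framing is just arithmetic on the two figures cited; the short check below uses only the numbers in the paragraph above.

contingent_growth = 9_000_000    # added to alternative work arrangements, 2005-2015 (from the article)
traditional_change = -400_000    # change in traditional employees over the same period (from the article)

net_growth = contingent_growth + traditional_change
print(f"net new jobs, 2005-2015:  {net_growth:,}")         # 8,600,000
print(f"added in contingent work: {contingent_growth:,}")  # exceeds the entire net gain

Because traditional employment actually shrank, contingent work accounts for more than the entire net gain.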

Around Washington, politicians often talk about this shift in terms of the so-called gig economy. But those startling numbers have little to do with the rise of Uber, TaskRabbit and other “disruptive” new-economy startups. Such firms actually make up a small share of the contingent workforce. The shift that came for Borland is part of something much deeper and longer, touching everything from janitors and housekeepers to lawyers and professors.

“This problem is not new,” said Senator Sherrod Brown of Ohio, one of the few lawmakers who has proposed a comprehensive plan on federal labor law reform. “But it’s being talked about as if it’s new.”

The repercussions go far beyond the wages and hours of individuals. In America, more than any other developed country, jobs are the basis for a whole suite of social guarantees meant to ensure a stable life. Workplace protections like the minimum wage and overtime, as well as key benefits like health insurance and pensions, are built on the basic assumption of a full-time job with an employer. As that relationship crumbles, millions of hardworking Americans find themselves ejected from that implicit pact. For many employees, their new status as “independent contractor” gives them no guarantee of earning the minimum wage or health insurance. For Borland, a new full-time job left her in the same chair but without a livable income.

In Washington, especially on Capitol Hill, there’s not much talk about this shift in the labor market, much less movement toward solutions. Lawmakers attend conference after conference on the “Future of Work” at which Republicans praise new companies like Uber and TaskRabbit for giving workers more flexibility in their jobs, and Democrats argue that those companies are simply finding new ways to skirt federal labor law. They all warn about automation and worry that robots could replace humans in the workplace. But there’s actually not much evidence that the future of work is going to be jobless. Instead, it’s likely to look like a new labor market in which millions of Americans have lost their job security and most of the benefits that accompanied work in the 20th century, with nothing to replace them.

by Danny Vinik, Politico |  Read more:
Image: Chris Gash

Jackson Pollock
via:

Lawrence Wheeler
via: