Wednesday, January 10, 2018

After Hours: Off-Peaking

Mr. Money Mustache is in his early 40s, and he has been retired for 12 years. “One of the key principles of Mustachianism,” begins a lofty 2013 post, “is that any and all lineups, queues, and other sardine-like collections of humans must be viewed with the squinty eyes of skepticism.” His blog explains that everything you have been taught about money and time is wrong. Mr. Money Mustache, once the subject of a New Yorker profile, worked as a software engineer and saved half of his salary from the age of 20, and his vision of time is that of an engineer: time becomes a machine that can be tinkered with, hours and minutes rewired to achieve a more elegant purpose. His primary message is that you will not achieve financial security and personal happiness by working harder to get ahead of the pack; you will find these things by carefully studying what the pack is doing and then doing the opposite.

A post entitled “A Peak Life is Lived Off-Peak” extols the virtues of doing everything at the wrong time. The Mustache family lives in Colorado, where everyone goes skiing on the weekends; Mr. Mustache recommends hitting the slopes on Tuesdays. The Mustaches drive through major cities between 10 in the morning and 4 in the afternoon. Thursday morning is for teaching robotics to his son, whom he homeschools; below-freezing nights in January are for moonlit walks. Holidays are to be taken only when everyone else is at work. “Most people spend most of their time doing what everyone else does, without giving it much thought,” Mr. Money Mustache writes. “And thus, it is usually very profitable to avoid doing what everyone else is doing.”

The Mustaches are not the only online evangelists for the off-peak lifestyle. In a post entitled “I Want You to Become an Off-Peak Person!” Peter Shankman, an entrepreneur who writes about turning his ADHD to his advantage, recommends grocery shopping at one in the morning. J.P. Livingston’s blog The Money Habit features photos of New York City that make it seem like a small town: a thinly populated subway, a near-empty museum. (The bins in time’s bargain basement seem to be overflowing with Tuesdays: train rides, drinks, meals, museum visits, and movies are cheaper when they happen on what is referred to in Canada as “Toonie Tuesdays” and in Australia as “Tight-Arse Tuesdays.”)

The thesis of off-peak evangelism is summed up by one of Mr. Mustache’s calls for a rejection of conformity: “In our natural state,” he writes, “we are supposed to be a diverse and individualistic species.” It is natural, he argues, for individual schedules to vary — why should we all expect to eat, sleep, work, and play in lockstep, like members of a militaristic cult? Standardized schedules create waste and clog infrastructure. Off-peak evangelism assigns a market value to individuality and diversity, treating them as mechanisms for repurposing humanity’s collective wasted time. While not a formalized movement, people who blog about off-peaking often seem to feel that they’ve discovered a secret too good to keep to themselves — something that was right in front of us the whole time, requiring only that we recognize our own power to choose.

Off-peaking is the closest thing to a Platonic form of subculture: its entire content is its opposition to the mainstream. As an economic approach, the solution off-peaking proposes can seem unkind — it’s a microcosm of the larger capitalist idea that it is right to profit from the captivity of others. And yet off-peakers only want, in effect, to slow time down by stretching the best parts of experience while wasting less. The arguments for off-peaking have centered on both the economic and the social advantages of recuperating unexploited time, like a form of temporal dumpster-diving that restores worth to low-demand goods. (...)

Taken at its most individualistic, it can seem that the idea of off-peaking is not to free everyone from the bonds of inefficiency, but to position oneself to take advantage of the unthinking conformity of others. Success depends upon continued brokenness, not on fixing what is broken — or at least, on fixing it only for oneself and a canny, self-selecting few. In this view, off-peaking is a miniaturized entrepreneurialism that exploits a wonky blip in the way slots of time are assigned value; a matter of identifying an arbitrage opportunity created by the system’s lack of self-awareness.

The comment sections of off-peakers’ blogs are, paradoxically, bustling: stories of going to bed at nine and waking up at four to ensure that the day is perfectly out of step; Legoland on Wednesdays in October; eating in restaurants as soon as they open rather than waiting for standard meal times. There’s a wealth of bargains to be had by juggling one’s calendar to take advantage of deals. (The app Ibotta, which tracks fluctuating prices on consumer goods popular with millennials, determined that Tuesdays are actually the worst days to buy rosé and kombucha; you should buy them on Wednesdays. Avocados are also cheapest on Wednesdays, while quinoa should be bought on Thursdays and hot sauce on Fridays.) Many posters write that they are considering changing professions or homeschooling their children to join the off-peakers.

Some off-peakers are motivated by savings, some by avoiding crowds, but off-peaking also offers a more abstract pleasure: the sheer delight in doing the unexpected. The gravitas attached to the seasons of life listed off in Ecclesiastes is echoed in the moral overtones attached to perceptions of what is appropriate for different hours of the day. It is wrong to laugh when everyone else is weeping or to embrace when everyone else is refraining from embracing. Ordinary activities become subversive when done at the wrong time: eating spaghetti for dinner is ordinary, but having linguini with clam sauce for breakfast breaks the unwritten rules. Once you start transgressing, it can be hard to stop: The arbitrariness of custom begins to chafe.

But off-peakers are generally not hoping to be completely solitary in their pursuits; most people don’t want to be the only person in their step-aerobics class at two in the afternoon. Instead, they want to be one among a smaller, more manageable group than urban cohorts tend to allow. Subcultures offer the pleasure of being different along with the pleasure of being the same; variation becomes a passport to acceptance. The two people who encounter one another at the aquarium on a Wednesday morning appear to have more in common than the two hundred people who see each other there on a weekend. Like other choices that divide people into subsets, off-peaking allows its adherents to discover a kinship that may or may not reveal a significant similarity in worldview.

by Linda Besner, Real Life |  Read more:
Image: Movie Theater, Los Angeles by Ed Freeman
[ed. The New Yorker link on Mr. Money Mustache is a great read in itself.]

The Breeders

The Strange Brands In Your Instagram Feed

It all started with an Instagram ad for a coat, the West Louis (TM) Business-Man Windproof Long Coat to be specific. It looked like a decent camel coat, not fancy but fine. And I’d been looking for one just that color, so when the ad touting the coat popped up and the price was in the double-digits, I figured: hey, a deal!

The brand, West Louis, seemed like another one of the small clothing companies that have me tagged in the vast Facebook-advertising ecosystem as someone who likes buying clothes: Faherty, Birdwell Beach Britches, Life After Denim, some wool underwear brand that claims I only need two pairs per week, sundry bootmakers.

Perhaps the copy on the West Louis site was a little much, claiming “West Louis is the perfection of modern gentlemen clothing,” but in a world where an oil company can claim to “fuel connections,” who was I to fault a small entrepreneur for some purple prose?

Several weeks later, the coat showed up in a black plastic bag emblazoned with the markings of China Post, that nation’s postal service. I tore it open and pulled out the coat. The material has the softness of a Las Vegas carpet and the rich sheen of a velour jumpsuit. The fabric is so synthetic, it could probably be refined into bunker fuel for a ship. It was, technically, the item I ordered, only shabbier than I expected in every aspect.

I went to the West Louis Instagram account and found 20 total posts, all made between June and October of 2017. Most are just pictures of clothes. A reverse image search makes it clear that the Business-Man Windproof Long Coat is sold throughout the world on a variety of retail websites. For another sweatshirt I purchased through Instagram, I tracked down no fewer than 15 shops selling the identical item. I bought mine from Thecuttedge.life, but I could have gotten it from Gonthwid, Hzijue, Romwe, HypeClothing, Manvestment, Ladae Picassa, or Kovfee. Each very lightly brands the sweatshirt as its own, but features identical pictures of a mustachioed, tattooed model. That a decent percentage of the brands are unpronounceable in English just adds to the covfefe of it all.

All these sites use a platform called Shopify, which is like the WordPress or Blogger of e-commerce, enabling completely turnkey online stores. It now has over 500,000 merchants, a number that’s grown 74 percent per year over the last five years. On the big shopping days around Thanksgiving, those merchants were doing $1 million in transactions per minute. And the “vast majority” of the stores on the service are small to medium-sized businesses, the company told me.

Shopify serves as the base layer for an emerging ecosystem that solders digital advertising through Facebook onto the world of Asian manufacturers and wholesalers who rep their companies on Alibaba and its foreigner-friendly counterpart, AliExpress.

It’s a fascinating new retail world, a mutation of globalized capitalism that’s been growing in the cracks of mainstream commerce.

Here’s how it works.

“What is up everybody?!” a fresh-faced man with messy brown hair shouts into the camera. Behind him, two computers sit open on a white desk in a white room. By the looks of him, he might not be an adult, but he has already learned to look directly into the camera when delivering the ever-appealing gospel of Easy Money on the Internet.

“In this challenge, I’m going to take a brand new Shopify store to over one thousand dollars,” he says. “So I invite you to follow along with me as I take this brand new store from 0, literally 0, to over one thousand dollars in the next 7 days.”

In the corner of YouTube dedicated to e-commerce, these videos are a bit of a phenomenon, racking up hundreds of thousands of views for highly detailed explanations of how to set up an e-commerce shop on the Internet.

Their star is Rory Ganon. Though his accent is Irish (“tousand”), his diction is pure LA YouTuber. He’s repetitive, makes quick cuts, and delivers every line with the conviction of youth. He appears to live in Ratoath, a small Irish commuter town about half an hour outside Dublin. His Facebook page describes him as a 17-year-old entrepreneur.

His success finding an audience seems predicated on the fact that when he says he’s going to show you everything, he really is going to show you everything. Like, you will watch his screen as he goes about setting up a store, so anyone can follow along at home. He’s a Bob Ross of e-commerce.

These techniques work the same for him as for Gucci. Some Instagram retailers are legit brands with employees and products. Others are simply middlemen for Chinese goods, built in bedrooms, and launched with no capital or inventory. All of them have been pulled into existence by the power of Instagram and Facebook ads combined with a suite of e-commerce tools based around Shopify.

The products don’t matter to the system, nor do they matter to Ganon. The whole idea of retail gets inverted in his videos. What he actually sells in his stores is secondary to how he does it. It’s as if he squirts hot dogs on his ketchup and mustard.

What Ganon does is pick suppliers he’ll never know to ship products he’ll never touch. All his effort goes into creating ads to capture prospective customers, and then optimizing a digital environment that encourages them to buy whatever piece of crap he’s put in front of them.

And he is not alone. (...)

Ganon’s videos are particularly fascinating in describing the mechanics of digital advertising through Instagram and Facebook.

In the tutorial, he briefly discusses finding a niche for the products in your store, and he uses some business-school PowerPoint terms. But when he actually selects a niche, it is Lions. That’s right: Lions, the animals.

by Alexis C. Madrigal, The Atlantic |  Read more:
Image: Alexis Madrigal

Tuesday, January 9, 2018

Tommy Guerrero


Brano Hlavac

via:

Fifty Psychological and Psychiatric Terms to Avoid

Abstract

The goal of this article is to promote clear thinking and clear writing among students and teachers of psychological science by curbing terminological misinformation and confusion. To this end, we present a provisional list of 50 commonly used terms in psychology, psychiatry, and allied fields that should be avoided, or at most used sparingly and with explicit caveats. We provide corrective information for students, instructors, and researchers regarding these terms, which we organize for expository purposes into five categories: inaccurate or misleading terms, frequently misused terms, ambiguous terms, oxymorons, and pleonasms. For each term, we (a) explain why it is problematic, (b) delineate one or more examples of its misuse, and (c) when pertinent, offer recommendations for preferable terms. By being more judicious in their use of terminology, psychologists and psychiatrists can foster clearer thinking in their students and the field at large regarding mental phenomena. (...)

Inaccurate or Misleading Terms

(1) A gene for. The news media is awash in reports of identifying “genes for” a myriad of phenotypes, including personality traits, mental illnesses, homosexuality, and political attitudes (Sapolsky, 1997). For example, in 2010, The Telegraph (2010) trumpeted the headline, “‘Liberal gene’ discovered by scientists.” Nevertheless, because genes code for proteins, there are no “genes for” phenotypes per se, including behavioral phenotypes (Falk, 2014). Moreover, genome-wide association studies of major psychiatric disorders, such as schizophrenia and bipolar disorder, suggest that there are probably few or no genes of major effect (Kendler, 2005). In this respect, these disorders are unlike single-gene medical disorders, such as Huntington’s disease or cystic fibrosis. The same conclusion probably holds for all personality traits (De Moor et al., 2012).

Not surprisingly, early claims that the monoamine oxidase-A (MAO-A) gene is a “warrior gene” (McDermott et al., 2009) have not withstood scrutiny. This polymorphism appears to be only modestly associated with risk for aggression, and it has been reported to be associated with conditions that are not tied to a markedly heightened risk of aggression, such as major depression, panic disorder, and autism spectrum disorder (Buckholtz and Meyer-Lindenberg, 2013; Ficks and Waldman, 2014). The evidence for a “God gene,” which supposedly predisposes people to mystical or spiritual experiences, is arguably even less impressive (Shermer, 2015) and no more compelling than that for a “God spot” in the brain (see “God spot”). Incidentally, the term “gene” should not be confused with the term “allele”; genes are stretches of DNA that code for a given morphological or behavioral characteristic, whereas alleles are differing versions of a specific polymorphism in a gene (Pashley, 1994).

(2) Antidepressant medication. Medications such as tricyclics, selective serotonin reuptake inhibitors, and selective serotonin and norepinephrine reuptake inhibitors, are routinely called “antidepressants.” Yet there is little evidence that these medications are more efficacious for treating (or preventing relapse for) mood disorders than for several other conditions, such as anxiety-related disorders (e.g., panic disorder, obsessive-compulsive disorder; Donovan et al., 2010) or bulimia nervosa (Tortorella et al., 2014). Hence, their specificity to depression is doubtful, and their name derives more from historical precedence—the initial evidence for their efficacy stemmed from research on depression (France et al., 2007)—than from scientific evidence. Moreover, some authors argue that these medications are considerably less efficacious than commonly claimed, and are beneficial for only severe, but not mild or moderate, depression, rendering the label of “antidepressant” potentially misleading (Antonuccio and Healy, 2012; but see Kramer, 2011, for an alternative view).

(3) Autism epidemic. Enormous effort has been expended to uncover the sources of the “autism epidemic” (e.g., King, 2011), the supposed massive increase in the incidence and prevalence of autism, now termed autism spectrum disorder, over the past 25 years. The causal factors posited to be implicated in this “epidemic” have included vaccines, television viewing, dietary allergies, antibiotics, and viruses.

Nevertheless, there is meager evidence that this purported epidemic reflects a genuine increase in the rates of autism per se as opposed to an increase in autism diagnoses stemming from several biases and artifacts, including heightened societal awareness of the features of autism (“detection bias”), growing incentives for school districts to report autism diagnoses, and a lowering of the diagnostic thresholds for autism across successive editions of the Diagnostic and Statistical Manual of Mental Disorders (Gernsbacher et al., 2005; Lilienfeld and Arkowitz, 2007). Indeed, data indicate that when the diagnostic criteria for autism were held constant, the rates of this disorder remained essentially constant between 1990 and 2010 (Baxter et al., 2015). If the rates of autism are increasing, the increase would appear to be slight at best, hardly justifying the widespread claim of an “epidemic.”

(4) Brain region X lights up. Many authors in the popular and academic literatures use such phrases as “brain area X lit up following manipulation Y” (e.g., Morin, 2011). This phrase is unfortunate for several reasons. First, the bright red and orange colors seen on functional brain imaging scans are superimposed by researchers to reflect regions of higher brain activation. Nevertheless, they may engender a perception of “illumination” in viewers. Second, the activations represented by these colors do not reflect neural activity per se; they reflect oxygen uptake by neurons and are at best indirect proxies of brain activity. Even then, this linkage may sometimes be unclear or perhaps absent (Ekstrom, 2010). Third, in almost all cases, the activations observed on brain scans are the products of subtraction of one experimental condition from another. Hence, they typically do not reflect the raw levels of neural activation in response to an experimental manipulation. For this reason, referring to a brain region that displays little or no activation in response to an experimental manipulation as a “dead zone” (e.g., Lamont, 2008) is similarly misleading. Fourth, depending on the neurotransmitters released and the brain areas in which they are released, the regions that are “activated” in a brain scan may actually be being inhibited rather than excited (Satel and Lilienfeld, 2013). Hence, from a functional perspective, these areas may be being “lit down” rather than “lit up.”

(5) Brainwashing. This term, which originated during the Korean War (Hunter, 1951) but which is still invoked uncritically from time to time in the academic literature (e.g., Ventegodt et al., 2009; Kluft, 2011), implies that powerful individuals wishing to persuade others can capitalize on a unique armamentarium of coercive procedures to change their long-term attitudes. Nevertheless, the attitude-change techniques used by so-called “brainwashers” are no different than standard persuasive methods identified by social psychologists, such as encouraging commitment to goals, manufacturing source credibility, forging an illusion of group consensus, and vivid testimonials (Zimbardo, 1997). Furthermore, there are ample reasons to doubt whether “brainwashing” permanently alters beliefs (Melton, 1999). For example, during the Korean War, only a small minority of the 3500 American political prisoners subjected to intense indoctrination techniques by Chinese captors generated false confessions. Moreover, an even smaller number (probably under 1%) displayed any signs of adherence to Communist ideologies following their return to the US, and even these were individuals who returned to Communist subcultures (Spanos, 1996).

(6) Bystander apathy. The classic work of Darley, Latane, and their colleagues (e.g., Darley and Latane, 1968; Latane and Rodin, 1969) underscored the counterintuitive point that when it comes to emergencies, there is rarely “safety in numbers.” As this and subsequent research demonstrated, the more people present at an emergency, the lower the likelihood of receiving help. In early research, this phenomenon was called “bystander apathy” (Latane and Darley, 1969), a term that endures in many academic articles (e.g., Abbate et al., 2013). Nevertheless, research demonstrates that most bystanders are far from apathetic in emergencies (Glassman and Hadad, 2008). To the contrary, they are typically quite concerned about the victim, but are psychologically “frozen” by well-established psychological processes, such as pluralistic ignorance, diffusion of responsibility, and sheer fears of appearing foolish.

(7) Chemical imbalance. Thanks in part to the success of direct-to-consumer marketing campaigns by drug companies, the notion that major depression and allied disorders are caused by a “chemical imbalance” of neurotransmitters, such as serotonin and norepinephrine, has become a virtual truism in the eyes of the public (France et al., 2007; Deacon and Baird, 2009). This phrase even crops up in some academic sources; for example, one author wrote that one overarching framework for conceptualizing mental illness is a “biophysical model that posits a chemical imbalance” (Wheeler, 2011, p. 151). Nevertheless, the evidence for the chemical imbalance model is at best slim (Lacasse and Leo, 2005; Leo and Lacasse, 2008). One prominent psychiatrist even dubbed it an urban legend (Pies, 2011). There is no known “optimal” level of neurotransmitters in the brain, so it is unclear what would constitute an “imbalance.” Nor is there evidence for an optimal ratio among different neurotransmitter levels. Moreover, although serotonin reuptake inhibitors, such as fluoxetine (Prozac) and sertraline (Zoloft), appear to alleviate the symptoms of severe depression, there is evidence that at least one serotonin reuptake enhancer, namely tianeptine (Stablon), is also efficacious for depression (Akiki, 2014). The fact that two efficacious classes of medications exert opposing effects on serotonin levels raises questions concerning a simplistic chemical imbalance model.

by Scott O. Lilienfeld, Katheryn C. Sauvigné, Steven Jay Lynn, Robin L. Cautin, Robert D. Latzman, and Irwin D. Waldman, Frontiers in Psychology |  Read more:
Image: Frontiers in Psychology

Retail Investors Now True Believers with Record Exposure

As far as the stock market is concerned, it took a while – in fact, it took eight years, but retail investors are finally all in, bristling with enthusiasm. TD Ameritrade’s Investor Movement Index rose to 8.59 in December, a new record. TDA’s clients were net buyers for the 11th month in a row, one of the longest buying streaks and ended up with more exposure to the stock market than ever before in the history of the index.

This came after a blistering November, when the index had jumped 15%, “its largest single-month increase ever,” as TDA reported at the time, to 8.53, also a record:


Note how retail investors had been to varying degrees among the naysayers from the end of the Financial Crisis till the end of 2016, before they suddenly became true believers in February 2017.

“I don’t think the investors who are engaging regularly are doing so in a dangerous fashion,” said TDA Chief Market Strategist JJ Kinahan in an interview. But he added, clients at the beginning of 2017 were “up to their knees in it and then up to their thighs, and now up to their chests.”

The implication is that they could get in a little deeper before they’d drown.

“As the year went on, people got more confident,” he said. And despite major geopolitical issues, “the market was never tested at all” last year. There was this “buy-the-dip mentality” every time the market dipped 1% or 2%.

But one of his “bigger fears” this year is this very buy-the-dip mentality, he said. People buy when the market goes down 1% or 2%, and “it goes down 5%, then it goes down 8% — and they turn into sellers, and then they get an exponential move to the downside.”

In addition to some of the big names in the US – Amazon, Microsoft, Bank of America, etc. – TDA’s clients were “believers” in Chinese online retail and were big buyers of Alibaba and Tencent. But they were sellers of dividend stocks AT&T and Verizon as the yield of two-year Treasuries rose to nearly 2%, and offered a risk-free alternative at comparable yields.

And he added, with an eye out for this year: “It’s hard to believe that the market can go up unchallenged.”

This enthusiasm by retail investors confirms the surge in margin debt – a measure of stock market leverage and risk – which has been jumping from record to record, and hit a new high of $581 billion, up 16% from a year earlier.

And as MarketWatch reported, “cash balances for Charles Schwab clients reached their lowest level on record in the third quarter, according to Morgan Stanley, which wrote that retail investors ‘can’t stay away’ from stocks,” while the stock allocation index by the American Association of Individual Investors “jumped to 72%, its highest level since 2000…” as retail investors – according to a Deutsche Bank analysis of consumer sentiment data – view the current environment as “the best time ever to invest in the market.”

by Wolf Richter, Wolf Street |  Read more:
Image: TD Ameritrade
[ed. What could go wrong?]

Your Next Obsession: Retro Japanese Video Game Art


I am obsessed with something new in the world of design. Well, actually, something quite old. Specifically, late 90s and early 2000s Japanese video game art. And also, video game ads. And also, photos of old video game hardware. I am knee-deep in gaming nostalgia.

A lot of the art I’ve become fascinated with is a particular aesthetic born around the fourth generation of video gaming (spanning from the 16-bit boom of the PC Engine / TurboGrafx-16 and Sega Genesis, through to the original PlayStation, Sega Saturn, and the Dreamcast). One which blends hand-drawn art and lettering, dramatic typography, highly technical layouts, and colorful, sometimes cartoonish patterns.

Design, like fashion, moves in cycles, and we’re starting to see a new wave of Japanese game art in pop design. You can see it in the Richard Turley-led Wieden + Kennedy rebranding of the Formula One logo / design language (heavy, heavy shades of Wipeout) or in the varied styles of Australian artist Jonathan Zawada.

Cory Schmitz — a designer who’s worked on projects like the Oculus Rift rebranding and logo design for the game Shadow of the Colossus — has been assembling many of the best examples of the original era on his Tumblr, QuickQuick. I reached out to him to ask about what he was drawn to in this particular style: “As a designer this stuff is super inspirational because it’s so different from current design trends. A lot of unexpected colors, type, and compositions. And I really like the weird sense of nostalgia I get from stuff I haven’t necessarily seen before.” It’s Cory’s curation you’ll see a lot of in the card stack here.

As we move away from the Web 2.0 / Apple mandate of clean, orderly, sterile design, into a more playful, experimental, artistic phase (hello Dropbox redesign), this particular style of art feels like an obvious meeting point between the desire for orderly information delivery and a more primal need for some degree of controlled chaos. Mostly, though, it just looks really fucking cool.

by Joshua Topolsky, The Outline | Read more:
Image: Ian Anderson, Designers Republic

Monday, January 8, 2018


Tom Gauld
via:

Image: Angela Weiss/AFP via Getty
via:
[ed. My dream girl.]

Fight Me, Psychologists: Birth Order Effects Exist and Are Very Strong

“Birth order” refers to whether a child is the oldest, second-oldest, youngest, etc. in their family. For a while, pop psychologists created a whole industry around telling people how their birth order affected their personality: oldest children are more conservative, youngest children are more creative, etc.

Then people got around to actually studying it and couldn’t find any of that. Wikipedia’s birth order article says:
Claims that birth order affects human psychology are prevalent in family literature, but studies find such effects to be vanishingly small….the largest multi-study research suggests zero or near-zero effects. Birth-order theory has the characteristics of a zombie theory, as despite disconfirmation, it continues to have a strong presence in pop psychology and popular culture.
I ought to be totally in favor of getting this debunked. After all, the replication crisis in psychology highlights the need to remain skeptical of poorly-supported theories. And some of the seminal work disproving birth order was done by Judith Rich Harris, an intellectual hero of mine who profoundly shaped my worldview with her book The Nurture Assumption.

So I regret to have to inform you that birth order effects are totally a real thing.

I first started thinking this at transhumanist meetups, when it would occasionally come up that everyone there was an oldest child. The pattern was noticeable enough that I included questions about birth order on the latest SSC survey. This blog deals with a lot of issues around transhumanism, futurology, rationality, et cetera, so I thought it would attract the same kind of people.

7,248 people gave me enough information to calculate their birth order, but I am very paranoid because previous studies have failed by not accounting for family size. That is, people of certain economic classes/religions/races/whatever tend to have larger family sizes, and if you’re in a large family, you’re more likely to be a later-born child. In order to be absolutely sure I wasn’t making this mistake, I concentrated on within-family-size analyses. For example, there were 2965 respondents with exactly one sibling…

…and a full 2118 of those were the older of the two. That’s 71.4%. p ≤ 0.00000001. (...)
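
[ed. A quick way to sanity-check that p-value from the two numbers quoted above is a one-sided binomial test against the null that, among respondents with exactly one sibling, being the older or the younger is a coin flip. This is a minimal sketch, not the survey’s actual analysis code; it assumes SciPy is installed, with scipy.stats.binomtest available from SciPy 1.7 onward.]

```python
# Minimal sanity check of the quoted figures, assuming a 50/50 null for
# birth order among respondents with exactly one sibling.
from scipy.stats import binomtest  # SciPy >= 1.7; older versions expose scipy.stats.binom_test

result = binomtest(k=2118, n=2965, p=0.5, alternative="greater")
print(f"share who are the older sibling: {2118 / 2965:.1%}")  # ~71.4%
print(f"one-sided p-value: {result.pvalue:.3g}")              # far below 1e-8
```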

So what is going on here?

It’s unlikely that age alone is driving these results. In sibships of two, older siblings on average were only about one year older than younger siblings. That can’t explain why one group reads this blog so much more often than the other.

And all of the traditional pop psychology claims about birth order don’t seem to hold up. I didn’t find any effect on anything that could be reasonably considered conservativism or rebelliousness.

But there is at least one reputable study that did find a few personality differences. This is Rohrer et al (2015), which examined a battery of personality traits and found birth order effects only on IQ and Openness to Experience, both very small.

I was only partly able to replicate this work. Rohrer et al found that eldest siblings had an advantage of about 1.5 IQ points. My study found the same: 1.3 to 1.7 IQ points depending on family size – but because of the sample size this did not achieve significance. (...)

The Openness results were clearer. Eldest children had significantly higher Openness (73rd %ile vs. 69th %ile, p = 0.001). Like Rohrer, I found no difference in any of the other Big Five traits.

Because I only had one blunt measure of Openness, I couldn’t do as detailed an analysis as Rohrer’s team. But they went on to subdivide Openness into two subcomponents, Intellect and Imagination, and found birth order only affected Intellect. They sort of blew Intellect off as just “self-estimated IQ”, but I don’t think this is right. Looking at it more broadly, it seems to be a measure of intellectual curiosity – for example, one of the questions they asked was, “I am someone who is eager for knowledge”. Educational Testing Service describes it as “liking complex problems”, and its opposite as “avoiding philosophical discussion”.

This seems promising. If older siblings were more likely to enjoy complex philosophical discussion, that would help explain why they are so much more likely to read a blog about science and current events. Unfortunately, the scale is completely wrong. Rohrer et al’s effects are tiny – going from a firstborn to a secondborn has an effect size of 0.1 SD on Intellect. In order to contain 71.6% firstborns, this blog would have to select for people above the 99.99999999th percentile in Intellect. There are only 0.8 people at that level in the world, so no existing group is that heavily selected.

I think the most likely explanation is that tests for Openness have limited validity, which makes the correlation look smaller than it really is. If being an eldest sibling increases true underlying Openness by a lot, but your score on psychometric tests for Openness only correlates modestly with true underlying Openness, that would look like being an eldest sibling only increasing test-measured-Openness a little bit.

(cf. Riemann and Kandler (2010), which finds that the heritability of Openness shoots way up if you do a better job assessing it)

If we suppose that birth order has a moderate effect size on intellectual curiosity of 0.5 SD, that would imply that science blogs select for people in the top 3% or so of intellectual curiosity, a much more reasonable number. Positing higher (but still within the range of plausibility) effect sizes would decrease the necessary filtering even further.
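
[ed. To make the selection arithmetic in the last two paragraphs concrete, here is a rough sketch under simplifying assumptions the post does not spell out: equal numbers of firstborns and later-borns, normally distributed trait scores with the firstborn mean shifted up by some number of standard deviations, and readership defined as everyone above a cutoff. It reproduces the qualitative point rather than the exact percentile figures quoted: a 0.1 SD shift leaves the firstborn share far below the observed 71.6% even at very extreme cutoffs, while a 0.5 SD shift yields roughly 70 to 75% firstborns at around the top 3%.]

```python
# Sketch of the implied selection model, under the simplifying assumptions stated above.
from scipy.stats import norm

def firstborn_share(cutoff: float, delta: float) -> float:
    """Fraction of people above `cutoff` (in SD units) who are firstborn."""
    tail_first = norm.sf(cutoff - delta)  # firstborns' distribution is shifted up by delta
    tail_later = norm.sf(cutoff)
    return tail_first / (tail_first + tail_later)

for delta in (0.1, 0.5):
    for cutoff in (1.9, 6.0):  # roughly the top 3%, and an absurdly extreme cutoff
        print(f"delta = {delta} SD, cutoff = {cutoff} SD -> "
              f"{firstborn_share(cutoff, delta):.1%} firstborn")
```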

If this is right, it suggests Rohrer et al undersold their conclusion. Their bottom line was something like “birth order effects may exist for a few traits, but are too small to matter”. I agree they may only exist for a few traits, but they can be strong enough to skew ratios in some heavily-selected communities like this one.

When I asked around about this, a couple of people brought up further evidence. Liam Clegg pointed out that philosophy professor Michael Sandel asks his students to raise their hand if they’re the oldest in their family, and usually gets about 80% of the class. And Julia Rohrer herself was kind enough to add her voice and say that:
I’m not up to fight you because I think you might be onto something real here. Just to throw in my own anecdotal data: The topic of birth order effect comes up quite frequently when I chat with people in academic contexts, and more often than not (~80% of the time), the other person turns out to be firstborn. Of course, this could be biased by firstborns being more comfortable bringing up the topic given that they’re supposedly smarter, and it’s only anecdotes. Nonetheless, it sometimes makes me wonder whether we are missing something about the whole birth order story.
But why would eldest siblings have more intellectual curiosity? There are many good just-so stories, like parents having more time to read to them as children. But these demand strong effects of parenting on children’s later life outcomes, of exactly the sort that behavioral genetic studies consistently find not to exist. An alternate hypothesis could bring in weird immune stuff, like that thing where people with more older brothers are more likely to be gay because of maternal immunoreactivity to the Y chromosome (which my survey replicates, by the way). But this is a huge stretch and I don’t even know if people are sure this explains the homosexuality results, let alone the birth order ones.

If mainstream psychology becomes convinced this effect exists, I hope they’ll start doing the necessary next steps. This would involve seeing if biological siblings matter more or less than adopted siblings, whether there’s a difference between paternal and maternal half-siblings, how sibling age gaps work into this, and whether only children are more like oldests or youngests. Their reward would be finding some variable affecting children’s inherent intellectual curiosity – one that might offer opportunities for intervention.

by Scott Alexander, Slate Star Codex |  Read more:
Image: Emily
[ed. I participated in this survey. Also a firstborn in my family.]

Who Cares About Inequality?

Lloyd Blankfein is worried about inequality. The CEO of Goldman Sachs—that American Almighty, who swindled the economy and walked off scot-free—sees new “divisions” in the country. “Too much,” Blankfein lamented in 2014, “has gone to too few people.”

Charles Koch is worried, too. Another great American plutocrat—shepherd of an empire that rakes in $115 billion and spits out $200 million in campaign contributions each year—decried in 2015 the “welfare for the rich” and the formation of a “permanent underclass.” “We’re headed for a two-tiered society,” Koch warned.

Their observations join a chorus of anti-inequality advocacy among the global elite. The World Bank called inequality a “powerful threat to global progress.” The International Monetary Fund claimed it was “not a recipe for stability and sustainability”—threat-level red for the IMF. And the World Economic Forum, gathered together at Davos last year, described inequality as the single greatest global threat.

It is a stunning consensus. In Zuccotti Park, the cry of the 99% was an indictment. To acknowledge the existence of the super-rich was to incite class warfare. Not so today. Ted Cruz, whom the Kochs have described as a ‘hero’, railed against an economy where wealthy Americans “have gotten fat and happy.” He did so on Fox News.

What the hell is happening here? Why do so many rich people care so much about inequality? And why now?

The timing of the elite embrace of the anti-inequality agenda presents a puzzle precisely because it is so long overdue.

For decades, political economists have struggled to understand why inequality has remained uncontested all this time. Their workhorse game-theoretic model, developed in the early 1980s by Allan Meltzer and Scott Richard, predicts that democracies respond to an increase in inequality with an increase in top-rate taxation—a rational response of the so-called ‘median voter.’
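
[ed. A stylized sketch of the Meltzer-Richard logic, not their exact formulation: income is taxed at a flat rate, revenue comes back as an equal lump-sum transfer, and taxation carries a deadweight cost. The decisive median voter then prefers a higher tax rate the further mean income pulls ahead of her own, which is why rising inequality is predicted to bring rising redistribution. The deadweight parameter below is an illustrative assumption.]

```python
# Stylized Meltzer-Richard sketch: a voter with income y facing flat tax t and an
# equal lump-sum transfer gets (1 - t) * y + t * y_mean * (1 - d * t), where
# d * t**2 * y_mean is a quadratic deadweight loss. The median voter maximizes this.
def preferred_tax_rate(y_voter: float, y_mean: float, d: float = 0.5) -> float:
    """Tax rate maximizing the voter's after-tax-and-transfer income."""
    t_star = (y_mean - y_voter) / (2 * d * y_mean)  # first-order condition of the payoff above
    return min(max(t_star, 0.0), 1.0)               # tax rates are confined to [0, 1]

y_median = 50_000
for y_mean in (55_000, 70_000, 90_000):  # a widening mean/median gap means rising inequality
    print(f"mean income {y_mean:,}: median voter prefers t = {preferred_tax_rate(y_median, y_mean):.2f}")
```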

And yet, the relationship simply does not hold in the real world. On the contrary, in the United States, we find its literal inverse: amid record high inequality, one of the largest tax cuts in history. This inverted relationship is known as the Robin Hood Paradox.

One explanation of this paradox is the invisibility of the super-rich. On the one hand, they hide in their enclaves: the hills, the Hamptons, Dubai, the Bahamas. In the olden days, the poor were forced to bear witness to royal riches, standing roadside as the chariot moved through town. Today, they live behind high walls in gated communities and private islands. Their wealth is obscured from view, stashed offshore and away from the tax collector. This is wealth as exclusion.

On the other, they hide among us. As Rachel Sherman has recently argued, conspicuous consumption is out of fashion, displaced by an encroaching “moral stigma of privilege” that won’t let the wealthy just live. Not long ago, the rich felt comfortable riding down broad boulevards in stretch limousines and fur coats. Today, they remove price tags from their groceries and complain about making ends meet. This is wealth as assimilation.

The result is a general misconception about the scale of inequality in America. According to one recent study, Americans tend to think that the ratio of CEO compensation to average income is 30-to-1. The actual figure is closer to 350-to-1.

Yet this is only a partial explanation of the Robin Hood Paradox. It is an appealing theory, but I find it doubtful that any public revelation of elite lifestyles would drive these elites to call for reform. It would seem a difficult case to make after the country elected to its highest office a man who lives in a golden penthouse atop a skyscraper bearing his own name in the middle of the most expensive part of America’s most expensive city.

“I love all people,” President Trump promised at a rally last June. “But for these posts”—the posts in his cabinet—“I just don’t want a poor person.” The crowd cheered loudly. The state of play of the American pitchfork is determined in large part by this very worldview—and the three myths about the rich and poor that sustain it.

The first is the myth of the undeserving poor. American attitudes to inequality are deeply informed by our conception of the poor as lazy. In Why Americans Hate Welfare, Martin Gilens examines the contrast between Americans’ broad support for social spending and narrow support for actually existing welfare programs. The explanation, Gilens argues, is that Americans view the poor as scroungers—a view forged by racial representations of welfare recipients in our media.

In contrast—and this is the second myth—Americans believe in the possibility of their own upward mobility. Even if they are not rich today, they will be rich tomorrow. And even if they are not rich tomorrow, their children will be rich the next day. In a recent survey experiment, respondents overestimated social mobility in the United States by over 20%. It turns out that the overestimation is quite easy to provoke: researchers simply had to remind the participants of their own ‘talents’ in order to boost their perceptions of class mobility. Such a carrot of wealth accumulation has been shown to exert a downward pressure on Americans’ preferences for top-rate taxation.

But the third myth, and perhaps most important, concerns the wealthy. For many years, this was called trickle-down economics. Inequality was unthreatening because of our faith that the wealth at the top would—some way or another—reach the bottom. The economic science was questionable, but cultural memories lingered around a model of paternalistic capitalism that suggested its truth. The old titans of industry laid railroads, made cars, extracted oil. Company towns sprouted across the country, where good capitalists took care of good workers.

But the myth of trickling wealth has become difficult to sustain. Over the last half-century, while productivity has soared, average wages among American workers have grown by just 0.2% each year—while earnings at the very top have grown 138% over the same period. Only half of Republicans still believe that trimming taxes for the rich leads to greater wealth for the general population. Only 13% of Democrats do.

Declining faith in trickle-down economics, however, does not necessarily imply declining reverence for the wealthy. 43% of Americans today still believe that the rich are more intelligent than the average American, compared to just 8% who believe they are less. And 42% of Americans still believe that the rich are more hardworking than the average, compared to just 24% who believe they are less.

It would seem, therefore, that the trickle-down myth has been displaced by another, perhaps more obstinate myth of the 1% innovator.

The 1% innovator is a visionary: with his billions, he dreams up new and exciting ideas for the twenty-first century. Steve Jobs was one; Elon Musk is another. Their money is not idle—it is fodder for that imagination. As the public sector commitment to futurist innovation has waned—as NASA, for example, has shrunk and shriveled—his role has become even more important. Who else will take us to Mars?

The reality, of course, is that our capitalists are anything but innovative. They’re not even paternal. In fact, they are not really capitalists at all. They are mostly rentiers: rather than generate wealth, they simply extract it from the economy. Consider the rapid rise in real estate investment among the super-rich. Since the financial crash, a toxic mix of historically low interest rates and sluggish growth have encouraged international investors to turn toward the property market, which promises to deliver steady if moderate returns. Among the Forbes 400 “self-made” billionaires, real estate ranks third. Investments and technology—two other rentier industries—rank first and second, respectively.

But the myth of the 1% innovator is fundamental to the politics of inequality, because it suspends public demands for wealth taxation. If the innovators are hard at work, and they need all that capital to design and bring to life the consumer goodies that we enjoy, then we should hold off on serious tax reform and hear them out. Or worse: we should cheer on their wealth accumulation, waiting for the next, more expensive rabbit to be pulled from the hat. The revolt from below can be postponed until tomorrow or the next day.

Taken together, the enduring strength of these myths only serves to deepen the puzzle of elite anti-inequality advocacy. Why the sudden change of heart? Why not keep promoting the myths and playing down the scale of the “two-tiered society” that Charles Koch today decries?

The unfortunate answer, I believe, is that inequality has simply become bad economics.

by David Adler, Current Affairs | Read more:
Image: uncredited

Sunday, January 7, 2018


Rafael Araujo
via:

Dude, You Broke the Future!

Abstract: We're living in yesterday's future, and it's nothing like the speculations of our authors and film/TV producers. As a working science fiction novelist, I take a professional interest in how we get predictions about the future wrong, and why, so that I can avoid repeating the same mistakes. Science fiction is written by people embedded within a society with expectations and political assumptions that bias us towards looking at the shiny surface of new technologies rather than asking how human beings will use them, and to taking narratives of progress at face value rather than asking what hidden agenda they serve.

In this talk, author Charles Stross will give a rambling, discursive, and angry tour of what went wrong with the 21st century, why we didn't see it coming, where we can expect it to go next, and a few suggestions for what to do about it if we don't like it.


Good morning. I'm Charlie Stross, and it's my job to tell lies for money. Or rather, I write science fiction, much of it about our near future, which has in recent years become ridiculously hard to predict.

Our species, Homo Sapiens Sapiens, is roughly three hundred thousand years old. (Recent discoveries pushed back the date of our earliest remains that far; we may be even older.) For all but the last three centuries of that span, predicting the future was easy: natural disasters aside, everyday life in fifty years' time would resemble everyday life fifty years ago.

Let that sink in for a moment: for 99.9% of human existence, the future was static. Then something happened, and the future began to change, increasingly rapidly, until we get to the present day when things are moving so fast that it's barely possible to anticipate trends from month to month.

As an eminent computer scientist once remarked, computer science is no more about computers than astronomy is about building telescopes. The same can be said of my field of work, written science fiction. Scifi is seldom about science—and even more rarely about predicting the future. But sometimes we dabble in futurism, and lately it's gotten very difficult.

How to predict the near future

When I write a near-future work of fiction, one set, say, a decade hence, there used to be a recipe that worked eerily well. Simply put, 90% of the next decade's stuff is already here today. Buildings are designed to last many years. Automobiles have a design life of about a decade, so half the cars on the road will probably still be around in 2027. People ... there will be new faces, aged ten and under, and some older people will have died, but most adults will still be around, albeit older and grayer. This is the 90% of the near future that's already here.

After the already-here 90%, another 9% of the future a decade hence used to be easily predictable. You look at trends dictated by physical limits, such as Moore's Law, and you look at Intel's road map, and you use a bit of creative extrapolation, and you won't go too far wrong. If I predict that in 2027 LTE cellular phones will be everywhere, 5G will be available for high bandwidth applications, and fallback to satellite data service will be available at a price, you won't laugh at me. It's not like I'm predicting that airliners will fly slower and Nazis will take over the United States, is it?

And therein lies the problem: it's the 1% of unknown unknowns that throws off all calculations. As it happens, airliners today are slower than they were in the 1970s, and don't get me started about Nazis. Nobody in 2007 was expecting a Nazi revival in 2017, right? (Only this time round Germans get to be the good guys.)

My recipe for fiction set ten years in the future used to be 90% already-here, 9% not-here-yet but predictable, and 1% who-ordered-that. But unfortunately the ratios have changed. I think we're now down to maybe 80% already-here—climate change takes a huge toll on infrastructure—then 15% not-here-yet but predictable, and a whopping 5% of utterly unpredictable deep craziness.

Ruling out the singularity

Some of you might assume that, as the author of books like "Singularity Sky" and "Accelerando", I attribute this to an impending technological singularity, to our development of self-improving artificial intelligence and mind uploading and the whole wish-list of transhumanist aspirations promoted by the likes of Ray Kurzweil. Unfortunately this isn't the case. I think transhumanism is a warmed-over Christian heresy. While its adherents tend to be vehement atheists, they can't quite escape from the history that gave rise to our current western civilization. Many of you are familiar with design patterns, an approach to software engineering that focusses on abstraction and simplification in order to promote reusable code. When you look at the AI singularity as a narrative, and identify the numerous places in the story where the phrase "... and then a miracle happens" occurs, it becomes apparent pretty quickly that they've reinvented Christianity.

Indeed, the wellsprings of today's transhumanists draw on a long, rich history of Russian Cosmist philosophy exemplified by the Russian Orthodox theologian Nikolai Fyodorvitch Federov, by way of his disciple Konstantin Tsiolkovsky, whose derivation of the rocket equation makes him essentially the father of modern spaceflight. And once you start probing the nether regions of transhumanist thought and run into concepts like Roko's Basilisk—by the way, any of you who didn't know about the Basilisk before are now doomed to an eternity in AI hell—you realize they've mangled it to match some of the nastiest ideas in Presbyterian Protestantism.

If it walks like a duck and quacks like a duck, it's probably a duck. And if it looks like a religion it's probably a religion. I don't see much evidence for human-like, self-directed artificial intelligences coming along any time now, and a fair bit of evidence that nobody except some freaks in university cognitive science departments even want it. What we're getting, instead, is self-optimizing tools that defy human comprehension but are not, in fact, any more like our kind of intelligence than a Boeing 737 is like a seagull. So I'm going to wash my hands of the singularity as an explanatory model without further ado—I'm one of those vehement atheists too—and try and come up with a better model for what's happening to us.

Towards a better model for the future

As my fellow SF author Ken MacLeod likes to say, the secret weapon of science fiction is history. History, loosely speaking, is the written record of what and how people did things in past times—times that have slipped out of our personal memories. We science fiction writers tend to treat history as a giant toy chest to raid whenever we feel like telling a story. With a little bit of history it's really easy to whip up an entertaining yarn about a galactic empire that mirrors the development and decline of the Hapsburg Empire, or to re-spin the October Revolution as a tale of how Mars got its independence.

But history is useful for so much more than that.

It turns out that our personal memories don't span very much time at all. I'm 53, and I barely remember the 1960s. I only remember the 1970s with the eyes of a 6-16 year old. My father, who died last year aged 93, just about remembered the 1930s. Only those of my father's generation are able to directly remember the Great Depression and compare it to the 2007/08 global financial crisis. But westerners tend to pay little attention to cautionary tales told by ninety-somethings. We modern, change-obsessed humans tend to repeat our biggest social mistakes when they slip out of living memory, which means they recur on a time scale of seventy to a hundred years.

So if our personal memories are useless, it's time for us to look for a better cognitive toolkit.

History gives us the perspective to see what went wrong in the past, and to look for patterns, and check whether those patterns apply to the present and near future. And looking in particular at the history of the past 200-400 years—the age of increasingly rapid change—one glaringly obvious deviation from the norm of the preceding three thousand centuries is the development of Artificial Intelligence, which happened no earlier than 1553 and no later than 1844.

I'm talking about the very old, very slow AIs we call corporations, of course. What lessons from the history of the company can we draw that tell us about the likely behaviour of the type of artificial intelligence we are all interested in today?

Old, slow AI

Let me crib from Wikipedia for a moment:

In the late 18th century, Stewart Kyd, the author of the first treatise on corporate law in English, defined a corporation as:
a collection of many individuals united into one body, under a special denomination, having perpetual succession under an artificial form, and vested, by policy of the law, with the capacity of acting, in several respects, as an individual, particularly of taking and granting property, of contracting obligations, and of suing and being sued, of enjoying privileges and immunities in common, and of exercising a variety of political rights, more or less extensive, according to the design of its institution, or the powers conferred upon it, either at the time of its creation, or at any subsequent period of its existence.
—A Treatise on the Law of Corporations, Stewart Kyd (1793-1794)

In 1844, the British government passed the Joint Stock Companies Act, which created a register of companies and allowed any legal person, for a fee, to register a company, which existed as a separate legal person. Subsequently, the law was extended to limit the liability of individual shareholders in event of business failure, and both Germany and the United States added their own unique extensions to what we see today as the doctrine of corporate personhood.

(Of course, there were plenty of other things happening between the sixteenth and twenty-first centuries that changed the shape of the world we live in. I've skipped changes in agricultural productivity due to energy economics, which finally broke the Malthusian trap our predecessors lived in. This in turn broke the long term cap on economic growth of around 0.1% per year in the absence of famine, plagues, and wars depopulating territories and making way for colonial invaders. I've skipped the germ theory of diseases, and the development of trade empires in the age of sail and gunpowder that were made possible by advances in accurate time-measurement. I've skipped the rise and—hopefully—decline of the pernicious theory of scientific racism that underpinned western colonialism and the slave trade. I've skipped the rise of feminism, the ideological position that women are human beings rather than property, and the decline of patriarchy. I've skipped the whole of the Enlightenment and the age of revolutions! But this is a technocentric congress, so I want to frame this talk in terms of AI, which we all like to think we understand.)

Here's the thing about corporations: they're clearly artificial, but legally they're people. They have goals, and operate in pursuit of these goals. And they have a natural life cycle. In the 1950s, a typical US corporation on the S&P 500 index had a lifespan of 60 years, but today it's down to less than 20 years.

Corporations are cannibals; they consume one another. They are also hive superorganisms, like bees or ants. For their first century and a half they relied entirely on human employees for their internal operation, although they are automating their business processes increasingly rapidly this century. Each human is only retained so long as they can perform their assigned tasks, and can be replaced with another human, much as the cells in our own bodies are functionally interchangeable (and a group of cells can, in extremis, often be replaced by a prosthesis). To some extent corporations can be trained to service the personal desires of their chief executives, but even CEOs can be dispensed with if their activities damage the corporation, as Harvey Weinstein found out a couple of months ago.

Finally, our legal environment today has been tailored for the convenience of corporate persons, rather than human persons, to the point where our governments now mimic corporations in many of their internal structures.

by Charlie Stross, Charlie's Diary |  Read more:
Image: via 

Roy Lichtenstein
via:

Kokee Lodge
photo: markk

The US Democratic Party After The Election Of Donald Trump

In your view, what is the historic position of the Democrats in the US political system and where do they currently stand?

The Democrats have undergone an evolution over the course of their history. It's the oldest political party in the United States and, just to recap the late 20th century very briefly, it was the party of the New Deal, of the New Frontier of John F. Kennedy, of the Great Society of Lyndon Johnson. Over the most recent 30-year period, it has become somewhat different from that: a party of third-way centrism with what I think we would identify in Europe as a moderately neo-liberal agenda but, in the United States, strongly associated with the financial sector.

Now it's facing a crisis of that particular policy orientation, which is largely discredited and does not have a broad popular base. This is the meaning of the Sanders campaign, and the strong appeal of that campaign to younger voters in 2016 suggests that the future of the Democratic Party, so far as its popular appeal is concerned, lies in a different direction, one that encompasses substantially more dramatic proposals for change, reform, and renovation.

Turning to the structure of a SWOT analysis, where would you identify the strengths and weaknesses of the Democrats today?

The strengths are evident in the fact that the party retains a strong position on the two coasts; the weaknesses are evident in the fact that it doesn't have a strong position practically anywhere else. This polarisation works very much to the disadvantage of the Democratic Party, because the US constitutional system gives extra weight to small states and rural areas, and control of those states also means that the Republican Party has gained control of the House of Representatives.

The Democratic Party has failed to maintain a national base of political organisation and has become a party that is largely responsive to a reasonably affluent, socially progressive, professional class and that is not a winning constituency in US national elections. That’s not to say that they might not win some given the alternative at any given time but the position is by no means strong structurally or organisationally.

When it comes to the opportunities and threats that the party is facing, a threat is obviously what happened in the last election with the rise of Donald Trump. How would you frame this in the context of the Democratic Party? Going forward, where do you think there are opportunities?


Up until this most recent election, the Democrats had won the presidential contest on a consistent basis since the 1980s in a series of Midwestern and upper Midwestern states: Michigan, Wisconsin, and Pennsylvania, Ohio a little less so, but certainly Minnesota. This was known as the Blue Wall, a set of states in which the Democrats felt they had a structurally sound position.

It was clear, particularly since the global crisis of 2007-2009 and the recession that followed, that that position had eroded, because it was rooted in manufacturing jobs and organised labour; those jobs were disappearing at an accelerated rate after the crisis, and this process was concentrated in those states. Trump saw this and took advantage of it.

The Clinton campaign, which was deeply rooted in the bi-coastal elites that dominated the Democratic Party, failed to see it adequately, failed to take steps that might counter it, failed to appeal to those constituencies and, in fact, treated them with a certain amount of distance, if not disdain; it was something that could easily be interpreted as disdain in the way in which the campaign was scheduled.

She never went to Wisconsin, for example, and in certain comments that she made and the way in which she identified the core constituencies of her campaign, she really did not reach out to these communities. Trump, as he said himself, saw the anger and took advantage of it and that was the story of the election.

Hillary Clinton did win the popular vote by a very substantial margin, mainly because she had an overwhelming advantage in the state of California, but those were 4 million extra votes that made no difference to the outcome, whereas in these upper Midwestern states a few tens of thousands of votes were decisive, and it was Trump who was able to walk away with the electoral votes of those states.

Obviously, the threat or the challenge of populism, especially right-wing populism, is not unique to the United States. If you broaden the discussion a little bit, what would you recommend? How should progressive parties in the US and beyond react to the challenge that right-wing populism poses?

I dislike the term populism as a general-purpose pejorative in politics because it tends to be used by members of the professional classes to describe political appeals to, let's say, working class constituencies. Populism in the United States in the late 19th century was a farmer-labour movement. It was a movement of debtors against creditors and of easy-money and silver advocates against gold advocates, and that was the essence of it.

I find a lot to identify with in that tradition and so I’m not inclined to say dismissively that one should be opposed to populism. The Democratic Party’s problem is that it had a core in the New Deal liberal period that was rooted in the organised labour movement – the working class and trade unions. That has been structurally weakened by the deindustrialisation of large parts of the American economy and the party has failed to maintain a popular base.

It could have developed and maintained that base but, in many ways, chose not to do so. Why not? Because if one really invests power in a working class constituency, you have to give serious consideration to what people in that constituency want. It’s obvious that that would be in contradiction with the Democratic Party’s commitment in the ‘90s and noughties to free trade agreements, to use the most flagrant example.

It would require a much more, let's say, real-world employment policy. It would require a responsiveness that was not there to the housing and foreclosure crisis after the recession. What happened in the period following the great financial crisis was particularly infuriating, because everybody could see that the class of big bankers was bailed out and protected, whereas ordinary homeowners, particularly people in neighbourhoods that were victimised by subprime loans, suffered aggressive foreclosure.

There was a fury that was building and it was building on a justified basis that the party had not been responsive to a series of really, I think, clearly understood community needs and demands.

You mentioned the constituencies, the working class. One of the discussions we had in other episodes of this series was: is there still a coherent working class, and what does that mean? For instance, if you compare the socio-economic position of, say, skilled workers who now have a pretty good wage with that of, say, cleaners somewhere, is there still some kind of working class identity, or is it actually fraying?

It's certainly the case that working class is a shorthand, and one with a somewhat dated quality to it, but it's also the case that, since the mid-1970s in the US, the industrial working class represented by powerful trade unions has diminished dramatically, particularly in the regions of the country that constituted the manufacturing belt built up from, let's say, the 1900s into the 1950s.

There has been a terrific change in the economic structure of the country and it has diminished the membership, power and influence of the trade unions. No question about that. The concept of working class now does span a bifurcated community… There’s certainly still manufacturing activity and some of it is really quite well paid and it’s certainly better to be a manufacturing worker than to be in the low-wage services sector.

Figuring out how to appeal broadly to those constituencies, and to constituencies that lie on a lower level of income than the established professional classes, is the challenge. That challenge was met pretty effectively by the Sanders campaign in 2016. What Bernie Sanders was proposing was a $15 minimum wage, universal health insurance, debt-free access to higher education, progressive income taxes, and structural reform of the banking sector.

Those things stitch together some strongly felt needs, particularly amongst younger people, and that was, I think, why the Sanders campaign took off. People grasped that this was not an unlimited laundry list of ideas. It was a select and focused set, which Sanders advanced and repeated in a very disciplined way over the course of the campaign, and so it was young people who rallied to it. That does suggest that there is a policy agenda that could form the basis for the Democratic Party of the future.

by James K. Galbraith, Social Europe |  Read more:
Image: uncredited
[ed. This and other links at Politics 101.]

Fitz and the Tantrums / Ed Sheeran / Lia Kim x May J Lee Choreography



Repost