Sunday, January 14, 2018

False Ballistic Missile Alert


[ed. I was in Honolulu when the alert went out. My initial reaction? Must be a virus. Nothing on the news, no jets screaming overhead, no sirens blaring, nothing. So I figured it was just a spoof and clicked the phone off. Unfortunately, my brother was in Kona at the airport when the warning appeared. All flights were immediately cancelled and TSA operations shut down. We could only laugh afterward. When you think about it, where would you rather be in a real situation - standing in a line several hundred people deep waiting to get through TSA - or taking off in a jet? I'm actually surprised people responded as rationally as they did. No massive car pile-ups. No screaming in the streets. No looting or anything else (no strangling airport security). Just everyone seemingly taking it in stride, like... what can you do?

See also: Missile-Alert Error Reveals Uncertainty About How to React, and What It Felt Like in Hawaii When Warning of an In-Bound Missile Arrived.]

Saturday, January 13, 2018

Audiophilia Forever: An Expensive New Year’s Shopping Guide

Here are some of the most beautiful recorded musical sounds that I have heard in the past few weeks: the matched horns and clarinet, very soft, in Duke Ellington’s “Mood Indigo,” recorded in 1950; Buddy Holly, in his just-hatched-this-morning voice, singing “Everyday,” recorded in 1957; the London Symphony Orchestra in full cry under André Previn, playing Shostakovich’s tragic wartime Symphony No. 8, recorded in 1973; and Willie Watson’s rich-sounding guitar, accompanying him singing “Samson and Delilah,” recorded last year. The source of all these sounds was a vinyl long-playing record.

I tried to quit. I tried to give up audiophilia. You might even say I stopped my ears. That is, I listened to my O.K. high-end audio rig when I could find a few hours, ignoring its inadequacies. But, most of the time, I listened to CDs ripped into iTunes and then played on an iPod with a decent set of headphones. Hundreds of hours of music were inscribed there: Wagner’s “Parsifal” and John Coltrane’s “Blue Train” and the Beatles’ “Rubber Soul”—soul music, indeed! The glories of Western music, if you want to be grand about it, were at my fingertips, and I was mostly content. For years, I relinquished the enthralling, debilitating, purse-emptying habit of high-end audio, that feverish discontent, that adolescent ecstatic longing for more—a better record player, speakers with more bottom weight, a CD player that completely filtered out such digital artifacts as ringing tones, brittleness, and hardness.

Most people listen to music in the way that’s convenient for them; they ignore the high-end stuff, if they’ve even heard of it, as an expensive fetish. But audiophiles are restless; they always have some sort of dream system in their heads. They are ready, if they can afford it, to swap, trade, buy. It’s not enough, for some listeners, to have a good turntable, CD player, streaming box, pre-amplifier, amplifier, phono stage, speakers, and top-shelf wires connecting them all together. No, they also need a power conditioner—to purify the A.C. current. Does it matter, each separate thing? The cables, too? Is it all nonsense? The debates rage on, for those who are interested. At the moment, the hottest thing in audio is “high-resolution streaming”—the hope, half-realized, of getting extraordinary sound through the Internet.

We audiophiles want timbral accuracy. We want the complex strands of an orchestral piece disentangled, voice recordings that reveal chest tones and a clear top, pianos that sound neither tinkly nor dull, with the decay of each note sustained (not cut off, as it is in most digital recordings). We want all that, yet the sound of live music is ineffable. The goal can never be reached. The quest itself is the point. (...)

Yet there’s a serious problem with most of the streaming services: the sound is no more than adequate (exceptions to follow). And therein lies a tale—a tale, from the high-end audiophile’s point of view, of commercial opportunism, betrayal, and, well, audiophile-led redemption. A little potted audio history is now in order.

The first betrayal: in the sixties, Japanese solid-state equipment (Sony, Panasonic, Yamaha, etc.) emerged as a low-cost mass-market phenomenon, driving American quality audio (the makers of analog, vacuum-tube equipment) deep underground. The big American names (like Marantz and McIntosh) stayed quietly in business while a variety of engineers and entrepreneurs who loved music started small companies in garages and toolsheds. It was (and is) a story of romantic capitalism—entrepreneurship at its most creative. Skip forward twenty years, to the second betrayal: in 1982, digital sound and the compact disk were proclaimed by publicists and a gullible press as “perfect sound forever.” But any music lover could have told you that early digital was often dreadful—hard, congealed, harsh, even razory, the strings sounding like plastic, the trumpets like sharp instruments going under your scalp. The early transfer of “Rubber Soul,” just to take one example, was unlistenable.

The small but flourishing high-end industry responded to digital in three different ways: it produced blistering critiques of digital sound in the musically and technically literate audiophile magazines The Absolute Sound and Stereophile; it developed CD players that worked to filter out some of the digital artifacts; and it produced dozens of turntables, in every price range, which kept good sound and the long-playing record alive. Years ago, many refused to believe in the LP, but, really, anyone with a decent setup could have proved this to you: a well-recorded LP was warmer, more natural, more musical than a compact disk.

The recording industry woke up, as well: Sony and Philips, which had developed the compact disk together, released, in 1999, a technology called D.S.D. (Direct Stream Digital) and embedded the results in Super Audio CDs—S.A.C.D. disks. Remember them? Some six thousand titles were produced, and the sound was definitely better than that of a standard CD. But the Super Audio CD was swamped by another marketing phenomenon—the creation of the iPod and similar devices, in 2001, which made vast libraries of music portable. So much for S.A.C.D.s—your music library was now in your hand! For me, the iPod was, for long periods, the default way of listening to music. God knows I have sinned. I knew that I wasn’t hearing anything like the best.

Which brings us to betrayal No. 3: music was streamed to iPods and laptops by squeezing data so that it would fit through the Internet pipes—the sound, in the jargon, was “lossy.” And that’s the sound—MP3 sound—that a generation of young people grew up with. The essentials of any kind of music came through, but nuance, the subtleties of shading and color, got slighted or lost. High-end types, both manufacturers and retailers, still lament this development with rage and tears. Availability was everything for the iPod generation. Well, yes, of course, says the high end, availability is a great boon. But most of the kids didn’t know that they were missing anything in the music.

Except for the few who did. A growing corps of young music lovers has, in recent years, become attached to vinyl—demanding vinyl from their favorite groups as they issue new albums, flocking to new vinyl stores. For some, it may be about the sound. Or maybe it’s about backing away from corporate culture and salesmanship. Vinyl offers the joys of possessorship: if you go to a store, talk to other music lovers, and buy a record, you are committing to your taste, to your favorite group, to your friends. In New York, the independent-music scene, and the kinds of loyalties it creates, are central to vinyl. In any case, the young people buying vinyl have joined up with two sets of people who never really gave up on it: the scratchmaster d.j.s deploying vinyl on twin turntables, making music with their hands, and the audiophiles hoarding their LPs from decades ago. The audiophile reissue market has come blazingly to life. (...)

by David Denby, New Yorker |  Read more:
Image: Janne Iivonen

Friday, January 12, 2018

How, and Why, the Spectre and Meltdown Patches Will Hurt Performance

As the industry continues to grapple with the Meltdown and Spectre attacks, operating system and browser developers in particular are continuing to develop and test schemes to protect against the problems. Simultaneously, microcode updates to alter processor behavior are also starting to ship.

Since news of these attacks first broke, it has been clear that resolving them is going to have some performance impact. Meltdown was presumed to have a substantial impact, at least for some workloads, but Spectre was more of an unknown due to its greater complexity. With patches and microcode now available (at least for some systems), that impact is now starting to become clearer. The situation is, as we should expect with these twin attacks, complex.

To recap: modern high-performance processors perform what is called speculative execution. They will make assumptions about which way branches in the code are taken and speculatively compute results accordingly. If they guess correctly, they win some extra performance; if they guess wrong, they throw away their speculatively calculated results. This is meant to be transparent to programs, but it turns out that this speculation slightly changes the state of the processor. These small changes can be measured, disclosing information about the data and instructions that were used speculatively.
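
[ed. To make the “small changes can be measured” point concrete, here is a minimal sketch of the cache-timing measurement (a flush+reload probe) that these attacks rely on. This is my illustration, not code from the article; it assumes an x86-64 machine and the GCC/Clang intrinsics header, and the exact cycle counts vary by CPU:]

```c
/* flush_reload.c -- time one load when cached vs. after a cache flush.
 * Build: gcc -O1 flush_reload.c -o flush_reload   (x86-64 only) */
#include <stdio.h>
#include <stdint.h>
#include <x86intrin.h>          /* __rdtscp, _mm_clflush, _mm_mfence */

static uint8_t probe[4096];

static uint64_t time_load(volatile uint8_t *p) {
    unsigned aux;
    uint64_t start = __rdtscp(&aux);
    (void)*p;                   /* the load being timed */
    _mm_mfence();               /* wait for the load to complete */
    return __rdtscp(&aux) - start;
}

int main(void) {
    volatile uint8_t *p = probe;
    (void)*p;                                     /* warm up: pull the line into cache */
    printf("cached load:  %llu cycles\n", (unsigned long long)time_load(p));
    _mm_clflush((const void *)p);                 /* evict the line */
    _mm_mfence();
    printf("flushed load: %llu cycles\n", (unsigned long long)time_load(p));
    return 0;
}
```

[ed. The large gap between the two numbers is what lets an attacker read back which memory a victim's speculative execution touched.]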

With the Spectre attack, this information can be used to, for example, leak information within a browser (such as saved passwords or cookies) to a malicious JavaScript. With Meltdown, an attack that builds on the same principles, a malicious user program can read data from kernel memory.

Meltdown applies to Intel's x86 and Apple's ARM processors; it will also apply to ARM processors built on the new A75 design. Meltdown is fixed by changing how operating systems handle memory. Operating systems use structures called page tables to map between process or kernel memory and the underlying physical memory. Traditionally, the accessible memory given to each process is split in half; the bottom half, with a per-process page table, belongs to the process. The top half belongs to the kernel. This kernel half is shared between every process, using just one set of page table entries for every process. This design is both efficient—the processor has a special cache for page table entries—and convenient, as it makes communication between the kernel and process straightforward.

The fix for Meltdown is to split this shared address space. That way, when user programs are running, the kernel half uses an almost-empty page table (containing just enough to handle system calls and interrupts) rather than the regular kernel page table. This makes it impossible for programs to speculatively use kernel addresses.

Spectre is believed to apply to every high-performance processor that has been sold for the last decade. Two versions have been shown. One version allows an attacker to "train" the processor's branch prediction machinery so that a victim process mispredicts and speculatively executes code of an attacker's choosing (with measurable side-effects); the other tricks the processor into making speculative accesses outside the bounds of an array. The array version operates within a single process; the branch prediction version allows a user process to "steer" the kernel's predicted branches, or one hyperthread to steer its sibling hyperthread, or a guest operating system to steer its hypervisor.
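
[ed. For the array-bounds variant, the vulnerable code shape is remarkably small. Below is a condensed sketch of the well-known “variant 1” gadget pattern from the public Spectre proof-of-concept code; the attacker's branch-training loop and the timing probe (like the one sketched above) are omitted, so this shows the shape of the vulnerability rather than a working exploit:]

```c
#include <stddef.h>
#include <stdint.h>

uint8_t  array1[16];
size_t   array1_size = 16;
uint8_t  array2[256 * 512];  /* probe array: one cache line per possible byte value */
volatile uint8_t sink;       /* keeps the compiler from discarding the load */

/* The attacker first calls this with in-bounds x to train the branch
 * predictor, then with an out-of-bounds x chosen so that array1[x] aliases
 * a secret byte. While the bounds check is still resolving, the CPU may
 * speculatively read array1[x] and touch the line of array2 indexed by the
 * secret value -- a cache footprint a flush+reload probe can recover. */
void victim_function(size_t x) {
    if (x < array1_size)
        sink &= array2[array1[x] * 512];
}

int main(void) {
    victim_function(0);      /* placeholder; a real PoC alternates training and probing */
    return 0;
}
```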

We have written previously about the responses from the industry. By now, Meltdown has been patched in Windows, Linux, macOS, and at least some BSD variants. Spectre is more complicated; at-risk applications (notably, browsers) are being updated with certain Spectre-mitigating techniques to guard against the array bounds variant, while the branch prediction version requires both operating system and processor microcode updates. While AMD initially downplayed the significance of this attack, the company has since published a microcode update to give operating systems the control they need.

These different mitigation techniques all come with a performance cost. Speculative execution is used to make the processor run our programs faster, and branch predictors are used to make that speculation adaptive to the specific programs and data that we're using. The countermeasures all make that speculation somewhat less powerful. The big question is, how much?

by Peter Bright, ARS Technica |  Read more:
Image: Aurich/Getty
[ed. A graduate seminar in microprocessor technology.]

Thursday, January 11, 2018


Gyakusou
[ed. Cool pants.]

Motion Capture Tech For Fixing Your Golf Game

Albert Einstein was once asked if he played golf. “No, no,” said the man who devised the theory of relativity. “Too complicated.” The story has served as a humbling reminder that even geniuses can find golf to be, as Bobby Jones, a co-founder of the Masters Tournament, described it, “a mystifying game.”

But in the past year, golf instructors have begun using an unassuming piece of technology that aims to take the guesswork out of your stroke. MySwing, introduced in late 2016, is a small box with 17 motion-capture sensors that attach to various parts of the body—the shin, the top of the feet, around the arms and chest and forehead. A separate one attaches to the club.

Once the sensors are calibrated on a Windows-based device, a skeletal avatar appears on screen and begins to move with you in real time. Take a few swings, and the feeling is similar to a science-fiction fantasy. (Everyone from Game of Thrones to NASA creates characters using mapping tech from MySwing’s Beijing-based parent company, Noitom Ltd.—“motion” spelled backward.) The system re-creates the angles, tilt, and rotations of your swing and plays it back from overhead, behind, and the side.

The key, though, is the software, which produces line graphs and bar charts that tell you whether you need to be more patient with your arms and get your lower body to do a better job of initiating the downswing. It can observe, with sometimes excruciating detail, that the bum shoulder you got from playing college football is costing you 20 degrees on your turn, or that your right leg is overcompensating for a weak left one.

Swing-analyzing technology isn’t new, says golf instructor Ben Shear, who advises top pros such as Luke Donald and hosts the Golfers Edge show on SiriusXM’s PGA Tour Radio channel. But the old systems took an hour to set up, whereas MySwing takes about 20 minutes from start to finish. The sensors attach wirelessly, another first, and can be used indoors or outside. Most important, it’s only $6,000, a relatively affordable piece of equipment for a country club that wants a competitive advantage. (TrackMan Golf, the shot-monitoring technology familiar from television tournament broadcasts, runs closer to $25,000.) (...)

“A lot of golfers are guys who sit behind a desk working 60 hours a week, they’ve got three kids who are all in sports, and they’re driving them everywhere,” Shear says. “They’re not going to get to the gym four times a week. But they still want to know what their physical capabilities are. And then I can build a golf swing around what they can actually do.”

Some limitations may not be physical. “If you can’t chip and putt, then this isn’t going to help you all that much,” Shear says with a laugh. “If you’ve got a 4-footer and you just rolled it by 10 feet, then that’s why you’re not good at golf.”

by James Gaddy, Bloomberg |  Read more:
Image: MySwing

Wednesday, January 10, 2018

After Hours: Off-Peaking

Mr. Money Mustache is in his early 40s, and he has been retired for 12 years. “One of the key principles of Mustachianism,” begins a lofty 2013 post, “is that any and all lineups, queues, and other sardine-like collections of humans must be viewed with the squinty eyes of skepticism.” His blog explains that everything you have been taught about money and time is wrong. Mr. Money Mustache, once the subject of a New Yorker profile, worked as a software engineer and saved half of his salary from the age of 20, and his vision of time is that of an engineer: time becomes a machine that can be tinkered with, hours and minutes rewired to achieve a more elegant purpose. His primary message is that you will not achieve financial security and personal happiness by working harder to get ahead of the pack; you will find these things by carefully studying what the pack is doing and then doing the opposite.

A post entitled “A Peak Life is Lived Off-Peak” extols the virtues of doing everything at the wrong time. The Mustache family lives in Colorado, where everyone goes skiing on the weekends; Mr. Mustache recommends hitting the slopes on Tuesdays. The Mustaches drive through major cities between 10 in the morning and four in the afternoon. Thursday morning is for teaching robotics to his son, whom he homeschools; below-freezing nights in January are for moonlit walks. Holidays are to be taken only when everyone else is at work. “Most people spend most of their time doing what everyone else does, without giving it much thought,” Mr. Money Mustache writes. “And thus, it is usually very profitable to avoid doing what everyone else is doing.”

The Mustaches are not the only online evangelists for the off-peak lifestyle. In a post entitled, “I Want You to Become an Off-Peak Person!” Peter Shankman, an entrepreneur who writes about turning his ADHD to his advantage, recommends grocery shopping at one in the morning. J.P. Livingston’s blog the Money Habit features photos of New York City that make it seem like a small town: a thinly populated subway, a near-empty museum. (The bins in time’s bargain basement seem to be overflowing with Tuesdays: train rides, drinks, meals, museum visits, and movies are cheaper when they happen on what is referred to in Canada as “Toonie Tuesdays,” in Australia as “Tight-Arse Tuesdays.”)

The thesis of off-peak evangelism is summed up by one of Mr. Mustache’s calls for a rejection of conformity: “In our natural state,” he writes, “we are supposed to be a diverse and individualistic species.” It is natural, he argues, for individual schedules to vary — why should we all expect to eat, sleep, work, and play in lockstep, like members of a militaristic cult? Standardized schedules create waste and clog infrastructure. Off-peak evangelism ascribes a market value to individuality and diversity as mechanisms for repurposing humanity’s collective wasted time. While not a formalized movement, people who blog about off-peaking often seem to feel that they’ve discovered a secret too good to keep to themselves — something that was right in front of us the whole time, requiring only that we recognize our own power to choose.

Off-peaking is the closest thing to a Platonic form of subculture: its entire content is its opposition to the mainstream. As an economic approach, the solution off-peaking proposes can seem unkind — it’s a microcosm of the larger capitalist idea that it is right to profit from the captivity of others. And yet off-peakers only want, in effect, to slow time down by stretching the best parts of experience while wasting less. The arguments for off-peaking have centered on both the economic and the social advantages of recuperating unexploited time, like a form of temporal dumpster-diving that restores worth to low-demand goods. (...)

Taken at its most individualistic, it can seem that the idea of off-peaking is not to free everyone from the bonds of inefficiency, but to position oneself to take advantage of the unthinking conformity of others. Success depends upon continued brokenness, not on fixing what is broken — or at least, on fixing it only for oneself and a canny self-selecting few. In this view, off-peaking is a miniaturized entrepreneurialism that exploits a wonky blip in the way slots of time are assigned value; a matter of identifying an arbitrage opportunity created by the system’s lack of self-awareness.

The comment sections of off-peakers’ blogs are, paradoxically, bustling: stories of going to bed at nine and waking up at four to ensure that the day is perfectly out of step; Legoland on Wednesdays in October; eating in restaurants as soon as they open rather than waiting for standard meal times. There’s a wealth of bargains to be had by juggling one’s calendar to take advantage of deals. (The app Ibotta, which tracks fluctuating prices on consumer goods popular with millennials, determined that Tuesdays are actually the worst days to buy rosé and kombucha; you should buy them on Wednesdays. Avocados are also cheapest on Wednesdays, while quinoa should be bought on Thursdays and hot sauce on Fridays.) Many posters write that they are considering changing professions or homeschooling their children to join the off-peakers.

Some off-peakers are motivated by savings, some by avoiding crowds, but off-peaking also offers a more abstract pleasure: the sheer delight in doing the unexpected. The gravitas attached to the seasons of life listed off in Ecclesiastes is echoed in the moral overtones attached to perceptions of what is appropriate for different hours of the day. It is wrong to laugh when everyone else is weeping or to embrace when everyone else is refraining from embracing. Ordinary activities become subversive when done at the wrong time: eating spaghetti for dinner is ordinary, but having linguini with clam sauce for breakfast breaks the unwritten rules. Once you start transgressing, it can be hard to stop: The arbitrariness of custom begins to chafe.

But off-peakers are generally not hoping to be completely solitary in their pursuits; most people don’t want to be the only person in their step-aerobics class at two in the afternoon. Instead, they want to be one among a smaller, more manageable group than urban cohorts tend to allow. Subcultures offer the pleasure of being different along with the pleasure of being the same; variation becomes a passport to acceptance. The two people who encounter one another at the aquarium on a Wednesday morning appear to have more in common than the two hundred people who see each other there on a weekend. Like other choices that divide people into subsets, off-peaking allows its adherents to discover a kinship that may or may not reveal a significant similarity in worldview.

by Linda Besner, Real Life |  Read more:
Image: Movie Theater, Los Angeles by Ed Freeman
[ed. The New Yorker link on Mr. Money Mustache is a great read in itself.]

The Breeders

The Strange Brands In Your Instagram Feed

It all started with an Instagram ad for a coat, the West Louis (TM) Business-Man Windproof Long Coat to be specific. It looked like a decent camel coat, not fancy but fine. And I’d been looking for one just that color, so when the ad touting the coat popped up and the price was in the double-digits, I figured: hey, a deal!

The brand, West Louis, seemed like another one of the small clothing companies that has me tagged in the vast Facebook-advertising ecosystem as someone who likes buying clothes: Faherty, Birdwell Beach Britches, Life After Denim, some wool underwear brand that claims I only need two pairs per week, sundry bootmakers.

Perhaps the copy on the West Louis site was a little much, claiming “West Louis is the perfection of modern gentlemen clothing,” but in a world where an oil company can claim to “fuel connections,” who was I to fault a small entrepreneur for some purple prose?

Several weeks later, the coat showed up in a black plastic bag emblazoned with the markings of China Post, that nation’s postal service. I tore it open and pulled out the coat. The material has the softness of a Las Vegas carpet and the rich sheen of a velour jumpsuit. The fabric is so synthetic, it could probably be refined into bunker fuel for a ship. It was, technically, the item I ordered, only shabbier than I expected in every aspect.

I went to the West Louis Instagram account and found 20 total posts, all made between June and October of 2017. Most are just pictures of clothes. Doing a reverse image search, it’s clear that the Business-Man Windproof Long Coat is sold throughout the world on a variety of retail websites. For another sweatshirt I purchased through Instagram, I tracked down no fewer than 15 shops selling the identical item. I bought mine from Thecuttedge.life, but I could have gotten it from Gonthwid, Hzijue, Romwe, HypeClothing, Manvestment, Ladae Picassa, or Kovfee. Each very lightly brands the sweatshirt as its own, but features identical pictures of a mustachioed, tattooed model. That a decent percentage of the brands are unpronounceable in English just adds to the covfefe of it all.

All these sites use a platform called Shopify, which is like the WordPress or Blogger of e-commerce, enabling completely turnkey online stores. Now, it has over 500,000 merchants, a number that’s grown 74 percent per year over the last five years. On the big shopping days around Thanksgiving, they were doing $1 million in transactions per minute. And the “vast majority” of the stores on the service are small to medium-sized businesses, the company told me.

Shopify serves as the base layer for an emerging ecosystem that solders digital advertising through Facebook onto the world of Asian manufacturers and wholesalers who rep their companies on Alibaba and its foreigner-friendly counterpart, AliExpress.

It’s a fascinating new retail world, a mutation of globalized capitalism that’s been growing in the cracks of mainstream commerce.

Here’s how it works.

“What is up everybody?!” a fresh-faced man with messy brown hair shouts into the camera. Behind him, two computers sit open on a white desk in a white room. By the looks of him, he might not be an adult, but he has already learned to look directly into the camera when delivering the ever-appealing gospel of Easy Money on the Internet.

“In this challenge, I’m going to take a brand new Shopify store to over one thousand dollars,” he says. “So I invite you to follow along with me as I take this brand new store from 0, literally 0, to over one thousand dollars in the next 7 days.”

In the corner of YouTube dedicated to e-commerce, these videos are a bit of a phenomenon, racking up hundreds of thousands of views for highly detailed explanations of how to set up an e-commerce shop on the Internet.

Their star is Rory Ganon. Though his accent is Irish (“tousand”), his diction is pure LA YouTuber. He’s repetitive, makes quick cuts, and delivers every line with the conviction of youth. He appears to live in Ratoath, a small Irish commuter town about half an hour outside Dublin. His Facebook page describes him as a 17-year-old entrepreneur.

His success finding an audience seems predicated on the fact that when he says he’s going to show you everything, he really is going to show you everything. Like, you will watch his screen as he goes about setting up a store, so anyone can follow along at home. He’s a Bob Ross of e-commerce.

These techniques work the same for him as for Gucci. Some Instagram retailers are legit brands with employees and products. Others are simply middlemen for Chinese goods, built in bedrooms, and launched with no capital or inventory. All of them have been pulled into existence by the power of Instagram and Facebook ads combined with a suite of e-commerce tools based around Shopify.

The products don’t matter to the system, nor do they matter to Ganon. The whole idea of retail gets inverted in his videos. What he actually sells in his stores is secondary to how he does it. It’s as if he squirts hot dogs on his ketchup and mustard.

What Ganon does is pick suppliers he’ll never know to ship products he’ll never touch. All his effort goes into creating ads to capture prospective customers, and then optimizing a digital environment that encourages them to buy whatever piece of crap he’s put in front of them.

And he is not alone. (...)

Ganon’s videos are particularly fascinating in describing the mechanics of digital advertising through Instagram and Facebook.

In the tutorial, he briefly discusses finding a niche for the products in your store, and he uses some business school powerpoint terms. But when he actually selects a niche, it is Lions. That’s right: Lions, the animals.

by Alexis C. Madrigal, The Atlantic |  Read more:
Image: Alexis Madrigal

Tuesday, January 9, 2018

Tommy Guerrero


Brano Hlavac

via:

Fifty Psychological and Psychiatric Terms to Avoid

Abstract

The goal of this article is to promote clear thinking and clear writing among students and teachers of psychological science by curbing terminological misinformation and confusion. To this end, we present a provisional list of 50 commonly used terms in psychology, psychiatry, and allied fields that should be avoided, or at most used sparingly and with explicit caveats. We provide corrective information for students, instructors, and researchers regarding these terms, which we organize for expository purposes into five categories: inaccurate or misleading terms, frequently misused terms, ambiguous terms, oxymorons, and pleonasms. For each term, we (a) explain why it is problematic, (b) delineate one or more examples of its misuse, and (c) when pertinent, offer recommendations for preferable terms. By being more judicious in their use of terminology, psychologists and psychiatrists can foster clearer thinking in their students and the field at large regarding mental phenomena. (...)

Inaccurate or Misleading Terms

(1) A gene for. The news media is awash in reports of identifying “genes for” a myriad of phenotypes, including personality traits, mental illnesses, homosexuality, and political attitudes (Sapolsky, 1997). For example, in 2010, The Telegraph (2010) trumpeted the headline, “‘Liberal gene’ discovered by scientists.” Nevertheless, because genes code for proteins, there are no “genes for” phenotypes per se, including behavioral phenotypes (Falk, 2014). Moreover, genome-wide association studies of major psychiatric disorders, such as schizophrenia and bipolar disorder, suggest that there are probably few or no genes of major effect (Kendler, 2005). In this respect, these disorders are unlike single-gene medical disorders, such as Huntington’s disease or cystic fibrosis. The same conclusion probably holds for all personality traits (De Moor et al., 2012).

Not surprisingly, early claims that the monoamine oxidase-A (MAO-A) gene is a “warrior gene” (McDermott et al., 2009) have not withstood scrutiny. This polymorphism appears to be only modestly associated with risk for aggression, and it has been reported to be associated with conditions that are not tied to a markedly heightened risk of aggression, such as major depression, panic disorder, and autism spectrum disorder (Buckholtz and Meyer-Lindenberg, 2013; Ficks and Waldman, 2014). The evidence for a “God gene,” which supposedly predisposes people to mystical or spiritual experiences, is arguably even less impressive (Shermer, 2015) and no more compelling than that for a “God spot” in the brain (see “God spot”). Incidentally, the term “gene” should not be confused with the term “allele”; genes are stretches of DNA that code for a given morphological or behavioral characteristic, whereas alleles are differing versions of a specific polymorphism in a gene (Pashley, 1994).

(2) Antidepressant medication. Medications such as tricyclics, selective serotonin reuptake inhibitors, and selective serotonin and norepinephrine reuptake inhibitors, are routinely called “antidepressants.” Yet there is little evidence that these medications are more efficacious for treating (or preventing relapse for) mood disorders than for several other conditions, such as anxiety-related disorders (e.g., panic disorder, obsessive-compulsive disorder; Donovan et al., 2010) or bulimia nervosa (Tortorella et al., 2014). Hence, their specificity to depression is doubtful, and their name derives more from historical precedence—the initial evidence for their efficacy stemmed from research on depression (France et al., 2007)—than from scientific evidence. Moreover, some authors argue that these medications are considerably less efficacious than commonly claimed, and are beneficial for only severe, but not mild or moderate, depression, rendering the label of “antidepressant” potentially misleading (Antonuccio and Healy, 2012; but see Kramer, 2011, for an alternative view).

(3) Autism epidemic. Enormous effort has been expended to uncover the sources of the “autism epidemic” (e.g., King, 2011), the supposed massive increase in the incidence and prevalence of autism, now termed autism spectrum disorder, over the past 25 years. The causal factors posited to be implicated in this “epidemic” have included vaccines, television viewing, dietary allergies, antibiotics, and viruses.

Nevertheless, there is meager evidence that this purported epidemic reflects a genuine increase in the rates of autism per se as opposed to an increase in autism diagnoses stemming from several biases and artifacts, including heightened societal awareness of the features of autism (“detection bias”), growing incentives for school districts to report autism diagnoses, and a lowering of the diagnostic thresholds for autism across successive editions of the Diagnostic and Statistical Manual of Mental Disorders (Gernsbacher et al., 2005; Lilienfeld and Arkowitz, 2007). Indeed, data indicate that when the diagnostic criteria for autism were held constant, the rates of this disorder remained essentially constant between 1990 and 2010 (Baxter et al., 2015). If the rates of autism are increasing, the increase would appear to be slight at best, hardly justifying the widespread claim of an “epidemic.”

(4) Brain region X lights up. Many authors in the popular and academic literatures use such phrases as “brain area X lit up following manipulation Y” (e.g., Morin, 2011). This phrase is unfortunate for several reasons. First, the bright red and orange colors seen on functional brain imaging scans are superimposed by researchers to reflect regions of higher brain activation. Nevertheless, they may engender a perception of “illumination” in viewers. Second, the activations represented by these colors do not reflect neural activity per se; they reflect oxygen uptake by neurons and are at best indirect proxies of brain activity. Even then, this linkage may sometimes be unclear or perhaps absent (Ekstrom, 2010). Third, in almost all cases, the activations observed on brain scans are the products of subtraction of one experimental condition from another. Hence, they typically do not reflect the raw levels of neural activation in response to an experimental manipulation. For this reason, referring to a brain region that displays little or no activation in response to an experimental manipulation as a “dead zone” (e.g., Lamont, 2008) is similarly misleading. Fourth, depending on the neurotransmitters released and the brain areas in which they are released, the regions that are “activated” in a brain scan may actually be being inhibited rather than excited (Satel and Lilienfeld, 2013). Hence, from a functional perspective, these areas may be being “lit down” rather than “lit up.”

(5) Brainwashing. This term, which originated during the Korean War (Hunter, 1951) but which is still invoked uncritically from time to time in the academic literature (e.g., Ventegodt et al., 2009; Kluft, 2011), implies that powerful individuals wishing to persuade others can capitalize on a unique armamentarium of coercive procedures to change their long-term attitudes. Nevertheless, the attitude-change techniques used by so-called “brainwashers” are no different than standard persuasive methods identified by social psychologists, such as encouraging commitment to goals, manufacturing source credibility, forging an illusion of group consensus, and vivid testimonials (Zimbardo, 1997). Furthermore, there are ample reasons to doubt whether “brainwashing” permanently alters beliefs (Melton, 1999). For example, during the Korean War, only a small minority of the 3500 American political prisoners subjected to intense indoctrination techniques by Chinese captors generated false confessions. Moreover, an even smaller number (probably under 1%) displayed any signs of adherence to Communist ideologies following their return to the US, and even these were individuals who returned to Communist subcultures (Spanos, 1996).

(6) Bystander apathy. The classic work of Darley and Latane (e.g., Darley and Latane, 1968; Latane and Rodin, 1969) underscored the counterintuitive point that when it comes to emergencies, there is rarely “safety in numbers.” As this and subsequent research demonstrated, the more people present at an emergency, the lower the likelihood of receiving help. In early research, this phenomenon was called “bystander apathy” (Latane and Darley, 1969), a term that endures in many academic articles (e.g., Abbate et al., 2013). Nevertheless, research demonstrates that most bystanders are far from apathetic in emergencies (Glassman and Hadad, 2008). To the contrary, they are typically quite concerned about the victim, but are psychologically “frozen” by well-established psychological processes, such as pluralistic ignorance, diffusion of responsibility, and sheer fears of appearing foolish.

(7) Chemical imbalance. Thanks in part to the success of direct-to-consumer marketing campaigns by drug companies, the notion that major depression and allied disorders are caused by a “chemical imbalance” of neurotransmitters, such as serotonin and norepinephrine, has become a virtual truism in the eyes of the public (France et al., 2007; Deacon and Baird, 2009). This phrase even crops up in some academic sources; for example, one author wrote that one overarching framework for conceptualizing mental illness is a “biophysical model that posits a chemical imbalance” (Wheeler, 2011, p. 151). Nevertheless, the evidence for the chemical imbalance model is at best slim (Lacasse and Leo, 2005; Leo and Lacasse, 2008). One prominent psychiatrist even dubbed it an urban legend (Pies, 2011). There is no known “optimal” level of neurotransmitters in the brain, so it is unclear what would constitute an “imbalance.” Nor is there evidence for an optimal ratio among different neurotransmitter levels. Moreover, although serotonin reuptake inhibitors, such as fluoxetine (Prozac) and sertraline (Zoloft), appear to alleviate the symptoms of severe depression, there is evidence that at least one serotonin reuptake enhancer, namely tianeptine (Stablon), is also efficacious for depression (Akiki, 2014). The fact that two efficacious classes of medications exert opposing effects on serotonin levels raises questions concerning a simplistic chemical imbalance model.

by Scott O. Lilienfeld, Katheryn C. SauvignĂ©, Steven Jay Lynn, Robin L. Cautin, Robert D. Latzman, and Irwin D. Waldman, Frontiers in Psychology |  Read more:
Image: Frontiers in Psychology

Retail Investors Now True Believers with Record Exposure

As far as the stock market is concerned, it took a while – in fact, it took eight years – but retail investors are finally all in, bristling with enthusiasm. TD Ameritrade’s Investor Movement Index rose to 8.59 in December, a new record. TDA’s clients were net buyers for the 11th month in a row, one of the longest buying streaks, and they ended up with more exposure to the stock market than ever before in the history of the index.

This came after a blistering November, when the index had jumped 15%, “its largest single-month increase ever,” as TDA reported at the time, to 8.53, also a record.


Note how retail investors had been to varying degrees among the naysayers from the end of the Financial Crisis till the end of 2016, before they suddenly became true believers in February 2017.

“I don’t think the investors who are engaging regularly are doing so in a dangerous fashion,” said TDA Chief Market Strategist JJ Kinahan in an interview. But he added, clients at the beginning of 2017 were “up to their knees in it and then up to their thighs, and now up to their chests.”

The implication is that they could get in a little deeper before they’d drown.

“As the year went on, people got more confident,” he said. And despite major geopolitical issues, “the market was never tested at all” last year. There was this “buy-the-dip mentality” every time the market dipped 1% or 2%.

But one of his “bigger fears” this year is this very buy-the-dip mentality, he said. People buy when the market goes down 1% or 2%, and “it goes down 5%, then it goes down 8% — and they turn into sellers, and then they get an exponential move to the downside.”

In addition to some of the big names in the US – Amazon, Microsoft, Bank of America, etc. – TDA’s clients were “believers” in Chinese online retail and were big buyers of Alibaba and Tencent. But they were sellers of dividend stocks AT&T and Verizon as the yield of two-year Treasuries rose to nearly 2%, and offered a risk-free alternative at comparable yields.

And he added, with an eye out for this year: “It’s hard to believe that the market can go up unchallenged.”

This enthusiasm by retail investors confirms the surge in margin debt – a measure of stock market leverage and risk – which has been jumping from record to record, and hit a new high of $581 billion, up 16% from a year earlier.

And as MarketWatch reported, “cash balances for Charles Schwab clients reached their lowest level on record in the third quarter, according to Morgan Stanley, which wrote that retail investors ‘can’t stay away’ from stocks,” while the stock allocation index by the American Association of Individual Investors “jumped to 72%, its highest level since 2000…” as “retail investors – according to a Deutsche Bank analysis of consumer sentiment data – view the current environment as ‘the best time ever to invest in the market.’”

by Wolf Richter, Wolf Street |  Read more:
Image: TD Ameritrade
[ed. What could go wrong?]

Your Next Obsession: Retro Japanese Video Game Art


I am obsessed with something new in the world of design. Well, actually, something quite old. Specifically, late 90s and early 2000s Japanese video game art. And also, video game ads. And also, photos of old video game hardware. I am knee-deep in gaming nostalgia.

A lot of the art I’ve become fascinated with is a particular aesthetic born around the fourth generation of video gaming (spanning from the 16-bit boom of the PC Engine / TurboGrafx-16 and Sega Genesis, through to the original PlayStation, Sega Saturn, and the Dreamcast). One which blends hand-drawn art and lettering, dramatic typography, highly technical layouts, and colorful, sometimes cartoonish patterns.

Design, like fashion, moves in cycles, and we’re starting to see a new wave of Japanese game art in pop design. You can see it in the Richard Turley-led Wieden + Kennedy rebranding of the Formula One logo / design language (heavy, heavy shades of Wipeout) or in the varied styles of Australian artist Jonathan Zawada.

Cory Schmitz — a designer who’s worked on projects like the Oculus Rift rebranding and logo design for the game Shadow of the Colossus — has been assembling many of the best examples of the original era on his Tumblr, QuickQuick. I reached out to him to ask about what he was drawn to in this particular style: “As a designer this stuff is super inspirational because it’s so different from current design trends. A lot of unexpected colors, type, and compositions. And I really like the weird sense of nostalgia I get from stuff I haven’t necessarily seen before.” It’s Cory’s curation you’ll see a lot of in the card stack here.

As we move away from the Web 2.0 / Apple mandate of clean, orderly, sterile design, into a more playful, experimental, artistic phase (hello Dropbox redesign), this particular style of art feels like an obvious meeting point born out of a desire for orderly information delivery and a more primal need for some degree of controlled chaos. Mostly, though, it just looks really fucking cool.

by Joshua Topolsky, The Outline | Read more:
Image: Ian Anderson, Designers Republic

Monday, January 8, 2018


Tom Gauld
via:

Image: Angela Weiss/AFP via Getty
via:
[ed. My dream girl.]

Fight Me, Psychologists: Birth Order Effects Exist and Are Very Strong

“Birth order” refers to whether a child is the oldest, second-oldest, youngest, etc. in their family. For a while, pop psychologists created a whole industry around telling people how their birth order affected their personality: oldest children are more conservative, youngest children are more creative, etc.

Then people got around to actually studying it and couldn’t find any of that. Wikipedia’s birth order article says:
Claims that birth order affects human psychology are prevalent in family literature, but studies find such effects to be vanishingly small….the largest multi-study research suggests zero or near-zero effects. Birth-order theory has the characteristics of a zombie theory, as despite disconfirmation, it continues to have a strong presence in pop psychology and popular culture.
I ought to be totally in favor of getting this debunked. After all, the replication crisis in psychology highlights the need to remain skeptical of poorly-supported theories. And some of the seminal work disproving birth order was done by Judith Rich Harris, an intellectual hero of mine who profoundly shaped my worldview with her book The Nurture Assumption.

So I regret to have to inform you that birth order effects are totally a real thing.

I first started thinking this at transhumanist meetups, when it would occasionally come up that everyone there was an oldest child. The pattern was noticeable enough that I included questions about birth order on the latest SSC survey. This blog deals with a lot of issues around transhumanism, futurology, rationality, et cetera, so I thought it would attract the same kind of people.

7,248 people gave me enough information to calculate their birth order, but I am very paranoid because previous studies have failed by not accounting for family size. That is, people of certain economic classes/religions/races/whatever tend to have larger family sizes, and if you’re in a large family, you’re more likely to be a later-born child. In order to be absolutely sure I wasn’t making this mistake, I concentrated on within-family-size analyses. For example, there were 2965 respondents with exactly one sibling…

…and a full 2118 of those were the older of the two. That’s 71.4%. p ≤ 0.00000001. (...)
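
[ed. A quick sanity check of that p-value via the normal approximation to the binomial (my arithmetic, not Scott’s): under the null hypothesis that either sibling in a two-child family is equally likely to end up reading the blog, the number of firstborns is X ~ Bin(2965, 0.5), so

$$\mu = np = 1482.5, \qquad \sigma = \sqrt{np(1-p)} \approx 27.2, \qquad z = \frac{2118 - 1482.5}{27.2} \approx 23.3,$$

and a 23-sigma excess corresponds to p on the order of 10^-120. The quoted bound of p ≤ 0.00000001 is, if anything, extremely conservative.]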

So what is going on here?

It’s unlikely that age alone is driving these results. In sibships of two, older siblings on average were only about one year older than younger siblings. That can’t explain why one group reads this blog so much more often than the other.

And all of the traditional pop psychology claims about birth order don’t seem to hold up. I didn’t find any effect on anything that could be reasonably considered conservatism or rebelliousness.

But there is at least one reputable study that did find a few personality differences. This is Rohrer et al (2015), which examined a battery of personality traits and found birth order effects only on IQ and Openness to Experience, both very small.

I was only partly able to replicate this work. Rohrer et al found that eldest siblings had an advantage of about 1.5 IQ points. My study found the same: 1.3 to 1.7 IQ points depending on family size – but because of the sample size this did not achieve significance. (...)

The Openness results were clearer. Eldest children had significantly higher Openness (73rd %ile vs. 69th %ile, p = 0.001). Like Rohrer, I found no difference in any of the other Big Five traits.

Because I only had one blunt measure of Openness, I couldn’t do as detailed an analysis as Rohrer’s team. But they went on to subdivide Openness into two subcomponents, Intellect and Imagination, and found birth order only affected Intellect. They sort of blew Intellect off as just “self-estimated IQ”, but I don’t think this is right. Looking at it more broadly, it seems to be a measure of intellectual curiosity – for example, one of the questions they asked was, “I am someone who is eager for knowledge”. Educational Testing Service describes it as “liking complex problems”, and its opposite as “avoiding philosophical discussion”.

This seems promising. If older siblings were more likely to enjoy complex philosophical discussion, that would help explain why they are so much more likely to read a blog about science and current events. Unfortunately, the scale is completely wrong. Rohrer et al’s effects are tiny – going from a firstborn to a secondborn has an effect size of 0.1 SD on Intellect. In order to contain 71.6% firstborns, this blog would have to select for people above the 99.99999999th percentile in Intellect. There are only 0.8 people at that level in the world, so no existing group is that heavily selected.

I think the most likely explanation is that tests for Openness have limited validity, which makes the correlation look smaller than it really is. If being an eldest sibling increases true underlying Openness by a lot, but your score on psychometric tests for Openness only correlates modestly with true underlying Openness, that would look like being an eldest sibling only increasing test-measured-Openness a little bit.
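
[ed. One simple way to formalize this attenuation argument, assuming a linear measurement model in which the test score equals the true trait plus independent noise: if the test correlates ρ with underlying Openness, a true group difference of d standard deviations shows up on the test as only

$$d_{\text{observed}} = \rho \, d_{\text{true}},$$

so a true effect of 0.5 SD measured with a validity of ρ = 0.2 would appear as roughly the 0.1 SD that Rohrer et al report.]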

(cf. Riemann and Kandler (2010), which finds that the heritability of Openness shoots way up if you do a better job assessing it)

If we suppose that birth order has a moderate effect size on intellectual curiosity of 0.5 SD, that would imply that science blogs select for people in the top 3% or so of intellectual curiosity, a much more reasonable number. Positing higher (but still within the range of plausibility) effect sizes would decrease the necessary filtering even further.
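
[ed. One way to make the selection arithmetic explicit (a toy model of my own, not necessarily the one used in the post): restrict to two-child families, where the population base rate is an even split; assume firstborns’ trait distribution is shifted up by d SD; and suppose the blog recruits everyone above a threshold t. Writing S(t) = 1 − Φ(t) for the normal tail probability, the expected firstborn share among readers is

$$\text{share}(t) = \frac{S(t-d)}{S(t-d) + S(t)}.$$

With d = 0.1, matching the observed 71.4% share requires t ≈ 9, a selectivity of roughly one person in 10^19; with d = 0.5, t ≈ 1.6 suffices, i.e., selecting roughly the top tenth of the distribution. The exact percentiles depend on modeling choices, but the qualitative conclusion matches the post: a 0.1 SD effect cannot produce this skew, while a moderate one can.]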

If this is right, it suggests Rohrer et al undersold their conclusion. Their bottom line was something like “birth order effects may exist for a few traits, but are too small to matter”. I agree they may only exist for a few traits, but they can be strong enough to skew ratios in some heavily-selected communities like this one.

When I asked around about this, a couple of people brought up further evidence. Liam Clegg pointed out that philosophy professor Michael Sandel asks his students to raise their hand if they’re the oldest in their family, and usually gets about 80% of the class. And Julia Rohrer herself was kind enough to add her voice and say that:
I’m not up to fight you because I think you might be onto something real here. Just to throw in my own anecdotal data: The topic of birth order effect comes up quite frequently when I chat with people in academic contexts, and more often than not (~80% of the time), the other person turns out to be firstborn. Of course, this could be biased by firstborns being more comfortable bringing up the topic given that they’re supposedly smarter, and it’s only anecdotes. Nonetheless, it sometimes makes me wonder whether we are missing something about the whole birth order story.
But why would eldest siblings have more intellectual curiosity? There are many good just-so stories, like parents having more time to read to them as children. But these demand strong effects of parenting on children’s later life outcomes, of exactly the sort that behavioral genetic studies consistently find not to exist. An alternate hypothesis could bring in weird immune stuff, like that thing where people with more older brothers are more likely to be gay because of maternal immunoreactivity to the Y chromosome (which my survey replicates, by the way). But this is a huge stretch and I don’t even know if people are sure this explains the homosexuality results, let alone the birth order ones.

If mainstream psychology becomes convinced this effect exists, I hope they’ll start doing the necessary next steps. This would involve seeing if biological siblings matter more or less than adopted siblings, whether there’s a difference between paternal and maternal half-siblings, how sibling age gaps work into this, and whether only children are more like oldests or youngests. Their reward would be finding some variable affecting children’s inherent intellectual curiosity – one that might offer opportunities for intervention.

by Scott Alexander, Slate Star Codex |  Read more:
Image: Emily
[ed. I participated in this survey. Also a firstborn in my family.]