Friday, July 13, 2018

Vengeance Is Mine

Watching the results on Election Night was what I’d imagine living in an eighties teen horror movie would be like — the summer camp air curdling into one of vague suspicion, as a strange dawning sensation of doom takes hold. Slaughter: Ohio, Florida, Michigan — all bloody and prone. Who will be picked off next? Pennsylvania? Wisconsin? Minnesota? Your state? The vote is coming from inside the house.

Trump didn’t think he was going to win — not him, not his cracked, wincing campaign manager, not the sozzled Nazi werewolf chairing his presidential bid, not the jackal pack advising him, not the rival camp, not the media. Trump, that demented circus peanut, knew that he had lost every debate, that he had failed to appeal to the mystical moderate voters who determine elections, that he had trailed in most every poll.

And yet when the ballot boxes were locked and the results came filtering back, Clinton was in trouble. A few hours later, she was dead meat. DOA.

There was no grand strategy here. Trump was obviously petrified and unsure of himself, woozily winding his way onto the Hilton dais to claim victory at 3:00 in the morning. This plainly wasn’t supposed to happen. Trump, pea-brained gurnard that he is, only swims downstream; he’s never supposed to reach the spawning ground.

Meeting with Obama, Trump looked awkward and reticent, his blazer button straining against his gut, his tie snaking under and over his crotch like a long, red tongue. He had no mastery of the moment, no sense of prestige, no long-awaited vindication. Trump looked like he was awkwardly stuck in a waiting room with the president of the United States, dreading the doctor who’d soon be plumbing his asshole for cancer.

But then, the fact that this busted orange mule never knew what he was doing hadn’t slowed him or his ilk down before. Donald Trump is not the anomaly he’s been made out to be — he’s as American as apple pie and measles blankets, the real Jay Gatsby, a bogeyman half Horatio Alger, half Bernie Madoff.

Nor is he unprecedented in American politics. There is no brighter era of America in which Trump’s spiritual fathers cannot be found, ranging from Andrew Jackson to Teddy Roosevelt, and crescendoing with Ronald Reagan — that simpering, vicious moron of whom America is so fond.

Perhaps it seemed impossible Trump could win. I thought Hillary had it put away. The metrics were broken. Clocks no longer worked. The wheel of time had spun off its axis. These excuses are all valid to some extent. But some humility is called for here. I was wrong, and the fact that I saw some of the seeds of destruction, but not the crushing denouement, makes my errors worse.

You may have been wrong. Few were right, and those who were tended to belong to the repellent, remorseless core of Trump’s chickenshit klavern. The worst people in the country won. I see no point in false optimism or silver linings here; this is seriously fucked up.

But this catastrophe was not inevitable. This apocalypse can be combated.

In order to defeat Trumpism — that strange bundle of live wires currently electrocuting the country — it is necessary to first understand how it prevailed on Election Day. That its victory is, right now, more tenuous than its most loyal adherents believe does not make it any less potent a threat to the most vulnerable people in America, or to the country’s few remaining worthwhile norms and functioning civic institutions. Or, indeed, to life itself on this planet.

It starts with accepting a simple fact: the Republican Party nominated a candidate better suited to winning a presidential race in 2016.

In an election in which both candidates disgusted most of the country, one excited their base, while the other did not. One candidate seemed authentic, fresh, and energizing to many; the other did not. One candidate spoke, in dark, meandering screeds, of fears not usually reconciled in political speech, and of concerns and issues rarely addressed by either party; the other did not.

There was a variance in energies between the two sides, and enough of a magnetic pull to the demagogue, and away from the gray avatar of everything decrepit and failed in politics, for the most improbable of victories to occur.

That this winning candidate is also one of the biggest assholes on the face of the Earth, an authoritarian fraud who might destroy the world, is irrelevant. Find me a self-proclaimed “rational voter,” and I will show you a liar. A vote for Trump was not some endlessly reasoned and debated decision for many American voters — but an impulsive one.

Many Trump voters went for him for simple reasons that ran the gamut: they didn’t know what else to do, they were fearful, they were entitled, they were angry and embittered, they were racist and malicious, they were confused and without malice, they were ignorant, they were not ignorant — they do not accept life in America as it is today, and they will vote, if courted, for the one guy willing to walk through life as a six-foot upraised middle finger to everything known and despised. Maybe some of his promises will come true — and failing that, at least we sent a big “fuck you” up the flagpole.

It was appealing. It was a decision that couldn’t be ignored by the elite who usually rule American life unperturbed.

This shock to the system did not occur in a vacuum. The crisis of President Donald John Trump is the bill coming due on a four-decade social, political, and economic project that has succeeded in worsening, coarsening, and ending the lives of hundreds of millions of Americans. This disease permeates the air in America, crystallizing into a constellation of pain: loneliness, frustration, despair, as immutable as the stars in the night sky — distant, implacable, and hanging over every town in the country.

Yet we don’t even have a name for it. Liberals rear back in horror at the insane climate denialism of their opposite numbers — but what then is the liberal reaction to the reality that our country is a cesspit for the vast majority of its inhabitants, an everyday gambit of fear and humiliation? That “America is already great,” or even more cloying and nonsensical, that “America is great because America is good” — is it any wonder the Democrats lost?

These are insults added to the injury, smug testimony from our leaders that the pain and confusion so many Americans feel are somehow not real — even as, when the cameras are turned away, these same leaders beggar and impoverish more Americans. Much of the post-election nattering has focused on the role of the white working class in voting for Trump, but I would go further: this election was proof of the utter failure of either party to be relevant to 99 percent of American life — to even acknowledge the desperation that is a fact of life for most of the country.

There was an enormous disparity in the energies fueling each major candidacy. The GOP was stormed by a charismatic strongman who delighted in shooting his mainline rivals in the backs of their heads, cheerfully driving a backhoe over the mass grave, to the noisy acclamation of his faithful — the pied piper of a brutal and popular awakening.

And why not? Jeb Bush and John Kasich and Marco Rubio and the dozen-odd graspers on those awful debate platforms were the architects of so much grief and misery in this country that even a show trial would’ve stretched on for decades. Better that a wealthy Cheshire Cat cut each of them to the quick on live television. This is not rocket science here. It was enjoyable for people to see this happen. And the promises he made — oh! Baron, like love to me!

For electoral reasons, the Democrats must pretend they care about ordinary people’s well-being — that they are not a party of capital, as the Republicans obviously are. How ironic it was then that Trump’s message, which occasionally cut against the grain of typical GOP messaging, occupied their usual terrain: paeans to manufacturing, to the resurrection of American industry, to vague, all-inclusive health care, under a system in which “I will not allow people to die on the sidewalks and streets of our country.” Marry this to a virulent program of murder, deportation, and scapegoating, and you have the makings of a pretty decent dictator.

It was Trump’s hellish, dystopian vision — that “our country does not feel ‘great already’ to the millions of wonderful people living in poverty, violence and despair” — which nevertheless verged closer to the unspeakable truth. Trump, in his restorative mode, of “making America great again,” unwittingly did what his opponents could not in any plausible sense display: he recognized that this country does not feel great to many people living in it.

In his chauvinistic squeal of a campaign, in which he mobilized white nationalism as a binding force for his faithful, he promised the world to those poor working whites, with a lunatic’s lack of foresight and self-control. Trump cruelly raised the hopes of enough of them to eke out a sidelong victory.

What powered Trump to victory was the maintenance of the Republican coalition, and a hundred thousand voters in several economically depressed northern and midwestern states that had previously gone for Obama. There were racists, there were nascent fascists, there were diseased rich fucks — but, I am sorry to tell you, there were also people who’d have chosen a better option were it presented. If you can’t understand that, you risk two terms of this insanity.

These are the facts of Trump’s narrow electoral victory. He did not win the popular vote. He won where it mattered, among people who don’t feel they matter.

It’s funny, isn’t it, who was right and who was wrong. The Samantha Bees and John Olivers and Trevor Noahs of the world had their fun little jokes about Trump, didn’t they — humorless, vapid Trump, resolutely unable to laugh at himself. He’s orange, with a two-digit IQ, and takes shits in a gold toilet bowl. And his followers, oh, what a gift for comedy — unhinged, unwell, violent — and best of all, loathsome, the perfect target of derision, because who would feel bad mocking the worst people in the world?

And yet. In the words of Trump’s slimy limey, the odious Nigel Farage, crowing to the European Parliament in the wake of the Brexit vote: “You’re not laughing now, are you?”

It’s what you see in front of you and pretend isn’t there that gets you — not what you don’t know and can’t find out. Case in point: the Democratic standard-bearer this year, Hillary Rodham Clinton — one of the worst presidential candidates in American history.

To hear the Clinton loyalists tell it from the artificial moon they live on, orbiting our corporeal reality in a dissociative fugue state, voters in Fond du Lac and Saginaw and Scranton voted against Clinton only because of a malicious media, James Comey, Benghazi, emails, and Vladimir Putin — and not because, by every metric, they hate her fucking guts and have done so for thirty years.

This is the reality anybody with two volts of brainpower and a Rust Belt address might’ve stumbled across, yet which somehow eluded every major Democrat in an election year.

Why is Hillary Clinton despised? Misogyny, of course, a deep-running vein of it — Clinton is right in her suspicion that her persistence in public life has bred contempt in a way no man could ever invite. The violent extremity and gendered viciousness visited upon her is no accident; it speaks to a deep sickness in American men. She is, after all, a woman who demanded a man’s career, no small source of resentment to many Americans of both genders.

But there’s the rub. The Democrats are, plainly, co-conspirators in the destruction of American life, “history’s second-most enthusiastic capitalist party” — the willing executioners for free-market zealots, warmongers, and Wall Street. A career engaged in such politics is a morally undesirable career, no matter your gender. Especially so when you are the type of politician Hillary Clinton was born to be: an ignorant hawk with no conception of how her feckless adventurism might destroy entire societies; a greedlord, in love with the accumulation of wealth; and, most vividly, a lying hack who couldn’t sound sincere with the Sword of Damocles hanging over her.

She cannibalized Sanders’s platform when it suited her, with the shamelessness of a starving vulture, then discarded it again. She had no ideas, and ran a campaign suggesting as much. I don’t think anybody really deserves Trump — but Hillary Clinton deserved to lose.

The endless celebrity deification, the forced jocularity, the feigned hipness, the idiotic sops to pop culture, the lifeless, stage-managed jokes, the pervading sense that this was all perfunctory to her, an inconvenient hurdle to be cleared, en route to the office that was somehow hers by right — the abiding sense that whoever Hillary Clinton actually is, she is not going to be found in public. It seemed like this inclination was only worsened by her advisers, one of the most rancid collections of suck-ups, influence-peddlers, and incompetents since the Harding Administration. (In fairness, Trump is about to give her a run for her money.)

This echo chamber of sycophants didn’t seem to get that not everyone viewed Hillary’s run as so historic, or deserving of reverence — and in their near-pathological inability to accept criticism or fault, ensured they ran a weak candidate, wounded by a thousand cuts, with no compelling reason for running.

No compelling reason for running — I’m not sure Trump has one either, but what he lacks in design, he makes up for in creepy fascist agitprop. With Clinton, it was never clear what the hell she was doing on stage.

The leaked emails showed that the Democratic elite had connived and conspired to boost the fortunes of one of the most widely disliked charlatans in recent political history, in a primary campaign that had all the trappings of a good, old-fashioned Dem machine ratfucking, but with none of the skill. It wasn’t merely that Team Hillary came off as venal and corrupt — it was how stupid they were. For all their whining about the email scandal, it was an entirely self-inflicted wound, a classic Clinton scandal: one part wrongdoing, two parts arrogant refusal to admit wrongdoing.

Not that the email scandal really mattered. Though the Clinton gang will never admit it, Comey was a paper-pusher desperate to avoid appearing to influence the election one way or the other; in covering his ass, he came down slightly against Clinton.

Imagine the hue and cry if Comey hadn’t blurted out the existence of new, unexamined emails, and one of his psycho special agents had leaked the news. The Trump mob would’ve flash-fried Comey in hot oil. If anything took hold from that investigation, it only reinforced what a significant number of Americans already believed: that Clinton was really as inauthentic and untrustworthy as she seemed.

The bill of goods was no good at all, from day one. Nobody really wanted her to be president.

Unless you were very excited for Hillary to be the first female president — a proclivity most young women found secondary during the primaries — the only reason to vote for her was to deny Trump access to the nuclear codes. The fact that Team Clinton ran a tactically incompetent campaign, up and down, with no meaningful awareness of the conditions most propitious for victory, was icing on the cake.

The result is, they lost the race for the most powerful office on Earth to a version of Count Dracula that hates reading. If you lose to Donald Trump — serial sex predator and gold-plated bankruptcy pest Donald Trump — after he runs the campaign equivalent of eating paste, you are the biggest loser in the history of loserdom.

What would Hillary Clinton have done as president? Why was she running for president? I suspect the answers to these questions have nothing to do with policy — a subject conspicuously ignored by her most loyal acolytes, intent as they were on constructing a fantasy heroine image of Clinton unconcerned with her total mediocrity.

As with the many governors and senators who ran for president, seeking to fill the bottomless hole of ambition and ego upon which they’ve built the foundation of their empty lives, Hillary reached a point where she ran out of rungs on the ladder.

In this sense, she really did transcend the sexism that has dogged her entire career; she was one of those unlucky few captive to the delusions of high office. There was nowhere else to turn but towards the White House. The only alternative lay, perhaps, in admitting at last that such egomania devoid of principle is deadly — that in some deep and profound way, this is no way to live one’s life.

by Dan O'Sullivan, Jacobin |  Read more:
Image: uncredited
[ed. How did I miss this the first time around? Still relevant.  h/t Naked Capitalism]

A Person Alone: Leaning Out with Ottessa Moshfegh

The narrator of Ottessa Moshfegh’s new novel, My Year of Rest and Relaxation, a 24-year-old New Yorker, wants to shut the world out — by sedating herself into a near-constant slumber made possible by a cornucopia of prescription drugs. In various states of semi-consciousness, she begins “Sleepwalking, sleeptalking, sleep-online-chatting, sleepeating… sleepshopping on the computer and sleepordered Chinese delivery. I’d sleepsmoked. I’d sleeptexted and sleeptelephoned.” Her daily life revolves around sleeping as much as possible, and when she’s not sleeping, she’s pretty much obsessed with strategizing how to knock herself out for even longer the next time, constantly counting out her supply of pills.

Her behavior is so extreme — at one point, she seals her cell phone into a Tupperware container, which she discovers floating in a pool of water in the tub the following morning — that a New York Times reviewer dubbed Moshfegh’s work an “antisocial” novel. Moshfegh, the author of Homesick for Another World and Eileen, which was shortlisted for the National Book Critics Circle Award and the Man Booker Prize, has a knack for creating offbeat characters who don’t fit into neat categories. Like other women in Moshfegh’s stories, the heroine in My Year of Rest and Relaxation is unsettling. She is beautiful, thin, privileged — and deeply troubled.

Moshfegh’s work has earned praise from critics for her unflinching portrayal of characters who can make us uncomfortable, and she has become an author to watch. I spoke to Moshfegh on the phone while she was at home in Los Angeles preparing for her book launch. This interview has been edited for length and clarity. (...)

Some novels, like Sheila Heti’s Motherhood, have been put in this category of autofiction. What do you think of that as a category? Do you feel that your novel falls into that in any way?

I don’t even know what autofiction is so I will refuse the category outright and let scholars and critics do the categorizing for themselves.

It’s been used to refer to fiction that has a strong autobiographical foundation. Is that true of this novel? Was that why, in part, it was difficult for you to write?

I think the thing that I have in common with this character is that I am acutely aware of how much I do not like my own mind. When I’m not distracted by my imagination or by something external, time passing feels like I’m just waiting for the time to pass until I die. It’s kind of like vigilant awareness of mortality and mindfulness.

I don’t know how other people feel, but I’m assuming that that feeling is the thing that triggers personality traits and reasons for being certain ways.

The secondary character, Reva, has all these obsessions and issues that she’s filled her life with in order to keep herself from feeling that oceanic dread of emptiness, which is just like you are an enclosed mind in a mortal body that’s going to die. All of that frustration is really what motivated me personally to take an interest in the book.

I’m also interested in the timing of the book. It’s set in New York City, leading up to 9/11. Why did you choose that time period?

I chose that time period because I think New York City before 9/11 was very different. The effect of 9/11 on the entire geographic location — which was experiencing major trauma and then also being abused by the media and politics to believe that this was the reason we were making violence in the world — the way that 9/11 shaped our national identity, I think, was really fucked up and not the proper way to deal with trauma. Just psychologically for the people that experienced it. I don’t know how much healing we actually did.

People talk about 9/11 like a tragedy, but it was also celebrated every time we went to take some action in the Middle East. It became a justification for a lie. I think that’s really exploitative and evil. I think that it’s American evil. That’s what people don’t want to look at.

I mean, sure. People want to look at it now and we have all these neoliberal fascist people taking that to the next level. I don’t know how to reconcile those kinds of images with my reality. I don’t know how to be in New York City without thinking that over there somebody had to make a decision to either choke to death on toxic smoke or jump 70 stories. How the fuck am I ordering my latte? You know?

I don’t know how much New Yorkers are aware of that on a daily basis. I’ve certainly thought about it a lot. In some ways it’s this sort of teenager-y angst. Why is everybody always pretending that everything is okay? Nothing is okay.

In that sense it is about my life. But if people are talking about the connections between people’s fiction and their own life why is that a new concept? That’s been happening since the dawn of storytelling. (...)

You’ve said that you’re somebody who abandons things. You move or you have objects in your life and you go through periods where you purge them. That’s also something this character does toward the end — gets rid of everything except the essentials. What’s the significance of that move?

Well, I think that the theme is about nostalgia and attachment, and how that can trap us into being somebody that we actually aren’t anymore. And I mean, I don’t want to be like an advocate for throwing everything away, but I’m interested in how objects and places can keep us beholden to a self that isn’t who we want to be, and who maybe wasn’t who we ever were.

I could have been born anywhere. You know? So I’m attached to something because I have experience with it. But if I was born in, I don’t know, Madagascar, I would have a totally different set of associations. I wouldn’t be writing this book.

But maybe I would. Maybe I would be writing a book, I just wouldn’t know what New York had been like. I don’t know. But I also think that when you strip everything away, I think it’s an attempt to get at what’s true. Like, you know the feeling when you break up with somebody? Even if you’re in a lot of pain, you also go somewhere really deep because you have to attach to the core of who you are again instead of sharing some part of you that was like more in the middle of your core and your outside personality. And just like a way of getting deeper into who you really are. When you’ve gotten rid of all the accouterments you’ve used to describe your world. (...)

She’s pretty aware of her privilege. So she talks about her inheritance. She has checks that are deposited to her bank account. So this is something that she can afford to do. And you write about the privilege of the art world community. “The next generation of rich kids and art hags,” you write. What did privilege mean in New York nearly 20 years ago, and what does it mean today?

I don’t know if it’s changed from the beginning of time, but I think that when you’re born into wealth you’re, in some ways, at a disadvantage. Because you don’t really need to take part in that instinctual thing that’s bred into every living thing — which is I need to figure out how to survive — if your parents are just doing it for you. Some people are really good at that and I think for some people it’s very confusing.

I think that’s pretty consistent and I think it’s maybe why there’s so much room for absurdity or egomania, like look at our president. If he had to start off working in a factory, I think he would have developed into a different kind of person. Maybe he would still be — he’s kind of an egomaniac.

But I think where we’re at with the discussion of privilege is getting kind of annoying. Whatever privilege you have is suddenly something to be ashamed of in liberal circles. Like nobody really wants to admit that they’re fortunate.

How have you evolved as a writer?

I don’t think I have one answer, but I think that, by the time I finished the book, I sort of exhausted my curiosity with internalism and I’m moving on to some place a little bit more contextual and plot driven.

The book was challenging because of the essence of it being a woman in an apartment. You know, it’s like writing about someone being in jail, which is my first book. And you don’t get out of it unless you’re thinking about the past. So, that paradigm for looking at narratives — I feel a little exhausted by it at this point.

But I know I needed to write that book to give myself a chance to look at the things that are difficult about being a person alone. I mean it’s really about the isolation, you know?

And how to deal with that, or whether to deal with it at all. I mean, you know now there’s that term “leaning in,” and I feel like she leans out, which I don’t judge her for, but I also know that that doesn’t work. It’s just real life.

And maybe when I was writing it, I kind of hoped that it would. I think I hoped that the answers are always within me. If I have a problem, the answer is within me. And I think when I reached the end of the book, it was like there are no answers. You can only kind of just try to damage your brain in a way that makes you stupid. Which is I think the release that she finds at the end.

by Hope Reese, Longreads | Read more:
Image: Penguin Press
[ed. See also: 'Rest And Relaxation' Is As Sharp As Its Heroine Is Bleary (NPR)]

Thursday, July 12, 2018

Pat Metheny, Richard Bona, Antonio Sanchez

Sisters in Arms

Is the #MeToo “moment” the beginning of a new feminism? Coined by the civil rights activist Tarana Burke in 2006, the term took off in 2017 when celebrities like the actress Alyssa Milano began using it as a Twitter hashtag. Extensive reporting in The New York Times and The New Yorker on harassment in the entertainment and tech industries helped the movement bring down some of those fields’ most powerful figures. By speaking out, a number of famous actresses—some of them better known previously for their not-so-feminist roles as cute witches and beguiling prostitutes—have done so as well. To date, most of #MeToo’s attention has been aimed at the rich and influential: for instance, abusive talk show hosts and other notorious media figures.

#MeToo has too often ignored the most frequent victims of abuse, however, such as waitresses or hotel housekeepers. These are among the invisible people who keep society going—cleaning homes, harvesting our vegetables, and serving salads made of these vegetables. Who among those of us who depend on their labor knows their struggles or even their names? It can seem like an uphill battle to bring attention to the working-class victims of harassment, even though these women are often abused in starker and more brutal fashion than their counterparts in Hollywood.

Bernice Yeung, an investigative journalist for the reporting nonprofit organization Reveal, has helped correct this imbalance. Yeung is no tourist in the lives of the working poor women she covers. She has been writing about the plight of farmworkers and maids harassed and raped by their overseers for more than five years, in places far from executive offices—fields, basements, and break rooms. Her new book, In a Day’s Work, is a bleak but much-needed addition to the literature on sexual harassment in the US.

Consider, for example, a female farmworker whose supervisor violated her in a shed:

He held gardening shears to her throat. He pulled her hair or slapped her while he raped her because he said she wasn’t putting enough effort into it. Then he coerced her into silence by threatening to kill her children in Mexico or by reminding her of the power he had to fire her sister and brother, who also worked at the same farm.

Then there is Georgina Hernández, a cleaning woman in Orange County, California, who rejected her supervisor’s advances when she took a new position cleaning the lobby and exterior of the hotel where they worked. This job paid better than her previous position “scrubbing the oily kitchens and lifting the heavy rubber floor mats.” She ended up, however, having to hide in the women’s bathroom, where her boss followed her, murmuring disturbing little nothings like “you’re delicious.” Finally, he raped and impregnated her.

This sexual and social violence is happening all around us. A domestic worker in Yeung’s book who was assaulted at the rental house she was required to clean recounts that her rapist “cornered me and pinned me against the wall and…tried to pull my pants down again.” Yeung writes that “she did not report these attacks because,” in her words, “I was afraid that I would be fired.” Another janitorial worker, Erika Morales, was put under the supervision of a convicted sex offender who went to her worksite “to watch her as she vacuumed or scrubbed bathrooms. In addition to staring at her and making sexual comments, she says the supervisor sneaked up behind her and grabbed and groped her.”

Like most of the women Yeung interviewed, Morales felt disgust and a sense of shame that then permeated her life. Her pain was compounded for another reason: she was a single mother with two children who couldn’t afford to lose a paycheck. “In that moment, I was going through a situation where I couldn’t stop working,” she tells Yeung. “The father of my children wasn’t there. I was alone with the kids.” It was also a matter of time: it would “take weeks of filling out applications before she could land a new job and [she] didn’t know how she would feed her children in the meantime.” She made an appeal to her assailant himself, pleading that her children needed to eat. He laughed in her face. (...)

In a Day’s Work suggests how the struggles of working-class women align with those of their sisters in the creative class. It offers an opportunity rarely found in our class-polarized society: to bring together women across economic levels around a single issue. What might make this opportunity hard to seize is that the outcome of sexual harassment is significantly different for women of varying social classes and occupations. Actresses like Mira Sorvino can be “heartsick” after learning they may have lost major film roles due to Harvey Weinstein’s machinations against women who refused his sexual demands (he allegedly blacklisted her to the director Peter Jackson and others). But the very idea of having a career that can be derailed might seem foreign to women simply working to get by or to stay in this country. (...)

The history of women’s fight against sexual harassment is as full of disunion and fragmentation as it is of solidarity. Class differences among women have troubled the movement since the 1975 case around which the phrase “sexual harassment” was coined. Carmita Wood, a forty-four-year-old administrative assistant at Cornell University, left her job after her boss, Boyce McDaniel, a famed nuclear physicist, thrust his hand up her shirt, shoved her against her desk, and lunged at her for unwanted kisses. Wood, a mother of four, afterward found working with her assailant unbearable and filed a claim for unemployment benefits. Cornell rejected the claim, and Wood sought help from upper-middle-class female activists at the university’s Human Affairs Office. Together they created Working Women United, which held events where everyone from filmmakers to waitresses shared their horrific stories.

Wood ultimately lost her appeal. Working Women United, for its part, fractured after just a year due to tensions between its working-class and its middle-class members. Carmita Wood was also quickly excluded from news coverage of the group: it seemed that she was less interesting to reporters than its middle-class organizers were. As those women shored up their movement, Wood paid the greater price for her bravery, becoming, as her grandson called her in an interview, “a black sheep” in the local community, struggling for her unemployment benefits. She finally left town and resettled on the West Coast.

The gap between bourgeois and working-class feminists has troubled other alliances as well. Working-class women and trade unions rejected the Equal Rights Amendment (ERA) throughout much of the twentieth century out of concern that it would remove the few on-the-job protections women already had, like limitations on the amount of weight they were required to lift. Even in the 1970s, after the AFL–CIO endorsed the ERA, many working-class activists continued to consider it purely a middle-class issue. Then there is Roe v. Wade. The plaintiff, “Jane Roe” (Norma McCorvey), a working-class woman, felt alienated by the upper-middle-class feminists who had pushed her case to the Supreme Court. McCorvey eventually switched sides to join the pro-life movement.

A particularly stark case in which well-off women mistook their interests for the common cause was a public relations campaign sponsored in 2003 by the National Council of Women’s Organizations (NCWO). The campaign was meant to shame the Masters Golf Tournament, held at a club called Augusta National, which excluded women as members. The assumption behind the campaign was that if famous female golfers were admitted into Augusta, all women would somehow benefit—never mind that the only form of golf most working-class people can afford is of the miniature variety. Many have since doubted the idea that victories like the one the NCWO pursued around Augusta might trickle down to the non-golfing female majority. “Trickle-down economics wasn’t the best experience for people like me,” Tressie McMillan Cottom, a black feminist and sociologist, has written. “You will have to forgive me, then, if I have similar doubts about trickle-down feminism.”

The recommendations in best-selling feminist business books like Sheryl Sandberg’s Lean In likewise are most effective—if they are effective at all—for women who can advocate for their interests without risk of retaliation. “We hold ourselves back in ways both big and small,” Sandberg wrote, “by lacking self-confidence, by not raising our hands, and by pulling back when we should be leaning in.” For ordinary working women, leaning in would be immediate grounds for termination. (...)

But what Yeung’s book suggests primarily is that feminists don’t need a policy program first. Rather, we need unity, or, as we used to say, sisterhood. Whatever #MeToo becomes, it mustn’t simply resemble this winter’s Golden Globes awards, where actresses paraded working-class female activists with them across the red carpet. Nor should it resemble boutique women’s-only workspaces, like Manhattan’s the Wing, where membership can cost up to $3,000 a year. Affluent women can use their privilege to help strengthen the movement among working-class women like the ones who appear in In a Day’s Work, but only if they manage to put their resources to good use.

by Alissa Quart and Barbara Ehrenreich, NYRB |  Read more:
Image: Scott Olson/Getty Images

I’ll Be Watching You

China's Xu Bing constructs the first film made solely from surveillance footage

This week, it finally dawned on the U.S. media that the China of now is the West of tomorrow, with a small but seedy collection of reports that the waking moments of millions of Chinese are monitored, policed, scanned and logged using artificial intelligence plugged into 200 million cameras and countless lines of code. The New York Times called China’s facial recognition system an effort at “algorithmic governance,” while Vanity Fair linked the upswing in surveillance to an industrial boom for the companies selling the shady technology. It’s America’s future, too. But the vision of an eternally monitored techstate—and the sheer weirdness of it all—has already been shown to us by a conceptual artist working deep in the gritty interior of China.

A woman falls into a river in an alienated, urban space. Did she fall or was she pushed? Dragonfly Eyes, a new film by Chinese artist Xu Bing, is the first film assembled almost entirely from surveillance footage, an anxious, experimental, spooky composite of artist’s film, murder mystery, and found footage. It is a story of anonymous individuals suffering from terrible isolation—a fictional narrative born of real people’s digital detritus—and the woman in the river, Qing Ting, is one of them.

I first saw the trailer for Dragonfly Eyes eighteen months ago, and in its pixelated blur of car crashes and fights and disasters both large-scale and intimate, it was so shocking and real—so far away from the tired conventions of much current independent filmmaking—that I thought of it often before finally seeing it this year at a Chinese art gallery in Sydney. The film began circulating in festivals and cinephile societies in 2017. Much of the footage is harvested from 2015 onwards, when a raft of Chinese sites emerged allowing consumers to upload their own security and home webcam streams: the world has been streamed, and we’ve done it ourselves. In China the all-seeing state eye is even more vast, with 200 million CCTV cameras, a number predicted to grow threefold to 600 million by 2020. Such is the supremacy of this ever-present government architecture and accompanying AI-based facial recognition technology that it took only seven minutes to find BBC reporter John Sudworth in a human interest/government propaganda story late last year.

Footage of this kind is silent, and so Dragonfly Eyes superimposes recorded narration, foley (the addition of sound to footage after it’s been shot), and dialogue; Qing Ting is voiced by Liu Yongfang, and Ke Fan, her wandering lover, voiced by Su Shangqing. Timestamps appear on most of the shots, and each source and location is credited at the film’s end, so despite the narrative abstraction and exposed artifice of the film’s construction, Xu’s creation feels creepily alive—because it is.

As the footage traces backwards, we learn of a Hitchcockian tale of switched identities, a love affair, and its abandonment by Qing Ting as she tries to climb the deeply stratified ranks of Chinese society. The character of Qing Ting is resurrected through shots of many Chinese women, but our desire to make meaning of the deluge makes us see her as one cogent character. It’s a stunning trick of narrative. (...)

Dragonfly Eyes is an exercise in montage and making meaning through association, abandoning a direct relationship to story and character. That’s not new—that’s often what experimental cinema is all about—but Dragonfly Eyes is compiled out of cinematography that is free of the human hand and eye. Cinema, once celluloid, now digital, is collected from archives that have slipped into the public realm, with characters crafted from the fuzzy faces of thousands of busy citizens crossing the screen. The cinematographer’s eye is that of many ubiquitous webcams, leaked CCTV streams and surveillance devices, time-stamped and pixelated, blinking millions of times a second, across the expanse of the People’s Republic of China. Other material from vlogs, livestreams and dashcams is captured by personal, consumer-bought webcams and streamed by their owners.

But the conversion and perversion of spy material for artistic purposes resonates beyond the free-market authoritarianism specific to China. This is where cinema is headed, as types of devices and images multiply. And this is our relationship to technology: we all continue to do things we’re uncomfortable with; we all use Facebook when we know our information is shared with advertisers and election manipulators; we all look the other way when devices like Alipay (a Chinese cellphone payment and social ranking platform that uses technology as a form of social control) are introduced to tourist hubs in Australia and elsewhere. We participate in workplace Fitbit schemes that allow our employers to trace our movements and suggest personal health insurance programs.

In Xu’s hyper-real vision—which is, to my mind, the finest, most authentic science-fiction film in recent years—public data becomes fodder for a new story, and the traces left by the many re-form as a handful of ghostly, shapeshifting protagonists floating through an unstable consumer economy. In the face-lifted figure of Qing Ting, surveillance and surgery create a single body, a single face, a single citizen: complete oneness. The flowing river reflects one moon. And eventually, the scope widens to a society-wide conspiracy: a montage of catastrophe—planes careen downwards, cars swerve off-road, trains derail, construction sites collapse—piles up from the big dystopian now of self-surveilled China.

Xu’s manipulation of civilians’ faces and bodies could appear as a form of government collusion. But the critique is clear: authoritarianism of the past—in the real world and in science-fiction—relied on sneaking, reporting, informing. Today, we have given up our privacy without a fight; the critique applies across East and West. “Her privacy is all used up,” drones the computer meta-narrator. “He and she leave data.”

by Lauren Carroll Harris, The Baffler | Read more:
Image: A still from Dragonfly Eyes by Xu Bing
Video: YouTube
[ed. I'll have to look for this... just watch the first 60 seconds of the trailer.]

Wednesday, July 11, 2018

Where Millennials Come From

And why we insist on blaming them for it.

Imagine, as I often do, that our world were to end tomorrow, and that alien researchers many years in the future were tasked with reconstructing the demise of civilization from the news. If they persevered past the coverage of our President, they would soon identify the curious figure of the millennial as a suspect. A composite image would emerge, of a twitchy and phone-addicted pest who eats away at beloved American institutions the way boll weevils feed on crops. Millennials, according to recent headlines, are killing hotels, department stores, chain restaurants, the car industry, the diamond industry, the napkin industry, homeownership, marriage, doorbells, motorcycles, fabric softener, hotel-loyalty programs, casinos, Goldman Sachs, serendipity, and the McDonald’s McWrap.

The idea that millennials are capriciously wrecking the landscape of American consumption grants quite a bit of power to a group that is still on the younger side. Born in the nineteen-eighties and nineties, millennials are now in their twenties and thirties. But the popular image of this generation—given its name, in 1987, by William Strauss and Neil Howe—has long been connected with the notion of disruptive self-interest. Over the past decade, that connection has been codified by Jean Twenge, a psychology professor at San Diego State University, who writes about those younger than herself with an air of pragmatic evenhandedness and an undercurrent of moral alarm. (An article adapted from her most recent book, “iGen,” about the cohort after millennials, was published in the September issue of The Atlantic with the headline “Have Smartphones Destroyed a Generation?” It went viral.) In 2006, Twenge published “Generation Me: Why Today’s Young Americans Are More Confident, Assertive, Entitled—and More Miserable Than Ever Before.” The book’s cover emblazoned the title across a bare midriff, a flamboyant illustration of millennial self-importance, sandwiched between a navel piercing and a pair of low-rise jeans.

According to Twenge, millennials are “tolerant, confident, open-minded, and ambitious, but also disengaged, narcissistic, distrustful, and anxious.” She presents a barrage of statistics in support of this assessment, along with anecdotal testimonials and pop-cultural examples that neatly confirm the trends she identifies. (A revised edition, published in 2014, mentions the HBO show “Girls” six times.) Twenge acknowledges that the generation has come of age inside an “economic squeeze created by underemployment and rising costs,” but she mostly explains millennial traits in terms of culture and choice. Parents overemphasized self-esteem and happiness, while kids took their cues from an era of diversity initiatives, decentralized authority, online avatars, and reality TV. As a result, millennials have become irresponsible and fundamentally maladjusted. They “believe that every job will be fulfilling and then can’t even find a boring one.” They must lower their expectations and dim their glittering self-images in order to become functional adults.

This argument has a conservative appeal, given its focus on the individual rather than on the structures and the conditions that govern one’s life. Twenge wonders, “Is the upswing in minority kids’ self-esteem an unmitigated good?” and then observes, “Raising children’s self-esteem is not going to solve the problems of poverty and crime.” It’s possible to reach such moralizing conclusions even if one begins with the opposite economic premise. In “The Vanishing American Adult,” published in May, Senator Ben Sasse, Republican of Nebraska, insists that we live in a time of generalized “affluenza,” in which “much of our stress now flows not from deprivation but, oddly, from surplus.” Millennials have “far too few problems,” he argues. Sasse chastises parents for allowing their kids to succumb to the character-eroding temptations of contemporary abundance and offers suggestions for turning the school-age generation into the sort of hardworking, financially independent grownups that the millennials have yet to become.

The image of millennials has darkened since Strauss and Howe walked the beat: in their 2000 book, “Millennials Rising,” they claimed that the members of this surging generation were uniquely earnest, industrious, and positive. But the decline in that reputation is hardly surprising. Since the nineteen-sixties, most generational analysis has revolved around the groundbreaking idea that young people are selfish. Twenge’s term for millennials merely flips an older one, the “me generation,” inspired by a 1976 New York cover story by Tom Wolfe about the baby boomers. (The voluble Wolfe, born in 1930, is a member of the silent generation.) Wolfe argued that three decades of postwar economic growth had produced a mania for “remaking, remodeling, elevating, and polishing one’s very self . . . and observing, studying, and doting on it.” The fear of growing selfishness has, in the forty years since, only increased.

That fear is grounded in concrete changes: the story of American self-interest is a continuous one that nonetheless contains major institutional and economic shifts. Adapting to those shifts does tend to produce certain effects. I was born smack in the middle of the standard millennial range, and Twenge’s description of my generation’s personality strikes me as broadly accurate. Lately, millennial dreams tend less toward global fame and more toward affordable health insurance, but she is correct that my cohort has grown up under the influence of novel and powerful incentives to focus on the self. If for the baby boomers self-actualization was a conscious project, and if for Gen X—born in the sixties and seventies—it was a mandate to be undermined, then for millennials it’s more like an atmospheric condition: inescapable, ordinary, and, perhaps, increasingly toxic. A generation has inherited a world without being able to live in it. How did that happen? And why do so many people insist on blaming them for it?

“Kids These Days: Human Capital and the Making of Millennials,” by Malcolm Harris (Little, Brown), is the first major accounting of the millennial generation written by someone who belongs to it. Harris is twenty-eight—the book’s cover announces his birth year next to a sardonic illustration of elementary-school stickers—and he has already rounded the bases of young, literary, leftist media: he is a writer and editor for the online magazine the New Inquiry; he has written for Jacobin and n+1. He got his first taste of notoriety during Occupy Wall Street: shortly after activists settled in at Zuccotti Park, he wrote a blog post for Jacobin in which he claimed to have “heard unconfirmed reports that Radiohead is planning a concert at the occupation this week.” He set up an e-mail account using the name of the band’s manager and wrote to Occupy organizers, conveying the band’s interest in performing. Later, in a piece for Gawker titled “I’m the Jerk Who Pranked Occupy Wall Street,” he explained that his goal was to get more people to the protest, and expressed disdain for the way the organizers responded. (Fooled by his e-mail, they held a press conference and confirmed the band’s plan to appear.)

Harris’s anatomizing of his peers begins with the star stickers that, along with grade-school participation trophies, so fascinate Sasse, Twenge, and other writers of generational trend pieces. “You suck, you still get a trophy” is how Twenge puts it, describing contemporary K through five as an endless awards ceremony. Harris, on the other hand, regards elementary school as a capitalist boot camp, in which children perform unpaid labor, learn the importance of year-over-year growth through standardized testing, and get accustomed to constant, quantified, increasingly efficient work. The two descriptions are not as far apart as one might think: assuring kids that they’re super special—and telling them, as Sasse does, that they have a duty to improve themselves through constant enrichment—is a good way to get them to cleave to a culture of around-the-clock labor. And conditioning them to seek rewards in the form of positive feedback—stars and trophies, hearts and likes—is a great way to get them used to performing that labor for free.

My memories of childhood—in a suburban neighborhood in west Houston that felt newly hatched, as open as farmland—are different, breezy and hot and sunlit. I attended, mostly on scholarship, a Southern Baptist school attached to one of the largest megachurches in America, and elementary school seemed like the natural price of admission for friends, birthday parties, and long summers full of shrieking, unsupervised play. (The very young aren’t much for picking up on indoctrination techniques; the religious agitprop felt natural enough, too.) But some kind of training did kick in around the time I entered high school, when I began spending fourteen-hour days on campus with the understanding that I needed to earn a scholarship to a good college. College, of course, is where the millennial lounges around on lush green quads, spends someone else’s money, insists on “safe spaces,” protests her school’s heteronormative core curriculum, and wages war on her professors if she receives a grade below an A. I did the first two of those things, thanks to the Jefferson Scholars Foundation at the University of Virginia. I also took six classes a semester, worked part time, and crammed my schedule with clubs and committees—in between naps on the quad and beers with friends on my porch couch and long meditative sessions figuring out what kind of a person I was going to be.

Most undergraduates don’t have such a luxurious and debt-free experience. The majority of American college students never live on campus; around a third go to community college. The type of millennial that much of the media flocks to—white, rich, thoughtlessly entitled—is largely unrepresentative of what is, in fact, a diverse and often downwardly mobile group. (Millennials are the first generation to have just a fifty-fifty chance of being financially better off than their parents.) Many millennials grew up poor, went to crummy schools, and have been shuttled toward for-profit colleges and minimum-wage jobs, if not the prison system. (For-profit colleges, which disproportionately serve low-income students, account for roughly a tenth of undergraduates, and more than a third of student-loan defaults.) Average student debt has doubled just within this generation, surging from around eighteen thousand dollars at graduation for the class of 2003 to thirty-seven thousand for the class of 2016. (Under the tax plan recently passed by House Republicans, the situation worsens for student borrowers and their families: that bill eliminates the deduction on student-loan interest and voids the income-tax exemption for tuition benefits.)

A young college graduate, having faithfully followed the American path of hard work and achievement, might now find herself in a position akin to a homeowner with negative equity: in possession of an asset that is worth much less than what she owes. In these conditions, the concept of self-interest starts to splinter. For young people, I suspect, the idea of specialness looks like a reward but mostly functions as punishment, bestowing on us the idea that there is no good way of existing other than constantly generating returns. (...)

When Twenge first published “Generation Me,” social media had not yet become ubiquitous. Facebook was limited to colleges and high schools, Twitter hadn’t formally launched, and Instagram didn’t exist. But the millennial narrative was already taking its mature shape, and social media fit into it seamlessly: the narcissism of status updates, the shallow skimming of shiny surfaces, the inability to sit still. One might therefore conclude that the story of generational self-centeredness is so flexible as to have no real definition—it can cover anything, with a little stretching. But there is another possibility: that social media feeds on the same conditions that have made millennials what they are.

“Over the last decade, anxiety has overtaken depression as the most common reason college students seek counseling services,” the Times Magazine noted in October. Anxiety, Harris argues, isn’t just an unfortunate by-product of an era when wages are low and job security is scarce. It’s useful: a constant state of adrenalized agitation can make it hard to stop working and encourage you to think of other aspects of your life—health, leisure, online interaction—as work. Social media provides both an immediate release for that anxiety and a replenishment of it, so that users keep coming back. Many jobs of the sort that allow millennials to make sudden leaps into financial safety—in tech, sports, music, film, “influencing,” and, occasionally, journalism—are identity-based and mercurial, with the biggest payoffs and opportunities going to those who have developed an online following. What’s more, cultivating a “personal brand” has become a matter of prudence as well as ambition: there is a powerful incentive to be publicly likable at a time when strangers routinely rate and review one another over minor transactions—cat-sitting, assembling Ikea furniture, sharing a car ride or a spare bedroom—and people are forced to crowdsource money for their medical bills.

Young people have curled around their economic situation “like vines on a trellis,” as Harris puts it. And, when humans learn to think of themselves as assets competing in an unpredictable and punishing market, then millennials—in all their anxious, twitchy, phone-addicted glory—are exactly what you should expect. The disdain that so many people feel for Harris’s and my generation reflects an unease about the forces of deregulation, globalization, and technological acceleration that are transforming everyone’s lives. (It does not seem coincidental that young people would be criticized for being entitled at a time when people are being stripped of their entitlements.) Millennials, in other words, have adjusted too well to the world they grew up in; their perfect synchronization with economic and cultural disruption has been mistaken for the source of the disruption itself.

by Jia Tolentino, New Yorker |  Read more:
Image: Adrian Tomine

A Landmark Legal Shift Opens Pandora's Box for DIY Guns

Five years ago, 25-year-old radical libertarian Cody Wilson stood on a remote central Texas gun range and pulled the trigger on the world’s first fully 3-D-printed gun. When, to his relief, his plastic invention fired a .380-caliber bullet into a berm of dirt without jamming or exploding in his hands, he drove back to Austin and uploaded the blueprints for the pistol to his website, Defcad.com.

He'd launched the site months earlier along with an anarchist video manifesto, declaring that gun control would never be the same in an era when anyone can download and print their own firearm with a few clicks. In the days after that first test-firing, his gun was downloaded more than 100,000 times. Wilson made the decision to go all in on the project, dropping out of law school at the University of Texas, as if to confirm his belief that technology supersedes law.

The law caught up. Less than a week later, Wilson received a letter from the US State Department demanding that he take down his printable-gun blueprints or face prosecution for violating federal export controls. Under an obscure set of US regulations known as the International Traffic in Arms Regulations (ITAR), Wilson was accused of exporting weapons without a license, just as if he'd shipped his plastic gun to Mexico rather than put a digital version of it on the internet. He took Defcad.com offline, but his lawyer warned him that he still potentially faced millions of dollars in fines and years in prison simply for having made the file available to overseas downloaders for a few days. "I thought my life was over," Wilson says.

Instead, Wilson has spent the years since on an unlikely project for an anarchist: not simply defying or skirting the law but taking it to court and changing it. In doing so, he has now not only defeated a legal threat to his own highly controversial gunsmithing project. He may have also unlocked a new era of digital DIY gunmaking that further undermines gun control across the United States and the world—another step toward Wilson's imagined future where anyone can make a deadly weapon at home with no government oversight.

Two months ago, the Department of Justice quietly offered Wilson a settlement to end a lawsuit he and a group of co-plaintiffs have pursued since 2015 against the United States government. Wilson and his team of lawyers focused their legal argument on a free speech claim: They pointed out that by forbidding Wilson from posting his 3-D-printable data, the State Department was not only violating his right to bear arms but his right to freely share information. By blurring the line between a gun and a digital file, Wilson had also successfully blurred the lines between the Second Amendment and the First.

"If code is speech, the constitutional contradictions are evident," Wilson explained to WIRED when he first launched the lawsuit in 2015. "So what if this code is a gun?”

The Department of Justice's surprising settlement, confirmed in court documents earlier this month, essentially surrenders to that argument. It promises to change the export control rules surrounding any firearm below .50 caliber—with a few exceptions like fully automatic weapons and rare gun designs that use caseless ammunition—and move their regulation to the Commerce Department, which won't try to police technical data about the guns posted on the public internet. In the meantime, it gives Wilson a unique license to publish data about those weapons anywhere he chooses.

"I consider it a truly grand thing," Wilson says. "It will be an irrevocable part of political life that guns are downloadable, and we helped to do that."

Now Wilson is making up for lost time. Later this month, he and the nonprofit he founded, Defense Distributed, are relaunching their website Defcad.com as a repository of firearm blueprints they've been privately creating and collecting, from the original one-shot 3-D-printable pistol he fired in 2013 to AR-15 frames and more exotic DIY semi-automatic weapons. The relaunched site will be open to user contributions, too; Wilson hopes it will soon serve as a searchable, user-generated database of practically any firearm imaginable.

All of that will be available to anyone anywhere in the world with an uncensored internet connection, to download, alter, remix, and fabricate into lethal weapons with tools like 3-D printers and computer-controlled milling machines. “We’re doing the encyclopedic work of collecting this data and putting it into the commons,” Wilson says. “What’s about to happen is a Cambrian explosion of the digital content related to firearms.” He intends that database, and the inexorable evolution of homemade weapons it helps make possible, to serve as a kind of bulwark against all future gun control, demonstrating its futility by making access to weapons as ubiquitous as the internet.

Of course, that mission seemed more relevant when Wilson first began dreaming it up, before a political party with no will to rein in America’s gun death epidemic held control of Congress, the White House, and likely soon the Supreme Court. But Wilson still sees Defcad as an answer to the resurgent gun control movement that has emerged in the wake of the Parkland, Florida, high school shooting that left 17 students dead in February.

The potential for his new site, if it functions as Wilson hopes, would also go well beyond even the average Trump supporter’s taste in gun rights. The culture of homemade, unregulated guns it fosters could make firearms available to even those people who practically every American agrees shouldn’t possess them: felons, minors, and the mentally ill. The result could be more cases like that of John Zawahri, an emotionally disturbed 23-year-old who went on a shooting spree in Santa Monica, California, with a homemade AR-15 in 2013, killing five people, or Kevin Neal, a Northern California man who killed five people with AR-15-style rifles—some of which were homemade—last November.

"This should alarm everyone," says Po Murray, chairwoman of Newtown Action Alliance, a Connecticut-focused gun control group created in the wake of the mass shooting at Sandy Hook Elementary School in 2013. "We’re passing laws in Connecticut and other states to make sure these weapons of war aren’t getting into the hands of dangerous people. They’re working in the opposite direction."

When reporters and critics have repeatedly pointed out those potential consequences of Wilson's work over the last five years, he has argued that he’s not seeking to arm criminals or the insane or to cause the deaths of innocents. But nor is he moved enough by those possibilities to give up what he hopes could be, in a new era of digital fabrication, the winning move in the battle over access to guns.

With his new legal victory and the Pandora's box of DIY weapons it opens, Wilson says he's finally fulfilling that mission. “All this Parkland stuff, the students, all these dreams of ‘common sense gun reforms'? No. The internet will serve guns, the gun is downloadable,” Wilson says now. “No amount of petitions or die-ins or anything else can change that.”

Defense Distributed operates out of an unadorned building in a north Austin industrial park, behind two black-mirrored doors marked only with the circled letters "DD" scrawled by someone's finger in the dust. In the machine shop inside, amid piles of aluminum shavings, a linebacker-sized, friendly engineer named Jeff Winkleman is walking me through the painstaking process of turning a gun into a collection of numbers.

Winkleman has placed the lower receiver of an AR-15, the component that serves as the core frame of the rifle, on a granite table that's been calibrated to be perfectly flat to one ten-thousandth of an inch. Then he places a Mitutoyo height gauge—a thin metal probe that slides up and down on a tall metal stand and measures vertical distances—next to it, poking one edge of the frame with its probe to get a baseline reading of its position. "This is where we get down to the nitty gritty," Winkleman says. "Or, as we call it, the gnat's ass."

Winkleman then slowly rotates the gauge's rotary handle to move its probe down to the edge of a tiny hole on the side of the gun's frame. After a couple of careful taps, the tool's display reads 0.4775 inches. He has just measured a single line—one of the countless dimensions that define the shape of any of the dozens of components of an AR-15—with four decimal places of accuracy. Winkleman's job at Defense Distributed now is to repeat that process again and again, integrating that number, along with every measurement of every nook, cranny, surface, hole, lip, and ridge of a rifle, into a CAD model he's assembling on a computer behind him, and then to repeat that obsessively comprehensive model-building for as many guns as possible.
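The arithmetic behind each of those numbers is simple differencing against a datum; everything else is patience. Here's a minimal sketch of that bookkeeping (an illustration only—the article doesn't describe Defense Distributed's actual software, and the baseline readings below are hypothetical):

```python
# Hypothetical sketch: how a height-gauge reading becomes a CAD dimension.
# Each feature dimension is the difference between a probe reading and a
# baseline (datum) reading. The readings here are invented for illustration.

def feature_dimension(datum_reading: float, feature_reading: float) -> float:
    """Distance in inches from the part datum to a feature edge,
    rounded to the gauge's 0.0001-inch resolution."""
    return round(abs(feature_reading - datum_reading), 4)

# e.g., datum edge probed at 1.2510", hole edge probed at 1.7285"
print(feature_dimension(1.2510, 1.7285))  # 0.4775 -- the reading quoted above
```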

That a digital fabrication company has opted for this absurdly manual process might seem counterintuitive. But Winkleman insists that the analog measurements, while infinitely slower than modern tools like laser scanners, produce a far more accurate model—a kind of gold master for any future replications or alterations of that weapon. "We're trying to set a precedent here," Winkleman says. "When we say something is true, you absolutely know it's true."

One room over, Wilson shows me the most impressive new toy in the group's digitization toolkit, one that arrived just three days earlier: a room-sized analog artifact known as an optical comparator. The device, which he bought used for $32,000, resembles a kind of massive cartoon X-ray scanner.

Wilson places the body of an AR-9 rifle on a pedestal on the right side of the machine. Two mercury lamps project neon green beams of light onto the frame from either side. A lens behind it bends that light within the machine and then projects it onto a 30-inch screen at up to 100X magnification. From that screen's mercury glow, the operator can map out points to calculate the gun's geometry with microscopic fidelity. Wilson flips through higher magnification lenses, then focuses on a series of tiny ridges of the frame until the remnants of their machining look like the brush strokes of Chinese calligraphy. "Zoom in, zoom in, enhance," Wilson jokes.
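To get a feel for those magnification numbers (our arithmetic, not the article's): at 100X, a 30-inch screen is showing only a sliver of the part, and hairline machining marks become plainly visible:

```python
# Scale intuition for an optical comparator (illustrative arithmetic only).
magnification = 100   # highest magnification mentioned above
screen_inches = 30    # projection screen width

field_of_view = screen_inches / magnification
print(f"Slice of part visible on screen: {field_of_view:.1f} in")              # 0.3 in
print(f"A 0.001 in machining mark projects to {0.001 * magnification:.1f} in")  # 0.1 in
```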

Turning physical guns into digital files, instead of vice-versa, is a new trick for Defense Distributed. While Wilson's organization first gained notoriety for its invention of the first 3-D printable gun, what it called the Liberator, it has since largely moved past 3-D printing. Most of the company's operations are now focused on its core business: making and selling a consumer-grade computer-controlled milling machine known as the Ghost Gunner, designed to allow its owner to carve gun parts out of far more durable aluminum. In the largest room of Defense Distributed's headquarters, half a dozen millennial staffers with beards and close-cropped hair—all resembling Cody Wilson, in other words—are busy building those mills in an assembly line, each machine capable of skirting all federal gun control to churn out untraceable metal Glocks and semiautomatic rifles en masse.

For now, those mills produce only a few different gun frames for firearms, including the AR-15 and 1911 handguns. But Defense Distributed’s engineers imagine a future where their milling machine and other digital fabrication tools—such as consumer-grade aluminum-sintering 3-D printers that can print objects in metal—can make practically any digital gun component materialize in someone's garage.

by Andy Greenberg, Wired |  Read more:
Images: Olman Hernandez and Michelle Groskopf

Tuesday, July 10, 2018


Suzanne Saroff

Walmart Nation: Mapping the Largest Employers in the U.S.


In an era where Amazon steals most of the headlines, it’s easy to forget about brick-and-mortar retailers like Walmart.

But, even though the market values the Bezos e-commerce juggernaut at about twice the value of Walmart, the blue big-box store remains formidable in other ways. For example, revenue and earnings are two areas where Walmart still reigns supreme, and the stock just hit all-time highs yesterday on an earnings beat.

That’s not all, though. As today’s map shows, Walmart is dominant in one other notable way: the company is the biggest private employer in America in a whopping 22 states.

Seriously, Juice Is Not Healthy

Obesity affects 40 percent of adults and 19 percent of children in the United States and accounts for more than $168 billion in health care spending each year. Sugary beverages are thought to be one of the major drivers of the obesity epidemic. These drinks (think soda and sports drinks) are the largest single source of added sugars for Americans and contribute, on average, 145 added calories a day to our diets. For these reasons, reducing sugary beverage consumption has been a significant focus of public health intervention. Most efforts have focused on sodas.

But not juice. Juice gets a pass, and it's not clear why.

Americans drink a lot of juice. The average adult drinks 6.6 gallons per year. More than half of preschool-age children (ages 2 to 5) drink juice regularly, a proportion that, unlike for sodas, has not budged in recent decades. These children consume on average 10 ounces per day, more than twice the amount recommended by the American Academy of Pediatrics.

Parents tend to associate juice with healthfulness, are unaware of its relationship to weight gain and are reluctant to restrict it in their child’s diet. After all, 100 percent fruit juice — sold in handy individual servings — has been marketed as a natural source of vitamins and calcium. Department of Agriculture guidelines state that up to half of fruit servings can be provided in the form of 100 percent juice and recommend drinking fortified orange juice for the vitamin D. Some brands of juice are even marketed to infants.

Government programs designed to provide healthy food for children, such as the Special Supplemental Nutrition Program for Women, Infants, and Children, offer juice for kids. Researchers have found that children in the program are more likely to exceed the recommended daily fruit juice limit than those who are similarly poor but not enrolled.

Despite all the marketing and government support, fruit juices contain limited nutrients and tons of sugar. In fact, one 12-ounce glass of orange juice contains 10 teaspoons of sugar, which is roughly what’s in a can of Coke. (...)
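That comparison checks out on the back of an envelope (the teaspoon count is from the article; the gram conversions are our assumptions): a teaspoon of sugar weighs about 4.2 grams, and a 12-ounce can of Coke lists 39 grams of sugar, so the juice actually edges out the soda:

```python
# Back-of-the-envelope check of the juice-vs-Coke sugar comparison.
# Assumptions (ours): 1 teaspoon of sugar ~= 4.2 g; a 12 oz can of Coke
# lists ~39 g of sugar. The 10-teaspoon figure comes from the article.
GRAMS_PER_TSP = 4.2

oj_grams = 10 * GRAMS_PER_TSP   # 12 oz glass of orange juice
coke_grams = 39                 # 12 oz can of Coke

print(f"Orange juice: ~{oj_grams:.0f} g sugar")          # ~42 g
print(f"Coke:          {coke_grams} g sugar")            # 39 g
print(f"Juice/Coke ratio: {oj_grams / coke_grams:.2f}")  # ~1.08
```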

It’s tempting to minimize the negative contributions of juice to our diets because it’s “natural” or because it contains “vitamins.” Studies that support this view exist, but many are biased and have been questioned.

And we doubt you’d take a multivitamin if it contained 10 teaspoons of sugar.

There is no evidence that juice improves health. It should be treated like other sugary beverages, which are fine to have periodically if you want them, but not because you need them. Parents should instead serve water and focus on trying to increase children’s intake of whole fruit. Juice should no longer be served regularly in day care centers and schools. Public health efforts should challenge government guidelines that equate fruit juice with whole fruit, because these guidelines most likely fuel the false perception that drinking fruit juice is good for health.

by Erika R. Cheng, Lauren G. Fiechtner and Aaron E. Carroll, NY Times | Read more:
Image: Nicolas Ortega

Monday, July 9, 2018

Amazon Is Already Undercutting Prices on Over-the-Counter Pills

As pharmacy chains await Amazon.com Inc.’s entry into the prescription-drug market, the online retail giant is already undercutting them for non-prescription medicine for aches, colds and allergies.

Median prices for over-the-counter, private-brand medicine sold by Walgreens Boots Alliance Inc. and CVS Health Corp. were about 20 percent higher than Basic Care, the over-the-counter drug line sold exclusively by Amazon, according to a report Friday by Jefferies Group analysts.

Last week, Amazon announced that it was buying PillPack, a pharmacy company that will give it an entry point into the U.S.’s $328.6 billion market for prescription drugs. Shares of CVS and Walgreens plunged on the news, as investors bet Amazon could lure pharmacy customers with lower prices, and give them one less reason to go to the corner drugstore.

Amazon began selling the Basic Care line in August with roughly 35 products and has since expanded its range to 65 drugs, according to the Jefferies analysts. The products include mild painkillers, cold and flu medication, sleeping aids and other medication commonly found in the pharmacy aisle.

Cheaper Drugs

At a midtown Manhattan Duane Reade, part of the Walgreens chain, a store-brand pack of 500 acetaminophen pills costs $18.99. Amazon is selling the same count and strength product for $7.40. Two allergy medications, cetirizine (also known as Zyrtec) and loratadine (sold under the Claritin brand), cost about three-quarters less on Amazon than the drugstore chains’ house brands did in the store.

According to the Jefferies report, 84 percent of Walgreens’ and 72 percent of CVS’s house-brand drugs were more expensive than the Basic Care line.
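To put concrete numbers on those framings (the prices are from the article; the percentage arithmetic is ours): the two acetaminophen prices quoted above imply a markup of roughly 157 percent at the chain, or equivalently a discount of roughly 61 percent at Amazon. The Jefferies "20 percent higher" figure is a median across the whole house-brand line, so individual items can diverge far more:

```python
# Checking the price gaps quoted above. Prices come from the article;
# the percentage framings are our own arithmetic.
walgreens, amazon = 18.99, 7.40   # 500-count store-brand acetaminophen

premium = (walgreens - amazon) / amazon * 100       # chain price relative to Amazon
discount = (walgreens - amazon) / walgreens * 100   # Amazon price relative to chain

print(f"Walgreens charges {premium:.0f}% more than Amazon")    # ~157% more
print(f"Amazon charges {discount:.0f}% less than Walgreens")   # ~61% less
```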

In-house brands are a way for retailers to sell over-the-counter products that can compete with manufacturers’ brand offerings, such as Tylenol or Advil. Amazon’s Basic Care brand is made by Perrigo Co., which also makes in-house brands for other retailers.

Many, but not all, of Amazon’s over-the-counter drugs are available through the retailer’s Prime service, which offers free shipping and fast delivery.

by Aziza Kasumov, Bloomberg | Read more:
Image: Getty

Teddybears

The Cognition Crisis

Our lives on this planet have improved in so many amazing ways over the last century. On average, we are now healthier, more affluent and literate, less violent and longer living. Despite these unprecedented positive changes, clear signs exist that we are in the midst of an emerging crisis — one that has not yet been recognized in its full breadth, even though it lurks just beneath the surface of our casual conversations and swims in the undercurrents of our news feeds. This is not the well-known crisis that we’ve induced upon the earth’s climate, but one that is just as threatening to our future. This is a crisis of our minds. A cognition crisis.

A cognition crisis is not defined by a lack of information, knowledge or skills. We have done a fine job in accumulating those and passing them along across millennia. Rather, this is a crisis at the core of what makes us human: the dynamic interplay between our brain and our environment — the ever-present cycle between how we perceive our surroundings, integrate this information, and act upon it.

This ancient perception-action cycle ensured our earliest survival by allowing our primordial predecessors to seek nutrients and avoid toxins. It is from these humble beginnings that the human brain evolved to pursue more diverse resources and elude more inventive threats. It is from here that human cognition emerged to support our success in an increasingly complex and competitive environment: attention, memory, perception, creativity, imagination, reasoning, decision making, emotion and aggression regulation, empathy, compassion, and wisdom. And it is here that our crisis exists.

Today, hundreds of millions of people around the world seek medical assistance for serious impairments in their cognition: major depressive disorder, anxiety, schizophrenia, autism, post-traumatic stress disorder, dyslexia, obsessive-compulsive disorder, bipolar disorder, attention deficit hyperactivity disorder (ADHD), addiction, dementia, and more. In the United States alone, depression affects 16.2 million adults, anxiety 18.7 million, and dementia 5.7 million — a number that is expected to nearly triple in the coming decades.

The immense personal, societal and economic impact of cognitive dysfunction warrants heightened consideration because the crisis is growing, not receding. Despite substantial investment in research and treatments by governments, foundations, and companies around the world, the prevalence and impact of these conditions are escalating. Between 2005 and 2015, the number of people worldwide with depression and anxiety increased by 18.4% and 14.9% respectively, while individuals with dementia exhibited a 93% increase over those same years.

To some degree, these trends reflect the overall growth and aging of the world’s population. This will only continue to increase in the future: the global population of seniors is predicted to swell to 1.5 billion by 2050. Although there are clear benefits to living longer, an unfortunate negative consequence is the burden it places on many aspects of cognition.

There are signs something else is going on, too. Over the last several decades, worrying tears have appeared in the cognitive fabric of our youth, notably in terms of emotional regulation and attentional deployment. American teens have experienced a 33% increase in depressive symptoms, with 31% more having died by suicide in 2015 than in 2010. ADHD diagnoses have also increased dramatically. While a growing awareness of these conditions — and with it, more frequent diagnoses — are likely factors, it does not seem this is the whole story; the magnitude of this escalation points to a deeper problem. (...)

Neuroscientists and leadership in the medical world now appreciate that much more unites seemingly disparate aspects of cognition than divides them. For example, attention deficits are now recognized to be a prominent feature of major depressive disorder, and are included in the most recent diagnostic criteria — the bible used by mental health experts — as a “diminished ability to concentrate.” The reality is that each of us has one mind, and embracing this will foster our ability to nurture it.

There is also, as I’ve said, a common, underlying aggravator that has exerted an impact across all domains of cognition: the dramatic plunge we’ve taken into the information age on the back of the digital revolution. Every way we interact with our environment, as well as with each other and ourselves, has been radically transformed by technology.

The old environment, where our cognition evolved, is long gone. The new environment, where multidimensional information flows like water (from a firehose!), challenges our brain and behavior at a fundamental level.

This has been shown in the laboratory, where scientists have documented the influence of information overload on attention, perception, memory, decision making, and emotional regulation. And it has also been shown in the real world, where we see strong associations between the use of technology and rising rates of depression, anxiety, suicide, and attention deficits, especially in children.

Although the exact mechanism is still under exploration, a complex story is emerging. We are seeing accelerating reward cycles associated with intolerance to delayed gratification and sustained attention; excessive information exposure connected with stress, depression, and anxiety (e.g., fear of missing out and being non-productive); and, of course, multitasking has been linked to safety issues (such as texting while driving) and a lack of focus (which impacts our relationships, our studies, and our work).

What’s more, our constant engagement with technology interferes with the pursuit of other behaviors critical for maintaining a healthy mind, such as nature exposure, physical movement, face-to-face contact, and restorative sleep. Its negative influence on empathy, compassion, cooperation, and social bonding is just beginning to be understood.

by Adam Gazzaley MD, PhD, Medium |  Read more:
Image: Maria Medem
[ed. It ain't just technology. Economic insecurity and inequality, corporate rapaciousness (in all its various forms), parasitic "healthcare" profiteering, environmental degradation, dysfunctional politics, militarized policing, constant bombardment by consumer marketing industries (see also: Speech Defects), pervasive surveillance, endless wars and more. If you don't have some form of cognitive impairment you're probably nuts.]

Could Seoul Be the Next Great Cyberpunk City?

Since the original Blade Runner takes place in an imagined late-2010s Los Angeles, I’d have gotten a kick out of seeing its sequel, which after prolonged speculation finally came out late last year, in the actual late-2010s Los Angeles. But having moved to Korea a few years ago, I settled for a screening here in Seoul. In some ways, this ultimately felt like the more appropriate city in which to see the movie: when Blade Runner 2049‘s first trailer came out, I wrote here about its apparent acknowledgement of the considerable Korean influence felt in Los Angeles since its predecessor’s release. While no small number of Koreans already lived there back in 1982, the makers of Blade Runner — like everyone else at the time — couldn’t see past the economic rise of Japan, whose cash-flooded conglomerates then seemed poised to buy up not just Hollywood’s studios but the downtown skyline as well.

When I did make it back to Los Angeles earlier this year, I saw sights that proved more memorable than even the spectacles of Blade Runner 2049. Coming in from the airport, for instance, I looked up to see the Korean Air logo looming 73 stories above downtown at the top of the Wilshire Grand Center, a building still under construction when last I saw it. Then, lowering my sights from that glowing orb so reminiscent of the South Korean flag, I spotted a tent village that had sprouted in the darkness of a freeway underpass. The first Blade Runner envisioned Los Angeles as having plunged into a kind of third-world condition, with its ruling class perched high above (if not on a different planet from) the teeming common element doing business in countless different languages down in the streets. Something tells me that the contrast in the real 2019 might look even starker than that.

But then contrast lies at the heart of the science-fiction tradition of cyberpunk, the most influential examples of which include Blade Runner as well as William Gibson’s Neuromancer, published in 1984 and now considered the archetypical cyberpunk novel. The common description of Gibson’s work of that period, “high tech meets low life,” also broadly characterizes cyberpunk itself, which, unlike so much sci-fi of earlier generations, understands that technological progress doesn’t come with moral progress. Nor does it come with the kind of widespread social or economic progress upon which many stories of the future once premised themselves. Nor does that high tech penetrate all areas equally: “The future is already here,” said Gibson in what has turned out to be one of his most-quoted lines. “It’s just not evenly distributed.”

A visitor to Seoul, even if they’ve come from the supposedly developed West, might feel as if they’ve entered one of those unevenly distributed chunks of the future. The obvious elements are all in place: the shiny skyscrapers and the colossal video screens on their sides; the punctual, usually unsoiled subway trains and the riders streaming high-definition video or playing startlingly advanced-looking games on their phones. But one doesn’t have to stay long to be impressed less by the technology itself than by how thoroughly it has integrated with the life of nearly all Seoulites. Here, in a place where even grandmothers stand in line for the latest smartphone, the fact that some middle-aged Americans have never bothered to get one at all looks like an example of the reverse luxury possible only in a terminally decadent culture.

Cyberpunk’s list of required conditions includes not just technology, but all-pervasive technology. Often, characters must wield their personal technology to evade the impersonal technology commanded by their corporate overlords. Does anyone make an attempt to evade the surveillance going on in seemingly every corner of Seoul, a saturation that surprises even visitors familiar with the all-seeing CCTV cameras of London? Enthusiasts of cyberpunk, attuned to its essentially dystopian nature, will also quickly take note of how Korea seems to have taken the genre’s convention of a few mega-corporations running the show as its economic model. True Korean corporate loyalists can buy just about everything — food, entertainment, healthcare, car, home, and much else besides — from the same conglomerate, or chaebol.

I actually saw Blade Runner 2049 at one of a chain of chaebol-owned movie theaters myself. At that same multiplex, conveniently located right across the street from my apartment building, I’d previously seen Oshii Mamoru’s acclaimed animated cyberpunk film Ghost in the Shell. The original Blade Runner had taken Los Angeles and Japanified it, bringing in as much the boisterous, freewheeling urban Japan of centuries past as the outwardly straight-laced technopolis of 1980s Tokyo. Ghost in the Shell cast as its near-future Japanese setting of “New Port City” an apparently little-altered Hong Kong; the tour-de-force montage in the middle of the movie constitutes a master class in not just how to make a place come realistically alive in animation, but to unify setting, theme, form, and substance at a stroke. (...)

As they and other similarly inclined foreign photographers know, cyberpunk does not live by skyscrapers and outdoor video screens alone. If it did, any of the new metropolises erected whole across China over the past few decades would offer a superior setting. While Seoul does have one foot in the future, from the Western perspective, the other foot kept firmly in the past makes it a potentially great cyberpunk city. Or to use a different metaphor, one I’ve come across from time to time in Korean books about Korean society, this country runs simultaneously on two “clocks,” one of them pushed so aggressively to the present that it runs perpetually fast, and another that has barely moved for decades or even centuries. Korea, like cyberpunk itself, everywhere mixes the futuristic with the things and ways of the past.

by Colin Marshall, LARB | Read more:
Image: uncredited

Sunday, July 8, 2018

Black Dub


[ed. Feat. Trixie Whitley. See also: Wild Country (with her dad Chris).]

Low Bar


Garry Trudeau, Doonesbury
via:

Pentagon Audit: “There Will Be Unpleasant Surprises”

For the first time in its history, the Department of Defense is now undergoing a financial audit.

The audit, announced last December, is itself a major undertaking that is expected to cost $367 million and to involve some 1,200 auditors. The results are to be reported in November 2018.

“Until this year, DoD was the only large federal agency not under full financial statement audit,” Pentagon chief financial officer David L. Norquist told the Senate Budget Committee in March. Considering the size of the Pentagon, the project is “likely to be the largest audit ever undertaken,” he said.

The purpose of such an audit is to validate the agency’s financial statements, to detect error or fraud, to facilitate oversight, and to identify problem areas. Expectations regarding the outcome are moderate.

“DOD is not generally expected to receive an unqualified opinion [i.e. an opinion that affirms the accuracy of DoD financial statements] on its first-ever, agency-wide audit in FY2018,” the Congressional Research Service said in a new report last week. See Defense Primer: Understanding the Process for Auditing the Department of Defense, CRS In Focus, June 26, 2018.

In fact, “It took the Department of Homeland Security, a relatively new and much smaller enterprise, about ten years to get to its first clean opinion,” Mr. Norquist noted at the March Senate hearing.

In the case of the DoD audit, “I anticipate the audit process will uncover many places where our controls or processes are broken. There will be unpleasant surprises. Some of these problems may also prove frustratingly difficult to fix.”

by Steven Aftergood, Federation of American Scientists | Read more:
Image: via
[ed. See also: How $21 Trillion in U.S. Tax Money Disappeared]