Sunday, April 22, 2018

Is the Internet Complete?

In 2013, a debate was held between friends Peter Thiel and Marc Andreessen, the thrust of which was to determine whether we are living through an innovation golden age, or whether innovation was in fact stalling. Thiel, of course, played the innovation sceptic, and it is interesting now, with five years' remove, to look back on the debate to see how history has vindicated his position. In short, all of those things that were ‘just around the corner’ in 2013 are, sure enough, still ‘just around the corner.’

One strand of Thiel’s argument at the time (and since) was that the ostentatious progress made in computing in the last 15 years has blinded us to the lack of technological progress made elsewhere. We can hardly have failed to notice the internet revolution, and thus we map that progress onto everything, assuming that innovation is a cosmic force rather than something which happens on a piecemeal basis.

Certainly, this argument has gained more traction since 2013. However, in this piece I’d like to add an extra layer to it. Is it possible that innovation is not only stalling in non-tech areas, but in tech itself? Could we make an argument to say that the internet itself is, in fact, complete?

The driving logic for this argument is easy to dismiss—namely that all of the big ‘possible’ ideas associated with the internet have been taken. One might say that companies like Google, Facebook, and Amazon were all inevitabilities from the moment computers around the world started to link up, and that once these roles were filled, innovation started to dry up as there was fundamentally ‘nothing left to do.’

The first counter to this is that it’s easy to say in hindsight. Sure, Amazon—or a company like it—seems like an inevitability now, but there was once a time when people were highly sceptical of the idea that anyone would want to conduct any type of financial transaction over the internet. The second counter, implied by the first, is to say that we can’t possibly know what might be coming over the horizon at any given time. The next Google might be just about to break, and if it were to do so then it would make a mockery of such defeatism.

Both of these arguments are fair and true. However they simply refute the idea that the internet is finished at this moment, rather than the more fundamental idea that it’s possible for the internet to be finished at all. It is this second idea—or at least the theoretical possibility of it—that I want to illustrate here.

Let’s compare the internet to another world-changing innovation—the car. The car started as a ‘base concept’; a motorised chassis to transport you from point A to point B. That was the car on ‘day one,’ and this underlying concept has remained true up to the present day. However, that does not mean that the idea was complete on ‘day one.’ Over time, the car was innovated upon and developed. We added passenger seats so you could take people with you. We added a roof, so it wasn’t only suitable for fair weather. We added air conditioning to keep us comfortable, and a radio to keep us entertained. And, of course, we dramatically improved its performance and reliability. All in all, it probably took about 60 years for the car to go from ‘base concept’ to ‘finished article,’ from which point all cars have remained, on the whole, the same. Sure, a car from 2018 is far more advanced than a car from 1965, but it isn’t fundamentally different. It’s just a more polished version of the same thing. The 1965 car is, however, quite a lot different from an 1895 car, because that was the period of true innovation that fleshed out the idea.

We can say, therefore, that the car—as a concept—is ‘finished.’ Now, that isn’t to say of course that there has been no innovation since 1965, and that there won’t be any innovation in the future. Far from it. But it is to say that this innovation has been, on the whole, mere improvement on a static idea. Cars are cars, TVs are TVs, washing machines are washing machines. Once the idea is complete, we merely fiddle at the edges.

In spite of this precedent, we don’t see the internet in the same way. We don’t see the internet as a ‘base concept’ (i.e. a vast directory of information), which is gradually being shaped and polished into a finished article, from which point it will just tick along. Why not? I would suggest it’s because of the business structure. With the car, you had competing businesses each turning out their own version of the idea. Ford versus Mercedes versus Nissan. However, with the internet, you don’t have different ‘competing internets,’ you just have one—and business’s role within it is to look after the component pieces.

It’s a bit like there had only ever been one car, and different brands had each brought a new addition to the table to create the final useful thing. Facebook came along and put in the seats, Google the driving interface, YouTube the radio, and so on until the car was finished.

Seeing the internet this way, we might speculate that we have come to the end of the initial shaping of the idea, and that from this point on we shall merely be optimising it. We have on our hands the internet equivalent of a ’58 Chevy—there’s a long way to go, but fundamentally it does what we want it to do.

by Alex Smith, Quillette |  Read more:
Image: uncredited
[ed. See also: The Comments section.]

Peer Pressure

As I was writing this review, two friends called to ask me about “that book that says parents don’t matter.” Well, that’s not what it says. What “The Nurture Assumption” does say about parents and children, however, warrants the lively controversy it began generating even before publication.

Judith Rich Harris was chucked out of graduate school at Harvard 38 years ago, on the grounds that she was unlikely to become a proper experimental psychologist. She never became an academic and instead turned her hand to writing textbooks in developmental psychology. From this bird’s-eye vantage point, she began to question widespread belief in the “nurture assumption -- the notion that parents are the most important part of a child’s environment and can determine, to a large extent, how the child turns out.” She believes that parents must share credit (or blame) with the child’s own temperament and, most of all, with the child’s peers. “The world that children share with their peers is what shapes their behavior and modifies the characteristics they were born with,” Harris writes, “and hence determines the sort of people they will be when they grow up.”

The public may be forgiven for saying, “Here we go again.” One year we’re told bonding is the key, the next that it’s birth order. Wait, what really matters is stimulation. The first five years of life are the most important; no, the first three years; no, it’s all over by the first year. Forget that: It’s all genetics! Cancel those baby massage sessions!

What makes Harris’s book important is that it puts all these theories into larger perspective, showing what each contributes and where it’s flawed. Some critics may pounce on her for not having a Ph.D. or an academic position, and others will quarrel with the importance she places on peers and genes, but they cannot fault her scholarship. Harris is not generalizing from a single study that can be attacked on statistical grounds, or even from a single field; she draws on research from behavior genetics (the study of genetic contributions to personality), social psychology, child development, ethology, evolution and culture. Lively anecdotes about real children suffuse this book, but Harris never confuses anecdotes with data. The originality of “The Nurture Assumption” lies not in the studies she cites, but in the way she has reconfigured them to explain findings that have puzzled psychologists for years.

First, researchers have been unable to find any child-rearing practice that predicts children's personalities, achievements or problems outside the home. Parents don't have a single child-rearing style anyway, because how they treat their children depends largely on what the children are like. They are more permissive with easy children and more punitive with defiant ones.

Second, even when parents do treat their children the same way, the children turn out differently. The majority of children of troubled and even abusive parents are resilient and do not suffer lasting psychological damage. Conversely, many children of the kindest and most nurturing parents succumb to drugs, mental illness or gangs.

Third, there is no correlation -- zero -- between the personality traits of adopted children and their adoptive parents or other children in the home, as there should be if “home environment” had a strong influence.

Fourth, how children are raised -- in day care or at home, with one parent or two, with gay parents or straight ones, with an employed mom or one who stays home -- has little or no influence on children's personalities.

Finally, what parents do with and for their children affects children mainly when they are with their parents. For instance, mothers influence their children's play only while the children are playing with them; when the child is playing alone or with a playmate, it makes no difference what games were played with mom.

Most psychologists have done what anyone would do when faced with this astonishing, counterintuitive evidence -- they've tried to dismiss it. Yet eventually the most unlikely idea wins if it has the evidence to back it up. As Carole Wade, a behavioral scientist, puts it, trying to squeeze existing facts into an outdated theory is like trying to fit a double-sized sheet onto a queen-sized bed. One corner fits, but another pops out. You need a new sheet or a new bed.

“The Nurture Assumption” is a new sheet, one that covers the discrepant facts. I don’t agree with all the author’s claims and interpretations; often she reaches too far to make her case -- throwing the parent out with the bath water, as it were. But such criticisms should not detract from her accomplishment, which is to give us a richer, more accurate portrait of how children develop than we’ve had from outdated Freudianism or piecemeal research.

The first problem with the nurture assumption is nature. The findings of behavior genetics show, incontrovertibly, that many personality traits and abilities have a genetic component. No news here; many others have reported this research, notably the psychologist Jerome Kagan in “The Nature of the Child.” But genes explain only about half of the variation in people’s personalities and abilities. What’s the other half?

Harris’s brilliant stroke was to change the discussion from nature (genes) and nurture (parents) to its older version: heredity and environment. “Environment” is broader than nurture. Children, like adults, have two environments: their homes and their world outside the home; their behavior, like ours, changes depending on the situation they are in. Many parents know the eerie experience of having their child’s teacher describe their child in terms they barely recognize (“my kid did what?”). Children who fight with their siblings may be placid with friends. They can be honest at home and deceitful at school, or vice versa. At home children learn how their parents want them to behave and what they can get away with; but, Harris shows, “These patterns of behavior are not like albatrosses that we have to drag along with us wherever we go, all through our lives. We don’t even drag them to nursery school.”

Harris has taken a factor, peers, that everyone acknowledges is important, but instead of treating it as a nuisance in children’s socialization, she makes it a major player. Children are merciless in persecuting a kid who is different -- one who says “Warshington” instead of “Washington,” one who has a foreign accent or wears the wrong clothes. (Remember?) Parents have long lamented the apparent cruelty of children and the obsessive conformity of teen-agers, but, Harris argues, they have missed the point: children’s attachment to their peer groups is not irrational, it’s essential. It is evolution’s way of seeing to it that kids bond with each other, fit in and survive. Identification with the peer group, not identification with the parent, is the key to human survival. That is why children have their own traditions, words, rules, games; their culture operates in opposition to adult rules. Their goal is not to become successful adults but successful children. Teen-agers want to excel as teen-agers, which means being unlike adults.

It has been difficult to tease apart the effects of parents and peers, Harris observes, because children’s environments often duplicate parental values, language and customs. (Indeed, many parents see to it that they do.) To see what factors are strongest, therefore, we must look at situations in which these environments clash. For example, when parents value academic achievement and a student’s peers do not, who wins? Typically, peers. Differences between black and white teen-agers in achievement have variously been attributed to genes or single mothers, but differences vanish when researchers control for the peer group: whether its members value achievement and expect to go to college, or regard academic success as a hopeless dream or sellout to “white” values.

Are there exceptions? Of course, and Harris anticipates them. Some children in anti-intellectual peer groups choose the lonely path of nerdy devotion to schoolwork. And some have the resources, from genes or parents, to resist peer pressure. But exceptions should not detract from the rule: that children, like adults, are oriented to their peers. Do you dress, think and behave more like others of your generation, your parents or the current crop of adolescents?

by Carol Tavris, NY Times (1998) | Read more:
Image: Goodreads
[ed. See also: The Nurture Assumption: First Chapter (Judith Rich Harris, NY Times).]

Ryan Shorosky

Jean Francois De Witte
via:

Saturday, April 21, 2018

The End of the Joint As We Know It

Willie Nelson may be a legendary country musician, but he is first and foremost the world’s most famous joint ambassador. Legend has it that he once smoked a joint — what he referred to in his 1988 autobiography as an “Austin Torpedo” — on the roof of the White House with Jimmy Carter’s middle son. Snoop Dogg, another self-appointed sticky-icky spokesman, says that when the two met for an Amsterdam stoner summit in 2008, Nelson showed up with not one but three smoking devices, and promptly puffed him to the floor. (“I had to hit the timeout button,” Snoop later said of smoking with Nelson.) A quick Google search will turn up an entire genre of Nelson portraiture in which the singer is framed by the haze of a freshly lit jay.

All that to say, you might be surprised to hear that Nelson is no longer much of a joint guy. “I use a vaporizer these days,” he told the British magazine Uncut in 2015. “Even though marijuana smoke is not as dangerous as cigarette smoke, any time you put any kind of smoke in your lungs it takes a toll of some kind.” GQ investigated Nelson’s claims later that year, uncovering that, while joints were still very much part of his rotation, a good portion of his pot consumption had shifted to vape pens so as to be more discreet. “And he eats candy or has oil at night for sleeping,” Nelson’s wife, Annie, added.

That the most famous stoner in the world is now exploring more healthful avenues for pot consumption is a sign of the times. According to the cannabis consumer insights firm BDS Analytics, which has logged more than 800 million transactions at dispensaries across Colorado, Washington, Oregon, and California, legal sales of concentrates (vape pen cartridges and dabs), topicals (patches, salves, lotions), and edibles are rapidly outgrowing those of loose-leaf weed product — what cannabis industry types refer to as “flower.” In 2014, the year that Colorado first began selling legal pot, 65 percent of sales revenue came from flower, while only 13 percent came from concentrates. Last year, flower made up only 47 percent of total sales in the state. The new majority of the market is distributed to concentrates at 29 percent and edibles — which barely existed at the dawn of the legal pot movement — at 15 percent. “There’s more choices available to people, and in that respect we’re seeing a lot of evolution in terms of consumption methods,” Linda Gilbert, the managing director of BDS Analytics’ consumer research division, told me. “There is an evolution of looking at marijuana in the consumer’s mind, from being about getting stoned to actually thinking of it as a wellness product.”

And also as a part of everyday life. Where there were once bowls, grinders, and rolling papers, there are now myriad sleek contraptions: dainty plastic oil pens and weed walkie-talkies and smokable iPhone cases. These days, the consumption method of choice may not even be inhalable. Maybe it’s a canister of Auntie Dolores’s vegan, sugar-free pretzels. Or a $6 bottle of Washington state’s Happy Apple cider. Perhaps you go the transdermal route and slather on some $90 Papa & Barkley THC-and-CBD-infused Releaf Balm. No matter the product, the packaging has traded the psychedelic pot leafs of yore for clean lines and Helvetica fonts. (...)

As smoking accessories have modernized in the past 10 years, and as more states have legalized sales, grinding and rolling up bud has gradually become a more obscure ritual. And the era of the hastily rolled marijuana cigarette — crystallized by everyone from Cheech and Chong to Barack Obama — is slowly coming to a close. “If you fast-forward 10 years and look back at the cannabis market, I’ll take a guess that in some ways we’ll think about consuming cannabis flower like we think about consuming a cigar now,” said Alan Gertner, the CEO of Hiku, a Canadian cannabis producer and retailer that aims to make pot consumption more mainstream. “It’s a ritual, it’s a heritage moment, it’s about celebration. But ultimately cannabis flower for any individual is somewhat hard to interact with. The idea that a 20-year-old is going to learn to roll a joint is sort of ludicrous.” (...)

As wellness-mania swept the nation, pot-trepreneurs saw a chance to capitalize on a portion of the estimated $3.7 trillion market worldwide. Over the past few years, cannabis and its nonpsychoactive byproducts have taken the form of medicine: inhalers designed to dole out exact dosages, supplements, patches, and tinctures. Though state laws still prohibit pot-related companies from advertising on any mainstream platform, many of them now see the value of building recognizable, commercially viable brands. The idea is that to encourage more first-time pot consumers, the point of entry must be significantly less complicated than it used to be. That can mean anything from offering a prerolled joint to a pill you take before going to bed. “Right now the market is still dominated by hardcore stoners,” Micah Tapman, a cofounder and managing director at the Colorado investment firm CanopyVentures, said. “If they’re hitting something they want 50 milligrams. Whereas the new consumer that is coming up will be a much lighter-weight consumption. The soccer mom demographic is probably going to gravitate toward the very discreet vaporizers or topicals. They’re not typically going to want a bong sitting on their coffee table.”

In other words, less horticulture, more convenience. Gertner, who previously worked as a head of sales at Google before starting his own coffee, cannabis, and clothing brand, likens the current weed consumption landscape to that of the North American coffee market in the last 30 years. People smoke joints for the same reason they used to drink only plain black coffee: potency. “It was basically like: How quickly can I get caffeine into my system?” Gertner said. As companies like Starbucks introduced new nomenclature around coffee and a reworked guidebook for how to consume it, people began to see the beverage differently. “The coffee experience is now grounded in community, as opposed to grounded in the idea in just straight-up caffeine consumption,” Gertner said. “You went from a world where we optimized for potency to a world where we started to optimize for brand, convenience, and taste. You’re not necessarily drinking a Frappuccino because of caffeine content, you’re drinking a Frappuccino for other reasons.” The cannabis market is on a similar path of mass consumption. The earthy taste, smell, and delivery of weed smoke are being muted and manipulated. Just like drinking Frappuccinos, that means customers are sometimes ingesting extra calories or unsavory fillers in the process. And like most artisanal coffee brands, these professionalized cannabis brands can also charge a premium. The joint will always have a place in weed culture, but advanced technology has made it functionally outdated. “You start to think of this future where you say, I can have a cannabis drink, why would I smoke a joint?” Gertner said.

by Alyssa Bereznak, The Ringer | Read more:
Image: uncredited

Michael Cohen and the End Stage of the Trump Presidency

On May 1, 2003, the day President George W. Bush landed on the U.S.S. Abraham Lincoln in front of the massive “Mission Accomplished” sign, I was in Baghdad performing what had become a daily ritual. I went to a gate on the side of the Republican Palace, in the Green Zone, where an American soldier was receiving, one by one, a long line of Iraqis who came with questions and complaints. I remember a man complaining that his house had been run over by a tank. There was a woman who had been a government employee and wanted to know about her salary. The soldier had a form he was supposed to fill out with each person’s request and that person’s contact information. I stood there as the man talked to each person and, each time, said, “Phone number?” And each person would answer some version of “The phone system of Iraq has been destroyed and doesn’t work.” Then the soldier would turn to the next person, write down the person’s question or complaint, and then ask, “Phone number?”

I arrived in Baghdad on April 12th of that year, a few days after Saddam’s statue at Firdos Square had been destroyed. There were a couple of weeks of uncertainty as reporters and Iraqis tried to gauge who was in charge of the country and what the general plan was. There was no electricity, no police, no phones, no courts, no schools. More than half of Iraqis worked for the government, and there was no government, no Army, and so no salaries for most of the country. At first, it seemed possible that the Americans simply needed a bit of time to communicate the new rules. By the end of April, though, it was clear: there was no plan, no new order. Iraq was anarchic.

We journalists were able to use generators and satellite dishes to access outside information, and what we saw was absurd. Americans seemed convinced things were going well in Iraq. The war—and the President who launched it—were seen favorably by seventy per cent of Americans. Then came these pictures of a President touting “Mission Accomplished”—the choice of words that President Trump used in a tweet on Saturday, the morning after he ordered an air strike on Syria. On the ground, we were not prophets or political geniuses. We were sentient adults who were able to see the clear, obvious truth in front of us. The path of Iraq would be decided by those who thrived in chaos.

I had a similar feeling in December, 2007. I came late to the financial crisis. I had spent much of 2006 and 2007 naïvely swatting away warnings from my friends and sources who told me of impending disaster. Finally, I decided to take a deep look at collateralized debt obligations, or C.D.O.s, those financial instruments that would soon be known as toxic assets. I read technical books, talked to countless experts, and soon learned that these were, in Warren Buffett’s famous phrase, weapons of financial mass destruction. They were engineered in such a way that they could exponentially increase profits but would, also, exponentially increase losses. Worse, they were too complex to be fully understood. It was impossible, even with all the information, to figure out what they were worth once they began to fail. Because these C.D.O.s had come to form the core value of most major banks’ assets, no major bank had clear value. With that understanding, the path was clear. Eventually, people would realize that the essential structure of our financial system was about to implode. Yet many political figures and TV pundits were happily touting the end of a crisis. (Larry Kudlow, now Trump’s chief economic adviser, led the charge of ignorance.)

In Iraq and with the financial crisis, it was helpful, as a reporter, to be able to divide the world into those who actually understand what was happening and those who said hopeful nonsense. The path of both crises turned out to be far worse than I had imagined.

I thought of those earlier experiences this week as I began to feel a familiar clarity about what will unfold next in the Trump Presidency. There are lots of details and surprises to come, but the endgame of this Presidency seems as clear now as those of Iraq and the financial crisis did months before they unfolded. Last week, federal investigators raided the offices of Michael Cohen, the man who has been closer than anybody to Trump’s most problematic business and personal relationships. This week, we learned that Cohen has been under criminal investigation for months—his e-mails have been read, presumably his phones have been tapped, and his meetings have been monitored. Trump has long declared a red line: Robert Mueller must not investigate his businesses, and must only look at any possible collusion with Russia. That red line is now crossed and, for Trump, in the most troubling of ways. Even if he were to fire Deputy Attorney General Rod Rosenstein and then have Mueller and his investigation put on ice, and even if—as is disturbingly possible—Congress did nothing, the Cohen prosecution would continue. Even if Trump pardons Cohen, the information the Feds have on him can become the basis for charges against others in the Trump Organization.

This is the week we know, with increasing certainty, that we are entering the last phase of the Trump Presidency. This doesn’t feel like a prophecy; it feels like a simple statement of the apparent truth. I know dozens of reporters and other investigators who have studied Donald Trump and his business and political ties. Some have been skeptical of the idea that President Trump himself knowingly colluded with Russian officials. It seems not at all Trumpian to participate in a complex plan with a long-term, uncertain payoff. Collusion is an imprecise word, but it does seem close to certain that his son Donald, Jr., and several people who worked for him colluded with people close to the Kremlin; it is up to prosecutors and then the courts to figure out if this was illegal or merely deceitful. We may have a hard time finding out what President Trump himself knew and approved.

However, I am unaware of anybody who has taken a serious look at Trump’s business who doesn’t believe that there is a high likelihood of rampant criminality. In Azerbaijan, he did business with a likely money launderer for Iran’s Revolutionary Guard. In the Republic of Georgia, he partnered with a group that was being investigated for a possible role in the largest known bank-fraud and money-laundering case in history. In Indonesia, his development partner is “knee-deep in dirty politics”; there are criminal investigations of his deals in Brazil; the F.B.I. is reportedly looking into his daughter Ivanka’s role in the Trump hotel in Vancouver, for which she worked with a Malaysian family that has admitted to financial fraud. Back home, Donald, Jr., and Ivanka were investigated for financial crimes associated with the Trump hotel in SoHo—an investigation that was halted suspiciously. His Taj Mahal casino received what was then the largest fine in history for money-laundering violations.

Listing all the financial misconduct can be overwhelming and tedious. I have limited myself to some of the deals over the past decade, thus ignoring Trump’s long history of links to New York Mafia figures and other financial irregularities. It has become commonplace to say that enough was known about Trump’s shady business before he was elected; his followers voted for him precisely because they liked that he was someone willing to do whatever it takes to succeed, and they also believe that all rich businesspeople have to do shady things from time to time. In this way of thinking, any new information about his corrupt past has no political salience. Those who hate Trump already think he’s a crook; those who love him don’t care.

I believe this assessment is wrong. Sure, many people have a vague sense of Trump’s shadiness, but once the full details are better known and digested, a fundamentally different narrative about Trump will become commonplace. Remember: we knew a lot about problems in Iraq in May, 2003. Americans saw TV footage of looting and heard reports of U.S. forces struggling to gain control of the entire country. We had plenty of reporting, throughout 2007, about various minor financial problems. Somehow, though, these specific details failed to impress upon most Americans the over-all picture. It took a long time for the nation to accept that these were not minor aberrations but, rather, signs of fundamental crisis. Sadly, things had to get much worse before Americans came to see that our occupation of Iraq was disastrous and, a few years later, that our financial system was in tatters.

The narrative that will become widely understood is that Donald Trump did not sit atop a global empire. He was not an intuitive genius and tough guy who created billions of dollars of wealth through fearlessness. He had a small, sad global operation, mostly run by his two oldest children and Michael Cohen, a lousy lawyer who barely keeps up the pretenses of lawyering and who now faces an avalanche of charges, from taxicab-backed bank fraud to money laundering and campaign-finance violations.

by Adam Davidson, New Yorker |  Read more:
Image: Yana Paskova / Getty
[ed. I'm usually loath to post anything about Trump, but in this case making an exception. With our lickspittle Congress (Republicans and Democrats) and generally absent and clueless American electorate, we shall see.] 

Dream Home


Carlos Diniz, Monarch Bay Homes, Entry from Street, 1961
via:
[ed. Central court yard and entry, master bedroom/bath on the left, kitchen and dining room right, guest bedroom/bath in the middle, living room/deck in back or to the side. Perfect.]

The Beach Boy

The friends met for dinner, as they did the second Sunday of every month, at a small Italian restaurant on the Upper East Side. There were three couples: Marty and Barbara, Jerry and Maureen, and John and Marcia, who had recently returned from a weeklong island getaway to celebrate their twenty-ninth wedding anniversary. “Were the beaches beautiful? How was the hotel? Was it safe? Was it memorable? Was it worth the money?” the friends asked.

Marcia said, “You had to see it to believe it. The ocean was like bathwater. The sunsets? Better than any painting. But the political situation, don’t get me started. All the beggars!” She put a hand over her heart and sipped her wine. “Who knows who’s in charge? It’s utter chaos. Meanwhile, the people all speak English! ” The vestiges of colonialism, the poverty, the corruption—it had all depressed her. “And we were harassed,” she told the friends. “By prostitutes. Male ones. They followed us down the beach like cats. The strangest thing. But the beach was absolutely gorgeous. Right, John?”

John sat across the table, swirling his spaghetti. He glanced up at Marcia, nodded, winked.

The friends wanted to know what the prostitutes had looked like, how they’d dressed, what they’d said. They wanted details.

“They looked like normal people,” Marcia said, shrugging. “You know, just young, poor people, locals. But they were very complimentary. They kept saying, ‘Hello, nice people. Massage? Nice massage for nice people?’ ”

“Little did they know!” John joked, furrowing his eyebrows like a maniac. The friends laughed.

“We’d read about it in the guidebook,” Marcia said. “You’re not supposed to acknowledge them at all. You don’t even look them in the eye. If you do, they’ll never leave you alone. The beach boys. The male prostitutes, I mean. It’s sad,” she added. “Tragic. And, really, one wonders how anybody can starve in a place like that. There was food everywhere. Fruit on every tree. I just don’t understand it. And the city was rife with garbage. Rife!” she proclaimed. She put down her fork. “Wouldn’t you say, hon?”

“I wouldn’t say ‘rife,’ ” John answered, wiping the corners of his mouth with his cloth napkin. “Fragrant, more like.”

The waiter collected the unfinished plates of pasta, then returned and took their orders of cheesecake and pie and decaffeinated coffee. John was quiet. He scrolled through photos on his cell phone, looking for a picture he’d taken of a monkey seated on the head of a Virgin Mary statue. The statue was painted in bright colors, and its nose was chipped, showing the white, chalky plaster under the paint. The monkey was black and skinny, with wide-spaced, neurotic eyes. Its tail curled under Mary’s chin. John turned the screen of his phone toward the table.

“This little guy,” he said.

“Aw!” the friends cried. They wanted to know, “Were the monkeys feral? Were they smelly? Are the people Catholic? Are they all very religious there?”

“Catholic,” Marcia said, nodding. “And the monkeys were everywhere. Cute but very sneaky. One of them stole John’s pen right out of his pocket.” She rattled off whatever facts she could remember from the nature tour they’d taken. “I think there are laws about eating the monkeys. I’m not so sure. They all spoke English,” she repeated, “but sometimes it was hard to understand them. The guides, I mean, not the monkeys.” She chuckled.

“The monkeys spoke Russian, naturally,” John said, and put away his phone.

The table talk moved on to plans for renovating kitchens, summer shares, friends’ divorces, new movies, books, politics, sodium, and cholesterol. They drank the coffees, ate the desserts. John peeled the wrapper off a roll of antacids. Marcia showed off her new wristwatch, which she’d purchased duty-free at the airport. Then she reapplied her lipstick in the reflection in her water glass. When the check came, they all did the math, divvying up the cost. Finally, they paid and went out onto the street and the women hugged and the men shook hands.

“Welcome home,” Jerry said. “Back to civilization.”

“Ooh-ooh ah-ah!” John cried, imitating a monkey.

“Jesus, John,” Marcia whispered, blushing and batting the air with her hand as if shooing a fly.

Each couple went off in a different direction. John was a bit drunk. He’d finished Marcia’s second glass of wine because she’d said it was giving her a headache. He took her arm as they turned the corner onto East Eighty-second Street toward the Park. The streets were nearly empty, late as it was. The whole city felt hushed, focussed, like a young dancer counting her steps.

Marcia fussed with her silk scarf, also purchased duty-free at the airport. The pattern was a paisley print in red and black and emerald green and had reminded her of the vibrant colors she’d seen the locals wearing on the island. Now she regretted buying the scarf. The tassels were short and fuzzy, and she thought they made the silk look cheap. She could give the scarf away as a gift, she supposed, but to whom? It had been so expensive, and her closest friends—the only people she would ever spend so much money on—had just seen her wearing it. She sighed and looked up at the moon as they entered the Park.

“Thank God Jerry and Maureen are getting along again,” Marcia said. “It was exhausting when they weren’t.”

“Marty was funny about the wine, wasn’t he?” John said. “I told him I was fine with Syrah. What does it matter? Que sera, sera.” He unhooked his arm from Marcia’s elbow and put it around her shoulder.

“It gave me such a headache,” Marcia complained. “Should we cut across the field, or go around?”

“Let’s be bold.”

They stepped off the gravel onto the grass. It was a dark, clear night in the Park, quiet except for the sound of distant car horns and ripping motors echoing faintly through the trees. John tried for a moment to forget that the city was right there, surrounding them. He’d been disappointed by how quickly his life had returned to normal after the vacation. As before, he woke up in the morning, saw patients all day long, returned home to eat dinner with Marcia, watched the evening news, bathed, and went to bed. It was a good life, of course. He wasn’t suffering from a grave illness; he wasn’t starving; he wasn’t being exploited or enslaved. But, gazing out the window of the tour bus on the island, he had felt envious of the locals, of their ability to do whatever was in their nature. His own struggles seemed like petty complications, meaningless snags in the dull itinerary that was his life. Why couldn’t he live by instinct and appetite, be primitive, be free?

At a rest stop, John had watched a dog covered in mange and bleeding pustules rub itself against a worn wooden signpost. He was lucky, he thought, not to be that dog. And then he felt ashamed of his privilege and his discontentedness. “I should be happy,” he told himself. “Marcia is.” Even the beggars tapping on car windows, begging for pennies, were smiling. “Hello, nice people,” the beach boys had said. John had wanted to return their salutations and ask what it was that they had to offer. He’d been curious. But Marcia had shushed him, taken his hand, and plodded down the beach with her eyes fixed on the blank sand.

Crossing the lawn in Central Park, John now tried to recall the precise rhythm of the crashing waves on the beach on the island, the smell of the ocean, the magic and the danger he’d sensed brewing under the surface of things. But it was impossible. This was New York City. When he was in it, it was the only place on earth. He looked up. The moon was just a sliver, a comma, a single eyelash in the dark, starless sky.

“I forgot to call Lenore,” Marcia was saying as they walked. “Remind me tomorrow. She’ll be upset if I don’t call. She’s so uptight.”

They reached the edge of the lawn and stepped onto a paved path that led them up to a bridge over a plaza, where people were dancing in pairs to traditional Chinese music. John and Marcia stopped to watch the dark shapes moving in the soft light of lanterns. A young man on a skateboard rumbled past them.

“Home sweet home,” Marcia said.

John yawned and tightened his arm around her shoulder. The silk of Marcia’s scarf was slippery, like cool water rippling between his fingers. He leaned over and kissed her forehead. There she was, his wife of nearly thirty years. As they walked on, he thought of how pretty she’d been when they were first married. In all their years together, he had never been interested in other women, had never strayed, had even refused the advances of a colleague one night, a few years ago, at a conference in Baltimore. The woman had been twenty years his junior, and when she invited him up to her room John had blushed and made a stuttering apology, then spent the rest of the evening on the phone with Marcia. “What did she expect from me?” he’d asked. “Some kind of sex adventure?”

“We can watch that movie when we get home,” Marcia said as they reached the edge of the Park. “The one about the jazz musician.”

“Whatever you like,” John said. He yawned again.

“Maureen said it was worth watching.”

by Ottessa Moshfegh, New Yorker | Read more:
Image: David Brandon Geeting for The New Yorker / Design by Tamara Shopsin

Friday, April 20, 2018

Twenty Years Later: On Massive Attack and Mezzanine

In 1998, when I was a writer for Vibe magazine (which was the leading black culture journal), I went to London to interview the trip-hop kings Massive Attack. They were preparing to release their third album, the beautifully complex and brooding Mezzanine. Although they collaborated with other singers and musicians, the core Massive trio consisted of Grant “Daddy G” Marshall, Andy “Mushroom” Vowles, and Robert “3D” Del Naja. Del Naja penned most of the Dadaistic lyrics on Mezzanine and thought of its title.

As a pop journalist, I had already covered their contemporaries Portishead and Tricky, so of course I felt it was my duty and destiny to fly to London to cover Mezzanine. I had to beg the cornball editor in chief to send me, and in the end, the story was never published. But I never forgot the experience of sitting with Massive, trying to refrain from being too much of a fanboy. The year before, when I’d visited Paris, I’d taken Blue Lines along to serve as my soundtrack of the city. Me and my beautiful homegirl Wendy Washington rode out to the Palace of Versailles as Massive’s remake of the soul classic “Be Thankful for What You Got” blared from the speakers.

“Mezzanine is that place in between, when you’re not sure if it’s yesterday or today,” Del Naja told me at Olympic Studios in London. “That little space where it’s quite scary and erotic.” Also known as an excellent graffitist and painter (inspired by Jean-Michel Basquiat) and rumored to be the mysterious street artist Banksy (a claim he denied), Del Naja had seemingly become the leader of the group. He was its resident auteur, and his Francis Bacon view of the world was visible in the band’s videos, album designs, and stage lighting.

The band first came together in their hometown of Bristol. Though Del Naja was shorter than the lanky Daddy G or the equally tall Mushroom, who were both somewhat reserved, his presence towered over the group, and it caused an earthquake break between the brotherhood. “When we got together to record, we realized the amount of creative friction between us,” Mushroom would confess later. “In fact, we wound up recording in separate studios.” The producer Neil Davidge later described the process as “messy,” but from that angst, tension, and messiness, Massive Attack delivered a masterpiece.

Mezzanine, which celebrates its twentieth anniversary on April 20, was a departure from the gritty electronica of Massive Attack’s first two projects, Blue Lines and Protection. It incorporates more rock elements, including a newly hired band with the guitarist Angelo Bruschini, formerly of the New Wave band the Numbers, leading the charge and change. Mezzanine is an album best listened to loud, preferably on earphones, to properly hear the layers of weirdness and rhythms, a soulful sound collage that was miles away from the “Parklifes” and “Champagne Supernovas” of their Brit-pop contemporaries Blur and Oasis.

“In the beginning, the sampler was our main musical instrument,” Daddy G said in his slight West Indian accent. “When we first formed Massive Attack, basically we were DJs who went into the studio with our favorite records and created tracks. At the time, we tried to rip off the entire style of American hip-hop performers, but we realized, as artists, it’s important to be yourself. We realized it made no sense for us to talk about the South Bronx. Slowly but surely, we had to reclaim our identities as Brit artists who wanted to do something different with our music.”

Massive Attack unintentionally kicked off a new British Invasion in the nineties that was as powerful as the Beatles in the sixties, Led Zeppelin in the seventies, or Duran Duran in the eighties. Beginning with their sophisticated debut, Blue Lines, which featured the vocalist Shara Nelson on the masterful “Unfinished Sympathy” and “Safe from Harm,” there was something special about their blunted cinematic (Martin Scorsese was another hero) sound that had a paranoid artfulness. For me, having long grown bored with the stunted growth of many American rap artists during that era of “jiggy” materialism and thug tales of nineties rap, their slowed-down music (tape loops, samples, and beats) created an often dreamy, sometimes nightmarish sound that was fresh and futuristic. The author Will Self called it a “sinuous, sensual, subversive soundscape.”

Blue Lines was accessible avant-garde and comprehensible experimentation. From first listen, I could tell they were as inspired by the pioneering producers Marley Marl, Lee “Scratch” Perry, and Prince as they were by Burt Bacharach, John Barry, and Brian Eno. The trip-hop label was bestowed on the group by the Brit journalist Jonathan Taylor to describe the trippy music that was simultaneously street and psychedelic. Trip-hop was a tag that, like jazz, was often rejected by the practitioners, but it fit perfectly. A few years later, when I started contributing short fiction to the Brown Sugar erotica series, I imagined the stories as textual films, and it was Massive Attack that supplied the seductive score.

“Angel,” the third single from Mezzanine, would go on to become one of their most licensed songs, used for the opening credits of the series House as well as by the director George Miller in Mad Max: Fury Road. Over the years, Massive Attack’s music has been used in many movies (Pi, The Matrix, The Insider) and television programs (Luther, True Blood, Power). The videos for their own songs, including the four singles from Mezzanine (“Risingson,” “Teardrop,” “Angel,” and “Inertia Creeps”), were always sinister and disturbing. Massive’s hybrid music achieved pop-cult status, selling millions of copies while still being critically lauded.

Yet in 1998, at least, the group itself was still somewhat anonymous. They could walk around the city without being bothered. Mushroom and I popped out of the studio and went to a juice stand. He told me about his years living in New York, where he was the protégé of Devastating Tito from the rap group the Fearless Four. “Have you ever heard of them?” he asked shyly. When I told him my best friend Jerry Rodriguez had directed their video for “Problems of the World” in 1983, Mushroom smiled. “Finally,” he replied, “someone who knows about the old school.”

by Michael A. Gonzales, Paris Review | Read more:
Image: Massive Attack, uncredited

The Mortgage Business Is No Fun Anymore

[ed. See: Wells Fargo Pays $1 Billion to Federal Regulators.]

Mortgages.

A few years ago I tried to add up all the fines that Bank of America Corp. had paid for doing bad mortgage stuff. There were a lot of them, enough that, as the internet decays, the chart that I put together appears to no longer be readable. The headline number -- $68 billion of fines, settlements, etc. at the time -- was big, but more interesting to me was the repetitiveness of the fines. Countrywide Financial Corp. sold bad mortgages to Fannie Mae and Freddie Mac between 2004 and 2008, and Bank of America bought Countrywide in 2008, and everyone sued, and Bank of America reached settlements with Freddie (in 2011) and Fannie (in 2013) over those mortgages. And then it reached a settlement with the Federal Housing Finance Agency over those mortgages in 2014. And then it lost a trial and was ordered to pay more money to the Justice Department over some of those loans, though that was later overturned. And then later in 2014 it reached a $16.7 billion settlement with the Justice Department and other agencies covering, among other things, those same loans. Bank of America basically spent the first half of this decade revisiting its mortgage misdeeds, over and over again, and paying for them each time.

It must have been pretty tedious for Bank of America! I wrote at the time:
A popular criticism of the modern approach to punishing bank misdeeds -- giant fines imposed on the banks, not much in the way of individual punishments and a preference for settlements rather than trials -- is that it turns the fines into just a "cost of doing business," normalizing misbehavior rather than preventing future wrongdoing. 
If a bank does a bad mortgage thing, and you find the person who did the thing (or who signed off on the thing, or who ran the bank when it did the thing, or whatever), and you put him in prison, then that sends a powerful message that the thing was a crime, that it was not business as usual, that it could not be tolerated. If, on the other hand, you hold no individuals responsible, but come back to the bank every year and say "hey remember that mortgage thing? that'll be another $2 billion," then arguably you send the message that the bad mortgage thing was expected in the mortgage business, and that fines for doing it are just a normal part of life.
But you could say that same thing with a different emphasis. If a bank does a bad mortgage thing, and you find the person who did the thing and put him in prison, then the bank could reasonably conclude that the problem was that person, that the bad thing was anomalous, that there is nothing wrong with its mortgage business model as a whole. If, on the other hand, you come back to the bank every year and fine it a few billion dollars for the same mortgage thing, then that will send the bank a message about the costs and benefits of the mortgage business. The message is that the misconduct is not the work of a few bad criminals who are Not Like Us, but that it is endemic to the business model. The bank might reasonably conclude that frequent multibillion-dollar fines are a cost of doing business, and not worth it.

Which approach is correct depends on the facts, of course. If the mortgage business is great, and reliably adds to the long-term prosperity of the nation, but occasionally a mortgage banker murders a customer, then you should probably put the murder bankers in prison and not mess with the business model. But if an aspect of the mortgage business -- say, the originate-to-distribute model that many commentators blame for creating moral hazard, loose underwriting standards and a bubble in house prices -- seems to be pervasively bad and dangerous for the broader economy, and if every bank involved in the mortgage business seems to be getting in trouble for the same sorts of misbehavior, then maybe you do want to add to the costs of doing business. Maybe the way to think about it is not as anomalous crime but as a bad business model that imposes social costs, and to force banks to internalize those costs in the form of huge and frequent fines.

Anyway:
One measure of how much things have changed in the last decade at Bank of America Corp.: The firm has stopped reporting fees from its mortgage business. ... 
After billions in fines and payouts to regulators and investors in the years after the financial crisis, Bank of America has moved away from securitizing and servicing residential mortgages. The firm originated $9.4 billion of new mortgages in the first quarter; in 2009, it topped $100 billion in one quarter.
There is still plenty of mortgage interest from loans held on Bank of America's balance sheet, "but the business of making mortgages to sell them -- the specialty of subprime lender Countrywide Financial Corp. that Bank of America bought in 2008 -- has largely become a relic." Regulators and prosecutors made it incredibly tedious for Bank of America to be in that business, and now Bank of America ... just ... isn't.

Of course this is not purely a story of the fines; the decline of demand for private-label mortgage securitizations matters too. Nor is it a story of the decline of the originate-to-distribute model generally: Nonbank lenders like Quicken Financial have stepped in to replace the big banks in the originate-to-distribute model. (Nor is it even that new: After all, Countrywide was an originate-to-distribute upstart that competed with the big banks, until Bank of America bought it.) Still it is worth commemorating. People spent years arguing that any mortgage fines, no matter how huge and repetitive, didn't matter, that they were less than the profits the banks made from their misconduct, that they were just a "cost of doing business," that they would change nothing. But the fines were enough of a cost of doing business for Bank of America that it's now more or less out of the business.

by Matt Levine, Bloomberg |  Read more:
[ed. Or you could do both: fine banks and prosecute bank executives who 'did the thing'. Why is it an either or proposition?]

DC's Low Graduation Rates

US News: DC Schools Brace For Catastrophic Drop In Graduation Rates. “Catastrophic” isn’t hyperbole; the numbers are expected to drop from 73% (close to the national average of 83%) all the way down to 42%.

There’s no debate about why this is happening – it’s because the previous graduation rate was basically fraudulent, inflated by pressure to show that recent “reforms” were working. Last year there was a big investigation, all the investigators agreed it was fraudulent, DC agreed to do a little less fraud this year, and this is the result. It’s pretty damning, given how everybody was praising the reforms and holding them up as a national model and saying this proved that Tough But Fair Education Policy could make a difference:
As far as scandals in the education policy world go, D.C. schools so profoundly miscalculating graduation rates at a time when the high-profile school district had been so self-laudatory about its achievements may be difficult to top […] Indeed, when Michelle Rhee took the reins of the flailing school system a decade ago, it galvanized the education reform movement, which had just begun blossoming around the country, and ushered in a host of controversial changes that included the shuttering of multiple schools, firing of hundreds of teachers and the institution of new teacher evaluation and compensation models. 
The changes not only dramatically altered the local political landscape in Washington but also shined a national spotlight on D.C. schools that prompted other urban school districts and education policy researchers to consider the nation’s capital a bellwether for the entire education reform movement.
Well, darn.

But the interesting bit isn’t just that DC schools are doing worse than we thought. It’s that DC schools are doing amazingly, uniquely, abysmally bad, below what should even be possible. We make fun of states like Mississippi and Alabama, but both have graduation rates around 80%. The lowest graduation rate in any of the fifty states is in Oregon, which still has 69%. And we are being told DC is 42%!

When we discussed this in the last links thread, people had a couple of explanations:

1. Washington DC has a terrible school system, with uniquely incompetent administrators.

2. Washington DC is poorer, blacker, and more segregated than any other state, and that leads to unique challenges other school systems don’t face. Even though everyone is doing their best, they face insurmountable structural difficulties.

3. Maybe the fraud was so bad that DC over-corrected, and now has stricter standards than anywhere else.

Which of these is most important?

by Scott Alexander, Slate Star Codex | Read more:

[Addendum/Edit: More discussion at Highlights From The Comments On DC Graduation Rates. Main update is that I underestimated the importance of absences, which are what’s causing a lot of the non-graduations, which there might be more of in DC, and which DC might be stricter about than other areas.]

[ed. Do read the Comments link above, it's a no-win situation. See also: Labor Renaissance in the Heartland]

Why a Cashmere Sweater Can Cost $2,000 … or $30

A plain, yet meticulously crafted, sweater made of the world’s finest cashmere can cost $2,000 or more from premier fashion labels such as Loro Piana. You can also grab a simple sweater of 100 percent cashmere off a discount rack at Uniqlo for as little as $29.90.

Made from the softest wool produced by certain breeds of goats, such as the Zalaa Ginst white goat and Tibetan Plateau goat, cashmere was once reserved for the wealthiest fashionistas. (Napoleon Bonaparte’s wife helped popularize the fabric.) But over the past two decades, its cachet skyrocketed and cheaper garments flooded the market.

Nearly $1.4 billion of cashmere garments were exported globally in 2016, up from $1.2 billion in 2010, according to United Nations trade data. That's nearly 5 million kilograms worth of pullovers, cardigans, and other tops. Now it’s seemingly everywhere, at every price point. Ubiquity can spell trouble for a product as it becomes more of a commodity, especially one that’s been historically marketed as a luxury item.

So what makes one sweater better than another? The price depends on the quality of the yarn, where the garment was manufactured, the number of units purchased by the brand, and the markup.  (...)

Cashmere goats are bred in various locations around the world, including Australia, China, and Mongolia, but Scotland and Italy are known for cashmere-manufacturing prowess. Luxury fashion houses such as Loro Piana and Brunello Cucinelli depend on the expertise of their workers to wash, treat, and refine the fabric. Cashmere, for instance, repels a lot of dye. Italy, however, has developed ways to achieve strong saturation.

Not every manufacturer takes such care. Blended versions of cashmere sweaters, available at most retailers these days, can contain varying quantities of the fabric. In some cases, as little as 5 percent of a garment is made from the good stuff, with the rest a combination of mass-market fabrics such as polyester or nylon. The product is still marketed as a “cashmere-blend.”

Occasionally, even fake cashmere makes it to store shelves. "There is certainly fraud on this front,” says Frances Kozen, a director at the Cornell Institute of Fashion and Fiber Innovation. Deceitful sellers and counterfeiters sometimes create cashmere blends labeled 100 percent cashmere that contain wool, viscose rayon, and acrylic—and possibly even rat fur, she says.

by Kim Bhasin and Justina Vasquez, Bloomberg | Read more:
Image: Alessia Pierdomenico/Bloomberg

Thursday, April 19, 2018

Gil Scott-Heron



[ed. No one like Gil. See also: I Think I'll Call It Morning (...be no rain) and The Bottle.]

The Uses and Abuses of “Neoliberalism”

Neoliberalism is the linguistic omnivore of our times, a neologism that threatens to swallow up all the other words around it. Twenty years ago, the term “neoliberalism” barely registered in English-language debates. Now it is virtually inescapable, applied to everything from architecture, film, and feminism to the politics of both Donald Trump and Hillary Clinton. Search the ProQuest database for uses of “neoliberalism” between 1989 and 1999, and you turn up fewer than 2,000 hits. From the crash of 2008–9 to the present, that figure already exceeds 33,000.

On the left, the term “neoliberalism” is used to describe the resurgence of laissez-faire ideas in what is still called, in most quarters, “conservative” economic thought; to wage battle against the anti-tax, anti-government, and anti-labor union agenda that has swept from the Reagan and Thatcher projects into the Tea Party revolt and the Freedom Caucus; to describe the global market economy whose imperatives now dominate the world; to castigate the policies of Bill and Hillary Clinton’s centrist Democratic Party; and to name the very culture and sensibilities that saturate our minds and actions.

Vital material issues are at stake in all these debates. But the politics of words are in play as well. Naming matters. It focuses agendas and attention. It identifies causation and strategies of action. It collects (or rebuffs) allies. Is the overnight ubiquity of the term “neoliberalism” the sign of a new acuteness about the way the world operates? Or is it a caution that a word, accelerating through too many meanings, employed in too many debates, gluing too many phenomena together, and cannibalizing too many other words around it, may make it harder to see both the forces loose in our times and where viable resistance can be found? (...)

Prying “neoliberalism” apart

For some of those startled by this sudden turn in political language, the success of “neoliberalism” is a measure of its substantive hollowness. After careful study, two political scientists labeled it a “conceptual trash-heap” in 2009: a word into which almost any phenomenon can be tossed and any number of meanings piled up for composting. Others have called it a vacant, empty epithet.

But the problem with neoliberalism is neither that it has no meaning nor that it has an infinite number of them. It is that the term has been applied to four distinctly different phenomena. “Neoliberalism” stands, first, for the late capitalist economy of our times; second, for a strand of ideas; third, for a globally circulating bundle of policy measures; and fourth, for the hegemonic force of the culture that surrounds and entraps us. These four neoliberalisms are intricately related, of course. But the very act of bundling them together, tucking their differences, loose ends, and a clear sense of their actually existing relations under the fabric of a single word, may, perversely, obscure what we need to see most clearly. What would each of these phenomena look like without the screen of common identity that the word “neoliberalism” imparts to them?

by Daniel Rodgers, Dissent |  Read more:
Image: R. Barraez D’Lucca
[ed. See also: How Neoliberalism Worms Its Way Into Your Brain.]