Monday, May 14, 2018

Starbucks: No Purchase Needed To Use The Restroom

Starbucks Executive Chairman Howard Schultz said Thursday that Starbucks' bathrooms will now be open to everyone, whether paying customers or not.

"We don't want to become a public bathroom, but we're going to make the right decision 100 percent of the time and give people the key," Schultz said at the Atlantic Council in Washington, D.C. "Because we don't want anyone at Starbucks to feel as if we are not giving access to you to the bathroom because you are 'less than.' We want you to be 'more than.' "

Two black men, business partners Donte Robinson and Rashon Nelson, both 23, were arrested on April 12 as they sat in a Philadelphia Starbucks, having asked to use the restroom without buying anything.

The store manager called the police after asking them to leave — a "terrible decision," Schultz said.

Video of their arrest sparked outrage on social media and accusations of racial bias. Protesters stood outside and inside the Philadelphia Starbucks store where the arrest occurred.

"The company, the management and me personally — not the store manager — are culpable and responsible. And we're the ones to blame," Schultz said Thursday.

"We were absolutely wrong in every way. The policy and the decision she made, but it's the company that's responsible," he added.

Schultz said the company had a "loose policy" around letting only paying customers use the bathroom, though it was up to the discretion of individual store managers.

by James Doubek, NPR |  Read more:
Image: via
[ed. You still gotta get a key though (and here's where I applaud McDonald's for their no-questions-asked policy). It might help if government actually did its fucking job and provided basic public services - public bathrooms being one of the most basic.]

An Urban Scene




Colley Whisson: An Urban Scene
via: Colley Whisson and YouTube

A Better Life


Jen Sorenson
via:

Why Buying a House Today is Much Harder Than in 1950

To understand just how unaffordable owning a home can be in American cities today, look at the case of a teacher in San Francisco seeking his or her first house.

Educators in the City by the Bay earn a median salary of $72,340. But, according to a new Trulia report, they can afford less than one percent of the homes currently on the market.

Despite making roughly $18,000 more than their peers in other states, many California teachers—like legions of other public servants, middle-class workers, and medical staff—need to resign themselves to finding roommates or enduring lengthy commutes. Some school districts, facing a brain drain due to rising real estate prices, are even developing affordable teacher housing so they can retain talent.

This housing math is brutal. With the average cost of a home in San Francisco hovering at $1.61 million, a typical 30-year mortgage—with a 20 percent down payment at today’s 4.55 percent interest rate—would require a monthly payment of $7,900 (more than double the $3,333 median monthly rent for a one-bedroom apartment last year).

Over the course of a year, that’s $94,800 in mortgage payments alone, clearly impossible on the aforementioned single teacher’s salary, even if you somehow put away enough for a down payment (that would be $322,000, if you’re aiming for 20 percent).

The figures become more frustrating when you compare them with the housing situation a previous generation faced in the late ’50s. The path an average Bay Area teacher might have taken to buy a home in the middle of the 20th century was, per data points and rough approximations, much smoother.

According to a rough calculation using federal data, the average teacher's salary in 1959 in the Pacific region was more than $5,200 annually (just shy of the national average of $5,306). At that time, the average home in California cost $12,788. At the then-standard 5.7 percent interest rate, the mortgage would cost $59 a month, with a $2,557 down payment. If your monthly pay was $433 before taxes, $59 a month wasn't just doable, it was also within the widely accepted definition of sustainable, defined as spending no more than a third of your monthly income on housing. Adjusted for today's dollars, that's a $109,419 home paid for with a salary of $44,493.

And that’s on just a single salary.
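For readers who want to check the arithmetic in the two scenarios above, the sketch below applies the standard fixed-rate amortization formula (my own illustration, not the article's or Trulia's methodology). The 1959 inputs reproduce the $59-a-month figure almost exactly; the 2018 inputs come out to roughly $6,560 a month for principal and interest alone, so the article's $7,900 presumably also folds in property taxes, insurance, or slightly different assumptions.

```python
# Rough cross-check of the mortgage arithmetic above (a sketch, not the
# article's or Trulia's methodology). Standard fixed-rate amortization:
#   payment = principal * r / (1 - (1 + r) ** -n)
# where r is the monthly interest rate and n the number of monthly payments.

def monthly_payment(price, down_payment_pct, annual_rate, years=30):
    """Monthly principal-and-interest payment on a fixed-rate mortgage."""
    principal = price * (1 - down_payment_pct)
    r = annual_rate / 12            # monthly interest rate
    n = years * 12                  # number of monthly payments
    return principal * r / (1 - (1 + r) ** -n)

# 1959 scenario: $12,788 home, 20 percent down, 5.7 percent interest
print(round(monthly_payment(12_788, 0.20, 0.057)))       # ~59, matching the article's $59 a month

# 2018 scenario: $1.61M home, 20 percent down, 4.55 percent interest
print(round(monthly_payment(1_610_000, 0.20, 0.0455)))   # ~6,564: principal and interest only
```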

A dream of homeownership placed out of reach

That midcentury scenario seems like a financial fantasia to young adults hoping to buy homes today. Finding enough money for a down payment in the face of rising rents and stagnant wages, qualifying for loans in a difficult regulatory environment, then finding an affordable home in expensive metro markets can seem like impossible tasks.

In 2016, millennials made up 32 percent of the homebuying market, the lowest percentage of young adults to achieve that milestone since 1987. Nearly two-thirds of renters say they can’t afford a home.

Even worse, the market is only getting more challenging: The S&P CoreLogic Case-Shiller National Home Price Index rose 6.3 percent last year, according to an article in the Wall Street Journal. This is almost twice the rate of income growth and three times the rate of inflation. Realtor.com found that the supply of starter homes shrinks 17 percent every year.

It’s not news that the homebuying market, and the economy, were very different 60 years ago. But it’s important to emphasize how the factors that created the homeownership boom in the ’50s—widespread government intervention that tipped the scales for single-family homes, more open land for development and starter-home construction, and racist housing laws and discriminatory practices that damaged neighborhoods and perpetuated poverty—have led to many of our current housing issues.

From the front lines to the home front

The postwar boom wasn’t just the result of a demographic shift, or simply the flowering of an economy primed by new consumer spending. It was deliberately, and successfully, engineered by government policies that helped multiply homeownership rates from roughly 40 percent at the end of the war to 60 percent during the second half of the 20th century.

The pent-up demand before the suburban boom was immense: Years of government-mandated material shortages due to the war effort, and the mass mobilization of millions of Americans during wartime, meant homebuilding had become stagnant. In 1947, six million families were doubling up with relatives, and half a million were in mobile homes, barns, or garages, according to Leigh Gallagher's book The End of the Suburbs.

The government responded with intervention on a massive scale. According to Harvard professor and urban planning historian Alexander von Hoffman, a combination of two government initiatives—the establishment of the Federal Housing Administration (FHA) and the Veterans Administration (VA) home loan programs—served as runways for first-time homebuyers.

Initially created during the '30s, the Federal Housing Administration guaranteed loans as long as new homes met a series of standards, and, according to von Hoffman, created the modern mortgage market.

“When the Roosevelt administration put the FHA in place in the ’30s, it allowed lenders who hadn’t been in the housing market, such as insurance companies and banks, to start lending money,” he says.

The VA programs did the same thing, but focused on the millions of returning soldiers and sailors. The popular GI Bill, which provided tuition-free college education for returning servicemen and -women, was an engine of upward mobility: debt-free educational advancement paired with easy access to finance and capital for a new home.

It’s hard to comprehend just how large an impact the GI Bill had on the Greatest Generation, not just in the immediate aftermath of the war, but also in the financial future of former servicemen. In 1948, spending as part of the GI Bill consumed 15 percent of the federal budget.

The program helped nearly 70 percent of men who turned 21 between 1940 and 1955 access a free college education. In the years immediately after WWII, veterans’ mortgages accounted for more than 40 percent of home loans.

An analysis of housing and mortgage data from 1960 by Leo Grebler, a renowned professor of urban land economics at UCLA, demonstrates the pronounced impact of these programs. In 1950, FHA and VA loans accounted for 51 percent of the 1.35 million home starts across the nation. These federal programs would account for anywhere between 30 and 51 percent of housing starts between 1951 and 1957, according to Grebler’s analysis.

Between 1953 and 1957, 2.4 million units were started under these programs, using $3.6 billion in loans. This investment dwarfs the amount of money spent on public infrastructure during that period. (...)

Skewed perspectives

Many of the pressing urban planning issues we face today—sprawl and excessive traffic, sustainability, housing affordability, racial discrimination, and the persistence of poverty—can be traced back to this boom. There’s nothing wrong with the government promoting homeownership, as long as the opportunities it presents are open and accessible to all.

As President Franklin Roosevelt said, "A nation of homeowners, of people who own a real share in their own land, is unconquerable."

That vision, however, has become distorted, due to many of the market incentives encouraged by the ’50s housing boom. In wealthy states, especially California, where Prop 13 locked in property tax payments despite rising property values, the incumbent advantage to owning homes is immense.

In Seattle, the amount of equity a homeowner made just holding on to their investment, $119,000, was more than an average Amazon engineer made last year ($104,000).

In many regions, we may have “reached the limits of suburbanization,” since buyers and commuters can’t stomach supercommutes. NIMBYism and local zoning battles have become the norm when any developers try to add much-needed housing density to expensive urban areas.

In many ways, to paraphrase Roosevelt, we’re seeing a “class” of homeowners become unconquerable. The cost of construction; a shortage of cheap, developable land near urban centers (gobbled up by earlier waves of suburbanization); and other factors have made homes increasingly expensive.

In other words, it’s a great time to own a home—and a terrible time to aspire to buy one.

by Patrick Sisson, Curbed | Read more:
Image: Getty

Michael Pollan: How to Change Your Mind

Before Timothy Leary came along, psychedelic drugs were respectable. The American public’s introduction to these substances was gradual, considered, and enthusiastic. Psilocybin appeared in an article, “Seeking the Magic Mushroom,” in a 1957 issue of Life magazine. The author of this first-person account of consuming mind-altering fungi at a traditional ritual in a remote Mexican village, R. Gordon Wasson, was a banker, a vice president at J.P. Morgan. The founder and editor in chief of Time-Life, Henry Robinson Luce, took LSD with his wife under a doctor’s supervision, and he liked to see his magazines cover the possible therapeutic uses of psychedelics. Perhaps most famously, Cary Grant underwent more than 60 sessions of LSD-facilitated psychotherapy in the late 1950s, telling Good Housekeeping that the treatment made him less lonely and “a happy man.” This wasn’t a Hollywood star’s foray into the counterculture, but part of an experimental protocol used by a group of Los Angeles psychiatrists who were convinced they had found a tool that could make talk therapy transformative. And they had the science—or at least the beginnings of it—to back up their claims.

Then came Leary and his Harvard Psilocybin Project. In his new book, How to Change Your Mind: What the New Science of Psychedelics Teaches Us About Consciousness, Dying, Addiction, Depression, and Transcendence, Michael Pollan recounts how nascent but promising research into the therapeutic uses of psychedelic drugs in the 1950s and early 1960s went off the rails. Leary, with Richard Alpert (who would later rename himself Ram Dass), conducted research in a methodologically haphazard and messianic manner, eventually alienating the university’s administration, who fired them. Leary then went on to become a guru (his term) for the hippie movement, urging America’s youth to “turn on, tune in, and drop out.” LSD came to be associated with the anti-war movement, free love, and a general rejection of Middle American mores, and the authorities no longer looked kindly upon it. By 1970, with the Controlled Substances Act, LSD, psilocybin, peyote, DMT, and mescaline were classified as Schedule I drugs—defined as substances with a high potential for abuse, with no currently accepted medical value in the U.S., and unsafe to use even under medical supervision. For four decades, psychedelics were associated with burnt-out cases shambling around college towns like Berkeley and Cambridge, chromosome damage, and the suicide of the daughter of TV personality Art Linkletter.

Pollan is far from the first person to point out that none of the above characterizations are truthful representations of the most familiar psychedelic drugs. As Purdue’s David E. Nichols wrote in 2016 for the peer-reviewed medical journal Pharmacological Reviews, these drugs are “generally considered physiologically safe and do not lead to dependence or addiction.” There have been no known cases of overdose from LSD, psilocybin, or mescaline. The chromosome scare story turned out to be bogus, Diane Linkletter had a history of depression pre-existing her drug use, and while it’s probably a bad idea for any mentally disturbed person to take a powerful psychoactive drug recreationally, there’s no evidence that psychedelics cause mental illness in otherwise healthy people. To the contrary: After 40 years in the wilderness, psychedelics are once more the subject of serious scientific study, with early results suggesting that the drugs, when used under a therapist’s supervision, can help patients suffering from anxiety, depression, post-traumatic stress disorder, obsessive-compulsive disorder, and both alcohol and nicotine addiction. (...)

How to Change Your Mind includes an account of how various psychedelic drugs found their way into American laboratories and homes, the great hiatus of research into their potential uses after they were outlawed in the ’60s and ’70s, and the “renaissance” of scientific interest in the drugs, beginning in the late 1990s and culminating in several government-funded studies today. Pollan himself was no psychonaut when he became interested in that resurgence. He’d tried psilocybin mushrooms twice in his 20s, then let the remaining stash of fungi molder in a jar in the back of a cabinet; the experience was “interesting” but not something he felt moved to repeat. What drew his attention to the subject later in life were two studies and a dinner party, where a 60ish “prominent psychologist,” married to a software engineer, explained that she and her husband found the “occasional use of LSD both intellectually stimulating and of value to their work.” One of the experiments, conducted at Johns Hopkins, UCLA, and NYU, found that large doses of psilocybin, when administered once or twice to patients who had received terminal cancer diagnoses, helped significantly reduce their fear and depression. The other study, conducted by some of the same researchers, observed the similarities between the results of high doses of psilocybin administered by teams of two therapists and what are commonly described as mystical experiences. The latter are episodes characterized by “the dissolution of one’s ego followed by a sense of merging with nature or the universe.” As Pollan notes, this hardly sounds like news to people accustomed to taking psychedelic drugs, but it marked the first validation of the idea in a rigorous scientific setting.

Further research using fMRI scanners has confirmed the similarity in brain activity between people meditating and people having certain kinds of psychedelic trips. But not all trips are the same, as anyone who has dropped acid can attest. Leary’s one great contribution to the understanding of psychedelics was his emphasis on what has become a mantra for contemporary researchers: set and setting. Set refers to what the person taking the drug expects or is told to expect from the experience, and setting refers to the circumstances of the trip itself: whether the subject is alone or with others, outside or inside, listening to particular kinds of music, wearing an eye mask, receiving guidance from someone they trust, being encouraged to explore ideas and feelings by a therapist, and so on.

Pollan took a couple of research trips himself in the course of writing How to Change Your Mind, with results that are interesting only to the extent that they help him make sense of other people’s accounts of their own journeys. The meat of the book is its chapters on the neuroscience of the drugs and their evident ability to suppress activity in a brain system known as the “default mode network.” The DMN acts as our cerebral executive, coordinating and organizing competing signals from other systems. It is, as Pollan sees it, the “autobiographical brain,” and the site of our ego. The long history of people reporting the sensation of their egos dissolving while under the influence of psychedelics meshes with this interpretation. It’s an experience with the potential to both terrify and, paradoxically, comfort those who undergo it.

Why should this effect prove so helpful to the depressed, addicted, and anxious? As Pollan explains it, these disorders are the result of mental and emotional “grooves” in our thinking that have become, as the DMN’s name suggests, default. We are how we think. The right psychedelic experience can level out the grooves, enabling a person to make new cerebral connections and briefly escape from “a rigidity in our thinking that is psychologically destructive.” The aerial perspective this escape offers doesn’t immediately evaporate either. The terminal cancer patients in the psilocybin study felt lasting relief as a result of the glimpse the drugs gave them of a vista beyond the limitations of their own egos—even the ones who didn’t believe in God or other supernatural forces. (...)

If How to Change Your Mind furthers the popular acceptance of psychedelics as much as I suspect it will, it will be by capsizing the long association, dating from Leary’s time, between the drugs and young people. Pollan observes that the young have had less time to establish the cognitive patterns that psychedelics temporarily overturn. But “by middle age,” he writes, “the sway of habitual thinking over the operations of the mind is nearly absolute.” What he sought in his own trips was not communion with a higher consciousness so much as the opportunity to “renovate my everyday mental life.” He felt that the experience made him more emotionally open and appreciative of his relationships.

by Laura Miller, Slate |  Read more:
Image: Jeannette Montgomery Barron
[ed. See also: Hallucinogenic Drugs as Therapy? I Tried It (Michael Pollan - NY Times)]

What Google is Doing With Your (Free) Data

The ACCC is investigating accusations Google is using as much as $580 million worth of Australians' phone plan data annually to secretly track their movements.

Australian Competition and Consumer Commission chairman Rod Sims said he was briefed recently by US experts who had intercepted, copied and decrypted messages sent back to Google from mobiles running on the company's Android operating system.

The experts, from computer and software corporation Oracle, claim Google is draining roughly one gigabyte of mobile data monthly from Android phone users' accounts as it snoops in the background, collecting information to help advertisers.

A gig of data currently costs about $3.60-$4.50 a month. Given more than 10 million Aussies have an Android phone, if Google had to pay for the data it is said to be siphoning it would face a bill of between $445 million and $580 million a year.
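As a back-of-the-envelope check (my own sketch, not Oracle's or the ACCC's calculation), the annual figure is simply the per-gigabyte price times twelve months times the number of users. With exactly 10 million users the range lands a little below the quoted $445 million to $580 million, so the article presumably assumes a somewhat larger subscriber count.

```python
# Rough check of the quoted annual cost range. Assumed inputs (taken from the
# figures in the article): 1 GB per user per month, 10 million Android users,
# mobile data priced at $3.60-$4.50 per gigabyte per month.

users = 10_000_000
gb_per_user_per_month = 1

for price_per_gb in (3.60, 4.50):
    annual_cost = users * gb_per_user_per_month * price_per_gb * 12
    print(f"${annual_cost / 1e6:,.0f} million per year at ${price_per_gb:.2f}/GB")

# Prints $432 million and $540 million a year, slightly below the article's
# $445-$580 million range, which presumably reflects more than 10 million users.
```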

Google's privacy consent discloses that it tracks location "when you search for a restaurant on Google Maps". But it does not appear to mention the constant monitoring going on in the background even when Maps is not in use.

The Oracle experts say phone owners' data ends up being consumed even if Google Maps is not in use or aeroplane mode is switched on. Nor will removing the SIM card stop it from happening. Only turning off a phone prevents the monitoring, they say.

The information fed back to Google includes barometric pressure readings so it can work out, for example, which level of a shopping mall you are on. By combining this with your coordinates Google knows which shops you have visited.

It can then report to advertisers how often online ads have led to store visits, according to Oracle.

by John Rolfe, Queensland Times |  Read more:
Image: News Corp 
[ed. Australia, but probably everywhere else too. Interesting that Oracle raised this issue. Just trying to be helpful I guess?]

Sunday, May 13, 2018


Fortunato Depero (1892 – 1960)
via:

Kai Piha - History of Waikiki

S&P 500 Should be 1,000-Plus Points Lower

A reversion to the mean in U.S. stock prices could mean the market will fall by at least 20%, according to David Rosenberg of Gluskin Sheff and Associates, who gave his prediction at the Strategic Investment Conference 2018 in San Diego.

Rosenberg, the chief economist and strategist at Toronto-based Gluskin Sheff, said this is one of the strangest securities-market rallies of all time. That’s because all asset classes have gone up, even ones that are inversely correlated.

He thinks a breaking point is a year away, and so investors should start taking precautions now.

Smart money pulls back

This year started off great for investors. The S&P 500 Index hit record highs at around 2,750 points, and stocks had their best January since 1987.

As if that were not enough, Rosenberg pointed out, many Wall Street strategists raised their targets to 3,000. Media coverage extrapolating those record returns only added to investors' unreasonable expectations.

However, increasingly more hedge fund managers and billionaire investors who timed the previous crashes are backing out.

One of them is Sam Zell, a billionaire real estate investor who Rosenberg says is a "hero" of his. Zell predicted the 2008 financial crisis eight months early, but essentially he was right. Today, his view is that valuations are at record highs.

Then we have Howard Marks, a billionaire American investor who is the co-founder and co-chairman of Oaktree Capital Management. He seconds Zell’s view that valuations are unreasonably high and says the easy money has been made.

“And I don’t always try to seek out corroborating evidence. But there are some serious people out there saying some very serious things about the longevity of the cycle,” said Rosenberg.

Big correction coming

Later at the Strategic Investment Conference, Rosenberg shifted from quoting high-profile investors to showing actual data, which paints the same ominous picture.

For starters, Rosenberg pointed out that only 9% of the time in history have U.S. stocks been so expensive.


Then he showed a table with gross domestic product (GDP) growth figures in the last nine bull rallies. This table reveals a dire trend where each subsequent bull rally in the last 70 years generated less GDP growth. Essentially, that means we are paying more for less growth.


According to Rosenberg’s calculations, the S&P 500 should be at least 1,000 points lower than it is today based on economic growth. In spite of this, equity valuations sit at record highs.

Another historically accurate indicator that predicts the end of bull cycles is household net worth’s share of personal disposable income.

As you can see in the chart below, the last two peaks in this ratio almost perfectly coincided with the dot-com crash and the 2008 financial crisis.

Now the ratio is at the highest level since 1975, which is another sign that reversion is near.

What the Fed thinks

As another strong indicator that recession is around the corner, Rosenberg quoted the Federal Reserve Bank of San Francisco. He pointed out that, having access to tons of research, they themselves admit that equity valuations are so stretched that there will be no returns in the next decade:

“Current valuation ratios for households and businesses are high relative to historical benchmarks … we find that the current price-to-earnings ratio predicts approximately zero growth in real equity prices over the next 10 years.”

Basically, the Fed is giving investors an explicit warning that the market will “mean revert.”

by Oliver Garrett, MarketWatch |  Read more:
Images: Haver Economics/Gluskin Sheff
[ed. See also: Why I Think the Stock Market Cannot Crash in 2018]

The Key to Act Two

How do you top life rules? With a life script, that’s how. Here’s an absolutely minimalist 2-step one. Guaranteed to work for 90% of humanity. Across all neurotypes, astrological signs, preferred pronouns, quadrants of the political compass, and Myers-Briggs types. Tested across multiple scenarios, utopian and dystopian, decentralized and centralized. Constructed to be compatible with blockchain futures, rated to survive Category 5 culture wars, and resilient to climate change. Here it is, in picture form first, ready?


And now in words:
First become a key, then go look for a lock.
This script picks up where the first-stage parental booster gives up, at around age 21, marking the beginning of Act 1. The becoming-a-key Act 1 phase lasts 3-21 years. Then there is a bit of an intermission of about 2 years, which for most people is a very confusing, unscripted time, like an inter-airport transfer in a strange foreign city with sketchy-looking shuttle buses that you are reluctant to get on, and long queues at the bathroom.

And then you’re in Act 2, which begins at age 42 on average. In a previous post, I argued that immortality begins at 40. Act 2 is about unlocking the immortality levels of the game of life. The essential truth about Act 2, which you must recognize in order to navigate it well, is this: Unless you make a special effort, you are probably not going to get damaged enough in Act 1 to become a key.

So to work this script, you are going to have to undergo some trials. In double-quick time if you’re already pushing 40.

Karma Hammer

The key-and-lock script derives from an older one that used to be much more popular before the invention of the steam engine: first become a hammer, then go look for a nail.

That wasn’t a very good script. It only worked for Real Men (not all of whom were men, or for that matter, real), and not very well for them. They generally discovered, right after they’d yelled “nailed it!” that they themselves were nails for a bigger hammer poised to descend on them, usually at the poetically perfect moment right after they’d done their own nailing.

This was called karma. Every hammer is a nail for some bigger hammer. People mistake karma for a cyclic view of the universe when it is in fact a recursive view of it.

The rest, which included almost all the women, many of the #notallmen, and all the mis-pronouned, were just lucky if they didn’t get nailed. Every hammer is a nail, but every nail is not a hammer. If you’re keeping track here, the hammer that fells the topmost Putin-class human hammers is called the Grand Void. It was voted the top Marvel supervillain in a recent poll.

But we mostly don’t live in hammer-and-nail societies anymore. We live in lock-and-key societies, soon to be blockchain-flavored.

I’m only about a year and a half into being a key, but looking around at what people older than me are doing, it is clear that they mostly don’t have a clue, or are actively regressing. So I’m making up my own Act 2, just like I did for my own Act 1.

The trick to Act 2 is to recognize that Act 1 was mostly about turning into a key. Even if you didn’t realize that at the time, and thought you were doing other things like Pursuing Happiness, Making Money, Finding the One, or Making a Difference. You were actually acquiring the set of cuts and notches required to be a key.

Which is required to become fully human.

Alienation and Humanization

People think of Act 1 as acquiring "experience," but that's not quite it.

You see, most experience is useless.

At worst it just uses you up like a tank of gasoline. At best it wears you down like a cobblestone getting polished by a million footfalls. What it doesn’t do is turn you into a key. That’s why it is useless.

In return for getting used up or worn down, you get to bank unremarkably unique memories that make you feel increasingly different from everybody else, even as you actually become more indistinguishable from them, and your life converges with their lives, collapsing into a shared indistinguishability, a black hole of high-proximity deep disconnection. The hell of other people where you are unique, just like everybody else.

This is called alienation. Keydom is the exact opposite of that.

The only life experiences that count towards keydom are ones that make your personal story irreversibly fork away from all others, while (and this is the irony of keydom) teaching you something about how you’re actually like everybody else.

This is usually called self-actualization: discovering the most general of truths about the human condition through the most individual of experiences. But I like to call it humanization, because the part that's getting actualized is drawn from your common humanity, not your special freakshow talents.

So like alienation, humanization is defined by irony. Even as you develop an inner capacity for connecting with others, by becoming attuned to similarities rooted in shared humanity, you find yourself separated from others by the process of individually actualizing shared potentialities.

It’s like a butterfly recognizing its kinship with the caterpillar right when it’s crawling out of the pupa, and exclaiming, “hey, we are wonderful little transformer thingies! Anyone can do this” only to find the other caterpillars going, “what’s that fluttery fool talking about, he’s nothing like us.”

With humans though, most stay caterpillars, because they’re too attached to the things that make them unique caterpillars to want to turn into common butterflies.

Here’s a handy pair of definitions laying out the difference.
Alienation is estrangement from others caused by consequential external similarities masking inconsequential inner differences. 
Humanization is estrangement from others caused by inconsequential external differences masking consequential inner similarities.
You’re alone in both conditions, but lonely only in the first.

Alienation is a bunch of kids all getting the same It haircut and hating each other more as a result.

Humanization is a supervillain whispering to a superhero, “we are the same, you and I,” and then getting into the death-struggle anyway.

Superheroes and supervillains are both successful examples of keydom. Key and antikey actually, but let’s talk about failed examples.

Failed Keydom

Keydom isn’t about experiences, it is about notches. And not score-keeping notches, only key-nature notches. If you accrue enough notches (the minimum is 8), you’ll be ready for keydom.

See, life is long, but there’s a catch: the back half of it is defined by constraints so strong, and so depressingly (and expensively) banal, unless you’ve already reshaped yourself into a key shape that fits into it, you’ll basically get jammed and stuck into the keyhole in the door to the rest of your life rather than properly inside it, living it out. So you’ll spend that half wondering what the hell happened.

I mean, look at Stephen Hawking. Health stuff closed in and reduced him to a twitching finger and darting eyes by 30 or so. Fortunately, he’d already ascended to keydom and unlocked his Act 2 levels by then, thanks more to his outlandish trials I suspect, rather than to his freakish genius.

And on the flip side, perfectly healthy people with a lot of wiggle room can fail to turn into keys, and when the constraints close in, they meekly yield, and submit to getting locked out of the rest of their own lives.

So whatever else it unlocks, the main thing unlocked by the key you turn into during Act 1 is your ability to actually inhabit your own life through Act 2.

by Venkatesh Rao, Ribbonfarm |  Read more:
Image: uncredited
[ed. See also: The Door.]

Saturday, May 12, 2018


ITT 1978.
via:

This Week: Kanye West and the Question of Freedom

When elites speak of tribalism, we tend to think we’re somehow above it. After all, we have educated minds that have developed the intellectual muscles to resist coarser loyalties, have we not? We value unique individuals over the amorphous group. We like to think we can see complexity and nuance rather than wallowing in coarse Manichean ideas, articulated by demagogues, that divide the world into “us” and “the other.” It’s the unthinking masses who do that. Not us. Unlike them, we are aware of the dangers of this temptation, alert to its irrationality. We resist it.

Except, quite often, we don’t. In fact, in our current culture, it’s precisely the elites who seem to be driving tribal identity and thought, and doubling down on ideological and affectional polarization. In another must-read column, Tom Edsall of the New York Times lays out the academic literature that reveals what is already in front of our noses. Professor Lilliana Mason has a new book that deals with this, Uncivil Agreement. She emailed Edsall: “The more highly educated also tend to be more strongly identified along political lines.” He quoted from her book:

Political knowledge tends to increase the effects of identity as more knowledgeable people have more informational ammunition to counterargue any stories they don't like.

Edsall also pointed to the findings of a 2016 Pew Research Center study:

Much of the growth in ideological consistency has come among better educated adults — including a striking rise in the share who have across-the-board liberal views, which is consistent with the growing share of postgraduates who identify with or lean toward the Democratic Party.

And so our elite debate has become far less focused on individual issues as such, and the complicated variety of positions, left, right and center, any thinking individual can take. It has become rather an elaborate and sophisticated version of "Which side are you on?", a question that has increasingly sorted the electorate in the past twenty years.

But even this doesn’t capture the emotional intensity of it all, or the way it compounds over time. Remember how you felt the day after Trump was elected? You aren’t alone:

In their 2015 paper, “Losing Hurts: The Happiness Impact of Partisan Electoral Loss,” the authors found that the grief of Republican partisans after their party lost the presidential election in 2012 was twice that of “respondents with children” immediately after “the Newtown shootings” and “respondents living in Boston” after “the Boston Marathon bombings.”

That’s an intense emotion, and it’s that intensity, it seems to me, that is corroding the norms of liberal democracy. It has been made far, far worse by this president, a figure whose election was both a symptom and a cause of this collective emotional unraveling, where the frontal cortex is so flooded by tribal signals that compromise feels like treason, opponents feel like enemies, and demagogues feel like saviors. Instead of a willingness to disagree and tolerate, there is an impulse to loathe and expel. And this is especially true with people we associate with our own side. Friendly dissidents are no longer interesting or quirky; as the stakes appear to rise, they come to seem dangerous, even contagious. And before we even know it, we live in an atmosphere closer and closer to that of The Crucible, where politics merges into a new kind of religious warfare, dissent becomes heresy, and the response to a blasphemer among us is a righteous, metaphorical burning at the stake.

I think that’s the real context for understanding why magazines and newspapers and websites of opinion are increasingly resistant to ideological diversity within their own universes. It’s why when RedState decided it needed to fire some staffers, only the anti-Trump ones were canned. It’s why a banal neocon like Bret Stephens caused many readers to cancel their subscriptions to the New York Times when he questioned climate change, why Twitter feels like a daily auto-da-fé, why controversial campus speakers need extraordinary security on the few occasions they are invited to speak at universities, why the National Review has found itself shifting from “Never Trump” to almost always “Anti-Anti-Trump,” why some are calling for a purge of conservative voices in elite journalism, and why Bari Weiss deploys the phrase “Intellectual Dark Web” to describe a variety of non-tribal thinkers who have certainly not been silenced, but have definitely been morally anathematized, in the precincts of elite opinion.

The dynamic here is deeply tribal. It’s an atmosphere in which the individual is always subordinate to the group, in which the “I” is allowed only when licensed by the “we.” Hence the somewhat hysterical reaction, for example, to Kanye West’s recent rhetorical antics. I’m not here to defend West. He may be a musical genius (I’m in no way qualified to judge) but he is certainly a jackass, and saying something like “slavery was a choice” is so foul and absurd it’s self-negating. I don’t blame anyone for taking him down a few notches, as Ta-Nehisi Coates just did in memorable fashion in The Atlantic. He had it coming. You could almost say he asked for it.

But still. And yet. There was something about the reaction that just didn’t sit right with me, something too easy, too dismissive of an individual artist’s right to say whatever he wants, to be accountable to no one but himself. It had a smack of raw tribalism to it, of collective disciplining, of the group owning the individual, and exacting its revenge for difference. I find myself instinctually siding with the independent artist in these cases, perhaps because I’ve had to fight for my own individuality apart from my own various identities, most of my life. It wasn’t easy being the first openly gay editor of anything in Washington when I was in my 20s. But it was harder still to be someone not defined entirely by my group, to be a dissident within it, a pariah to many, even an oxymoron, because of my politics or my faith. (...)

And so I bristle at Ta-Nehisi’s view that West cannot be a truly black musician and a Trump admirer, based on the logic that the gift of black music “can never wholly belong to a singular artist, free of expectation and scrutiny, because the gift is no more solely theirs than the suffering that produced it … What Kanye West seeks is what Michael Jackson sought — liberation from the dictates of that we.”

I bristle because, of course, Coates is not merely subjecting West to “expectation and scrutiny” which should apply to anyone and to which no one should object; he is subjecting West to anathematization, to expulsion from the ranks. In fact, Coates reserves the worst adjective he can think of to describe West, the most othering and damning binary word he can muster: white. Just as a Puritan would suddenly exclaim that a heretic has been taken over by the Devil and must be expelled, so Coates denounces West for seeking something called “white freedom”:

… freedom without consequence, freedom without criticism, freedom to be proud and ignorant; freedom to profit off a people in one moment and abandon them in the next; a Stand Your Ground freedom, freedom without responsibility, without hard memory; a Monticello without slavery, a Confederate freedom, the freedom of John C. Calhoun, not the freedom of Harriet Tubman, which calls you to risk your own; not the freedom of Nat Turner, which calls you to give even more, but a conqueror’s freedom, freedom of the strong built on antipathy or indifference to the weak, the freedom of rape buttons, pussy grabbers, and fuck you anyway, bitch; freedom of oil and invisible wars, the freedom of suburbs drawn with red lines, the white freedom of Calabasas.

Ta-Nehisi’s essay has sat with me these past few days, as a kind of coda to the place we now find ourselves in. Leave aside the fact that the passage above essentializes and generalizes “whiteness” as close to evil, a sentiment that applied to any other ethnicity would be immediately recognizable as raw bigotry. Leave aside its emotional authenticity and rhetorical dazzle. Notice rather that the surrender of the individual to the we is absolute. That “we” he writes of doesn’t merely influence or inform or shape the individual artist; it “dictates” to him. And it’s at that point that I’d want to draw the line. Because it’s an important line, and without it, a liberal society is close to impossible.

I understand that the freedom enjoyed by a member of an unreflective majority is easier than the freedom of someone in a small minority, and nowhere in America is that truer than in the world of black and white. I understand that much better for having read so much of Ta-Nehisi Coates’s work. I even feel something similar in a different way as a gay man in a straight world, where the general culture is not designed for me, and the architecture of a full civic life was once denied me. But that my own freedom was harder to achieve doesn’t make it any less precious, or sacrosanct. I’d argue it actually makes it more vivid, more real, than it might be for someone who never questioned it. And I am never going to concede it to “straightness,” the way Coates does to “whiteness.”

As an individual, I seek my own freedom, period. Being gay is integral to who I am, but it doesn’t define who I am. There is no gay freedom or straight freedom, no black freedom or white freedom; merely freedom, a common dream, a universalizing, individual experience. “Liberation from the dictates of the we” is everyone’s birthright in America, and it is particularly so for anyone in the creative fields of music or writing. A free artist owes nothing to anyone, especially his own tribe. And if you take the space away from him to be exactly what he wants to be, in all his contradictions and complexity, you are eradicating something critical to a free and healthy society. You are devouring the individual in favor of the mob. You are reducing a kaleidoscope to black and white.

And notice that in Ta-Nehisi’s essay, two concepts — freedom and music — that have long been seen as universal, transcending class or race or gender or any form of identity toward an idea of the eternally human or even divine — are emphatically tribalized and brought decisively down to earth. Freedom, in this worldview, does not and cannot unite Americans of all races; neither can music. Because there is no category of simply human freedom possible in America, now or ever. There is only tribe. And the struggle against the other tribe. And this will never end.

And that, of course, is one of the most dangerous aspects of our elite political polarization: It maps onto the even-deeper tribalism of race, in an age when racial diversity is radically increasing, and when the racial balance of power is shifting under our feet. That makes political tribalism even less resolvable and even more combustible. It makes a liberal politics that rests on a common good close to impossible. It makes a liberal discourse not only unachievable but increasingly, in the hearts and minds of our very elites, immoral. The promise of Obama — the integrating, reasoned, moderate promise of incremental progress — has become the depraved and toxic zero-sum culture of Trump. Empowered and turbocharged by the mob dynamics of social media, we have all become enmeshed in it. (...)

It’s only a decade ago, but it feels like aeons now. The Atlantic was crammed with ideological opposites then, jostling together in the same office, and our engagement with each other and our readerships was a crackling and productive one. There was much more of that back then, before Twitter swallowed blogging, before identity politics became completely nonnegotiable, before we degenerated into these tribal swarms of snark and loathing. I think of it now as a distant island, appearing now and then, as the waves go up and down. The riptide of tribalism can capture us all in the end, until we drown in it.

Haspel’s Lack of Accountability

I watched most of the hearings this week on the nomination of Gina Haspel to be director of the CIA. You can forget, I think, that this was potentially a rare moment in American public life, a moment when we could actually, finally, hold someone in power accountable for the war crimes of the past, and someone really responsible, someone directly in the line of command. But as I watched the proceedings, I could almost feel that opportunity slipping away.

The old euphemisms — “enhanced interrogation techniques” — were hauled out, as if they weren’t now absurd on their face. Haspel was asked whether torture in the abstract works and said no; but she refused to concede that she had authorized torture herself; she dodged the question of whether she believed that the torture she was directly complicit in was even immoral; she exhibited no remorse — just regret that torture had drawn attention away from the good work the CIA was simultaneously doing. She even refused to make a distinction between the decent intelligence we managed to get via conventional interrogation and the cavalcade of lies that torture produced.

Haspel could have expressed some sense of the gravity of this issue; she could have owned the crimes, while pledging never to repeat them. Nominated to work under a president who has demanded personal loyalty of his appointees, even in the FBI and Justice Department, and who has championed even worse forms of torture than Haspel presided over, she could have emphatically insisted that she would refuse an illegal and immoral order in the future, even if she did no such thing in the past.

But she wouldn’t and couldn’t. We found out nothing new about her role — whether she personally supervised torture, whether she even advocated for it, whether she witnessed it firsthand, or ever resisted it as many others did in the grisly gulags the U.S. set up across the world. She pretended that the absolute legal restriction on the use of torture at any time in any place for any reason was unknown to her. She insisted that her moral compass was strong, when of course the plain facts of the matter reveal it to be nonexistent. She gave them not an inch.

If a public servant in a liberal democracy cannot state without reservation that torture is immoral, then she shouldn’t be confirmed in any position of authority. If there is no bright line here, there are no lines anywhere. I listened and watched her impassive expression closely as she went through the motions of minimal accountability. I only wish Hannah Arendt were around to describe it.

by Andrew Sullivan, New York Magazine |  Read more:
Image: Theo Wargo/WireImage
[ed. So.. anything else happen this week?]

Significant Effect

Following the longest teacher strike in West Virginia’s history, the state’s educators won a 5 percent pay raise. The much-needed hike lifted spirits and helped spark walkouts around the country, but the larger political implications of the increase in teacher activism are still unclear.

Are lawmakers who opposed the teacher movement going to pay a political price? Will politicians who stood with them be rewarded?

Republican state Sen. Robert Karnes thought he knew the answer to that. He’s a longtime political foe of the state’s unions — he once referred to union members who were assembled in the legislative gallery as “free riders” as he advocated for right-to-work legislation. During the teacher strike, he had complained that they were holding kids “hostage.”

In late March, he told a local newspaper that he couldn’t imagine there would be much political fallout from the strikes.

“I can’t say that it will have zero effect, but I don’t think it’ll have any significant effect because, more often than not, they probably weren’t voting on the Republican side of the aisle anyways,” he said of the state’s teachers.

On Tuesday, they did just that. And Karnes lost re-election.

Labor activists, it turns out, know how to get involved on the Republican side of the aisle, too. Karnes was facing a primary challenge from fellow Republican Delegate Bill Hamilton, who beat him, with all the votes counted, 5,787 to 3,749. It was a blowout.

Hamilton is a moderate Republican who opposes right-to-work and was sympathetic to the teacher strikes, breaking with those in his party who wanted to offer only a smaller raise.

Unions responded by heavily investing in his campaign; he raised over $10,000 of his $53,850 haul from organized labor. (...)

The win followed a surprising strategy because, as Karnes assumed, organized labor is traditionally aligned with Democrats and participates in Democratic primaries far more often than GOP primaries.

Edwina Howard-Jack, a high school English teacher and Indivisible activist in Upshur County, the area that Karnes represents, told The Intercept that some labor activists were concerned that after the strike wound down, teachers would be less active in politics. But Karnes’s defeat proved to her that they are still a potent force.

“I think that teachers showed their political power in the primary,” she told The Intercept. “Teachers showed up and they were voting in their 55 united, 55 strong shirts. … Once the results started rolling in, it was phenomenal. Teachers were really empowered to say, if we stick together we can make a difference.”

“I heard one teacher today say … after yesterday they may want to think twice about arming teachers,” she joked. She told The Intercept that a number of teachers chose a nonpartisan affiliation so they could vote in the Republican primary on Tuesday; under West Virginia’s rules, you either have to belong to the party or be an unaffiliated voter in order to vote in the primary.

by Zaid Jilani, The Intercept | Read more:
Image: Craig Hudson/Charleston Gazette-Mail via AP
[ed. The power of solidarity (and unions). I find it interesting (and disheartening) that not a single Democrat in Congress found the time to march with these teachers in their protests. See also: As West Virginia Strike Winds Down, Angry Teachers Look to Bolster Progressives in Elections and Teachers Threaten Strike in Arizona Over Low Pay, As Corporate Tax Breaks Constrain Budget and Colorado Teachers Are Mad as Hell.]

Barnes & Noble: The Final Chapter?

To the casual observer Barnes & Noble in Manhattan’s Union Square seemed to be doing everything right last Thursday lunchtime: displays heaving with books; customers milling around; every table at the Starbucks on the third floor taken by customers; a creche full of excited children; magazine racks browsed.

But appearances can be deceptive. America’s largest bookseller is in trouble. A quick chat with the “customers” suggests one reason why.

“I get a coffee, take a seat, read the latest magazines,” said a man who gave his name as Buddy. Asked if he planned to purchase the car and engineering titles he was holding, Buddy replied flatly: “No.”

Perhaps this is what it means to be a bricks-and-mortar retailer in 2018. It’s a feelgood customer experience and a showcase for online purchasing – but the sound of cash registers ringing? Not so much.

Last week, Barnes & Noble, the largest book retailer in the US, saw its stock price plunge nearly 8% just days after the New York Times published an editorial calling for the chain to be saved. “It’s depressing to imagine that more than 600 Barnes & Noble stores might simply disappear,” wrote columnist David Leonhardt. “But the death of Barnes & Noble is now plausible.”

Once the dominant player in US book retailing, the chain, which ironically in its time put countless private, neighborhood booksellers out of business, is suffering as the new big beast, Amazon, swallows its business.

Sales have been on the slide for 11 years; even online sales have fallen. Over the past five years, the company has lost more than $1bn in value. Dozens of stores have closed. A shake-up in February resulted in the loss of 1,800 full-time jobs.

If Barnes & Noble closes it will mark the death of the last major book chain in the US, leaving the field open to Amazon, which sells one out of every two books in the country, according to analysts. Closure is also likely to hurt publishers, who will become even more heavily reliant on Amazon. Big swaths of America will be left without a major bookstore.

It’s not that Barnes and Noble hasn’t tried to innovate – “it’s been very creative in staying alive and surviving into today’s Walmart-and-Amazon dominated society,” said one employee, pointing to games and toys as one area of expansion. The company also pushed into technology, spending heavily to launch and market the Nook e-reader to compete with Amazon’s Kindle.

But arguably innovation is where Barnes & Noble went wrong. Other big booksellers have tackled Amazon’s onslaught by doing precisely the opposite – going back to basics and putting the books first. (...)

“People may drop in for a browse but they won’t make a dedicated trip to a bookstore,” Saunders says. “They don’t have the need and they don’t have the time. The way people shop changed, and that’s been detrimental for Barnes & Noble.”

Saunders said Barnes & Noble also slipped up with the Nook. It’s reported that the company lost $1.3bn on the device hoping to replicate Amazon’s success selling content for the Kindle. “That was a massive distraction for Barnes & Noble that should now be abandoned,” Saunders said.

“If you’re only using it to sell books, and there are a lot of other competitors in the market, it’s not a sensible strategy.” (The company withdrew the device in the UK in 2016).

Nor, he continued, does it make sense to add new merchandise. “The stores just look like an enormous Aladdin’s cave of all sorts of random products, including departments selling CDs and DVDs that are never crowded. The stores themselves are too large for what they need and in the wrong locations.”

by Edward Helmore, The Guardian |  Read more:
Image: Atlantide Phototravel/Getty Images

Friday, May 11, 2018

Gary Numan

On Loving Flawed Family Members

My maternal grandfather served in both world wars, too young for the first, too old for the second. In the Pacific he contracted malaria, from which he never fully recovered. What killed him, though, was emphysema, the result of breathing leather dust (he was a glove cutter) and smoking cigarettes. In other words he was poisoned on the job and also poisoned himself. The last years of his life he spent hooked up to an oxygen tank, gasping, his chest convulsing violently, as if it contained a trip-hammer. When my mother and I moved to Arizona from upstate New York to begin what we imagined would be new lives, I think we both understood that we were absolving ourselves of the duty of being present when his abused heart finally gave out.

He’d bought the house we shared—my mother and I on the top floor, he and my grandmother on the bottom—so we’d have a place to live after my parents split up. Having himself grown up in a disorderly home, he prized order. Our lawn was mowed and edged in summer, our leaves raked and disposed of in autumn, our sidewalks shoveled in winter, our house repainted at the first sign of flaking. The clothes he wore were never expensive or showy, but they were always clean and, thanks to my grandmother, crisply ironed. He always hiked his trouser legs an inch or two at the knee before sitting down, the first human gesture I can recall imitating. Other gestures of his I’ve imitated my whole life and been the better man for it. I loved him with my whole heart and love him still.

That said, I don’t imitate everything about him. During the Civil Rights Movement, I remember him making fun of a young black mother on the news when she complained about “not even having enough money to feed my little babies!” A natural mimic, his impersonation was spot-on and devastating. Had he been asked to explain his lack of sympathy for the woman’s plight, her hungry kids, I doubt he would’ve mentioned her race, and in his defense he was equally merciless in his imitations of white southern lawmen and politicians. But there can be no question he was stereotyping her. There would’ve been no doubt in his mind that the kids in question all had different fathers and that producing more hungry kids was her only life skill. On the basis of this one anecdote, it would be hard to argue that the man I loved and love still was not racist. But I also remember the afternoon he ordered off his front porch a neighbor who was circulating a petition to keep a black family from buying a house on our street, explaining that this was America and we didn’t do things that way here. He must’ve seen how many names were already on the petition and known how many of his neighbors had accepted the man’s specious argument—that it wasn’t about these particular people and whether or not they were decent and hardworking, but rather a question of property values. If you let this family in, where do you draw the line?

Where my grandfather drew it was right there, on our front porch, just one short step from the top.

My father drew lines as well.

“Well,” he said, finally waking up and rubbing his eyes with his fists. “No need to tell me where we are.”

Out late the night before, he’d slept most of the way to Albany. I’d just returned home from the university and next week would start working road construction with him. Before that could happen, I had to check in at the union hall where he and I were members. At the moment we were stopped at a traffic light in a predominantly black section of the city.

“Please,” I begged him, because of course I knew where this was headed.

“You’re telling me you can’t smell that?”

On more than one previous occasion my father had claimed he could smell black people. Their blackness. Whether they were clean or dirty made no difference. Race itself, he claimed, had an odor.

“You’re sure it isn’t poverty you’re smelling?” I ventured.

“Yep,” he said. “And so are you. You just won’t admit it.”

Mulignans, he called them, the Italian word for eggplant (“Ever see a white one?”). The irony was that by the end of August, after a summer of working in the hot sun, his own complexion would be darker than most light-skinned blacks. Certainly as dark as Calvin’s. When my father was spouting racist nonsense, I’d often remind him that one of his best friends was black, an incongruity that was not lost on Calvin either. Indeed, in a playful mood he would sometimes put his forearm up next to my father’s for comparison’s sake. “Except for the smell,” he’d say, grinning at me, if I happened to be around, “you can’t tell us apart.” One drunken night my father had apparently shared with Calvin his theory of smell.

Another story. It’s a few years later and my father and I are driving a U-Haul across country from Tucson, Arizona, to Altoona, Pennsylvania, to my first academic job at a branch campus of Penn State. I’m pushing 30, married, a father myself now, and broke. The plan is for my wife, who is pregnant with our second daughter, to join me later in the summer. My father is now in his fifties, but lean and strong from a lifetime of hard labor, his black Brillo Pad hair only just beginning to be flecked with gray. He’s a D-Day guy. Bronze star. A genuine war hero. That he’s not prospered in the peace, as so many returning vets have, doesn’t seem to trouble him. That he’s alive and kicking seems enough. I myself am soft by comparison, soft in so many ways. Thanks to a series of deferments and then a high draft number, I’ve managed to stay out of Vietnam. My father’s opinion of that clusterfuck war was pretty much the same as mine, but I know it troubles him that I stayed home when others of my generation served and came back, like him, profoundly changed. But the “conflict” is finally over and I’m alive and I know he’s glad about that.

His war we’ve never talked about, not due to any lack of interest on my part, but because men like my father and grandfather simply didn’t. Is it the realization that, with Vietnam over, I will probably never experience war first hand that starts him talking today? Or just the fact that we’ve been cooped up in the cab of that U-Haul truck for so long? The worst was the Hedgerows, he begins, surprising me. (Not Normandy? Not the Hürtgen Forest?) Every time you turned around, there were more Germans stepping out from behind the hedges, hands in the air, wanting to surrender. (He slips unconsciously into present tense now, suddenly more in France than here in the cab of the truck.) We’re driving, going flat out, miles and hours ahead of our supply train. Maybe days, for all we know. Here come another seven Germans, hands in the air. Then nine more. Another mile up the road, more hedgerows, more surrendering Germans. A dozen this time, maybe two. Hands in the air and guns on the ground at their feet. What do you do with them? You can’t take them with you. You can’t leave them behind because who knows? Maybe they take up their weapons again, and now they’re behind you, these same guys that have been shooting at you since Utah Beach.

Did I ask the begged question? I don’t remember, but anyway, no need. He’s going to tell me. It’s the point of the story. And maybe the point of my not serving in Vietnam. In any company, he tells me, his voice thick, there’s always one who doesn’t mind taking these guys down the road.

Down the road?

Right, he says. Around the bend. Out of sight. If you don’t see it, it didn’t happen. None of your business. Your business is up ahead.

And that was pretty much all my father had to say about the second world war. And the one I managed to avoid.

“It occurs to me that I am America,” Allen Ginsberg wrote.

The same thing occurs to me. I’m proud, like my grandfather and father, but also ashamed. I write this the week after young white men waving Nazi flags and members of the KKK and conspiracy-theory-stoked militiamen converged on Charlottesville and were not unambiguously denounced by the president of the United States. Is this the country my father and grandfather fought for? I ask because the shame I felt seeing those swastikas on display in Charlottesville was deeply personal, a betrayal of two men I loved, who at their best were brave men and good Americans and at their worst were far, far better than these despicable, pathetic, deluded fuckwads.

My father and grandfather both believed, and not without justification, that America was the light and hope of the world. They also believed, with perhaps less justification, in me. Okay, not me, exactly, but the possibility inherent in my existence, in this time and this place, which, not coincidentally, is how I feel about my own children and grandchildren. Like my grandfather and father, I don’t demand or expect perfection in those I love. But I do hope that when their neighbor climbs the porch steps, petition in hand, my children and grandchildren will say, as my grandfather did, “That’s not the way we do things here. Not in America.” And I want them to know about the day when my father, in an uncharacteristically serious mood, took me aside and said, “Listen up, Dummy.” (Yeah, Dad?) “You’re ever in a tough spot? You need somebody you can trust? Go to Calvin.” I want them to understand that in the final analysis, as far as my father was concerned, Calvin wasn’t black. He was Calvin. I want them to understand that even though you couldn’t talk him out of the idea that black people had an odor and he held the entire race in low esteem, he made exceptions, as many as were necessary, in fact, and there were many. He preferred black men who worked hard to white men who didn’t. Like Whitman, he didn’t trouble himself about contradictions.

by Richard Russo, LitHub |  Read more:
Image: uncredited
[ed. See also: The Risk Pool: Growing Up 'Pretty Near the Edge']

The Best Jarred Pasta Sauce There Ever Was

We have a complicated relationship with jarred pasta sauce. Between the no-cook, quick stovetop, and easy butter-roasted options, making your own tomato sauce is pretty easy. But sometimes, you just can’t bring yourself to make sauce. Your day was horrible. Your week was horrible. Your year was horrible. You don’t need to explain. We get it. In that case, we reach for a jar. But not just any jar. We have a pretty loyal relationship with Rao’s Marinara Sauce and firmly believe that it’s the best jarred pasta sauce around. Everyone who works here has daily Rao’s rituals. Songs. Sayings. Handshakes. It’s a cult.

OK, we’re not really in a cult. But we do think Rao’s is the best. Generally speaking, jarred pasta sauces are pretty terrible. Sometimes that’s because of added preservatives or coloring. But mostly, it’s because they taste too sweet, too salty, or a combination of the two.

But Rao’s is none of those things. Rao’s is a pasta sauce that falls in line with the sauces we’d cook, and it starts with the ingredients. Rao’s uses high-quality tomatoes and olive oil, without any added preservatives or coloring. The rest of the ingredients won’t surprise you: salt, pepper, onions, garlic, basil, and oregano. You know, the stuff you’d expect to be in tasty marinara. And the biggest omission from that list is added sugar. Which already tells you something about how it tastes. (...)

The sauce tastes...homemade. Yeah, we’d be pretty happy if we made this sauce. It's the kind of sauce we want to spoon over pasta and dip stromboli or calzones in. Compared to other national brands, Rao’s sits on an entirely different plane.

While all of these flavors and ratios are important, there’s one ingredient that sticks out more than the rest. In fact, you can tell it’s there immediately. Rao’s uses a large amount of olive oil. You can see it sitting right on top of the sauce, before you even crack open the jar. We love that commitment to the fatty, olive oil-y side of the sauce. That fat is the key to a fantastic, well-rounded marinara.

by Alex Delany, Bon Appetit/Basically |  Read more:
Image: Mathew Zucker/Rao's
[ed. See also: Welcome to Rao’s, New York’s Most Exclusive Restaurant]