Sunday, December 16, 2018


Andrew Wyeth (1917-2009) The Corner
via:

Am I ‘Old’?

A few years ago at a college reunion, I listened transfixed as the silver-haired philanthropist David Rubenstein urged us “to accelerate” as we entered the last chapters of our lives. Pick up the pace? So many of my contemporaries were stopping — if not stooping — to smell the roses.

With his admonition in mind, I recently spoke with Mr. Rubenstein, now 69, and asked him if he considers himself old. “Sixty-nine seems like a teenager to me,” he replied. Coincidentally, just a few days earlier, a 68-year-old poet I know, in between surgeries to help her mend after a fall, told me point blank, “I am an old lady now.”

What makes one sexagenarian identify as old when another doesn’t? And what is “old,” anyway?

Having turned 61, I find this question very much on my mind — and it's likely to be on the minds of the 70 million baby boomers who are 50-plus (yes, even the tail end of the boom is now “middle-aged” or “old”). Dinner conversations are now hyper-focused on how to stay young, or at least delay old.

Certainly the definition of “old” is changing, as life spans have grown longer. “Someone who is 60 years old today is middle-aged,” said Sergei Scherbov, the lead researcher of a multiyear study on aging. When does old begin? I asked.

Dr. Scherbov says for Americans, it’s roughly 70 to 71 for men and 73 to 74 for women, though, as he has written, “your true age is not just the number of years you have lived.”

“The main idea of the project,” he told me, “is that an old age threshold should not be fixed but depend on the characteristics of people.” Factors such as life expectancy, personal health, cognitive function and disability rates all play a role, he said, and today’s 65-year-old is more like a 55-year-old from 45 years ago.

As with beauty, the meaning of “old” also depends on the person you ask. Millennials, now in their 20s and 30s, say that old starts at 59, according to a 2017 study by U.S. Trust. Gen Xers, now in their 40s — and no doubt with a new appreciation for just how close they are to entering their 50s — say 65 is the onset of old. Boomers and the Greatest Generation pegged 73 as the beginning of old. Clearly, much depends on the perspective of who’s being asked to define “old.”

To that very point, I was curious to see how my friends who are 50-plus defined old — and asked them on Facebook. Among the dozens of responses, two made me smile: “Old is my current age + 4.” And this: “Tomorrow. Always tomorrow. Never today.” Perhaps the one most difficult to hear: “When you get called ‘ma’am’ instead of ‘miss.’” (That will never happen to me, although I’m constantly called “sir” these days.)

Other friends pointed to various physical milestones as the visible line in the sand. A colleague posted: “When you can’t jog a 15-minute mile.” Another friend said, “When I have to stop playing tennis.” Others ominously noted cognitive benchmarks: “When you stop being interested in new information and experiences.” Many focused on “memory issues” as defining the onset of old.

The bottom line: “old” is subjective, a moving target.

That’s why David Rubenstein, 69, the board chairman of both the Kennedy Center for the Performing Arts and the Smithsonian Institution and co-founder and co-executive chairman of the Carlyle Group, can claim he’s not old, while my poet friend, a year younger than he is, refers to herself as old. Recently, because of problems getting around, she had to bring in a home health aide for assistance, only deepening her dependence on others. Indeed, as Dr. Scherbov discovered, loss of independence and mobility are among the characteristics that define “old.”

For his book “Healthy Aging,” Dr. Andrew Weil, now 76, asked people to list attributes associated with “old.” Among those most frequently cited: ancient, antiquated, dated, dried up, frail, passé, shriveled, used up, useless, withered, worthless and wrinkled. Nice stereotypes, huh?

“Negative ageist attitudes toward older people are widespread,” a 2015 World Health Organization survey confirmed. Nearly two-thirds of the respondents — 83,000 people of all ages in 57 countries — did not respect older people, with the lowest levels of respect reported in high-income countries like the United States. Even more damning: These views adversely “affect older people’s physical and mental health.”

by Steven Petrow, NY Times |  Read more:
Image: Stuart Bradford
[ed. I still feel like a teenager (although my body tells me otherwise). In Hawaii, younger people often address older folks as "Uncle" (older women as "Auntie"). But when they start calling you "Papa san" well, you know you're probably getting pretty old. See also: Retiring Retirement (Nautilus).]

War on Cash: State and City Governments Push Back

In a Q and A with New York City council member Richie J. Torres, Grub Street notes that in addition to New Jersey, politicians in some eastern cities – including New York City, Philadelphia, and Washington, D.C. – are also mulling restrictions on cashless stores. Another recent Grub Street piece, “More Restaurants and Cafés Refuse to Accept Cash — That’s Not a Good Thing” (“Just because you don’t have a piece of plastic, you can’t get a sandwich?”), describes the cashless trend in more detail.

Torres regards cash bans as both classist and racist:
Why do you think cashless business models “gentrify the marketplace”? 
On the surface, cashlessness seems benign, but when you reflect on it, the insidious racism that underlies a cashless business model becomes clear. In some ways, making a card a requirement for consumption is analogous to making identification a requirement for voting. The effect is the same: It disempowers communities of color. 
These are public accommodations. The Civil Rights Act established a framework for prohibiting discrimination in matters of housing, employment, and public accommodations. If you’re intent on a cashless business model, it will have the effect of excluding lower-income communities of color from what should be an open and free market. 
And we’ll start to attach a certain stigma to people who pay for things with cash? 
Exactly, in the same way that one might stigmatize [Electronic Benefit Transfer (EBT)] cards. When I was growing up, I remember the embarrassment that surrounded the use of food stamps. We live in a society where it’s not enough to stigmatize poverty; we are also going to stigmatize the means with which poor people pay for goods and services.
More Consumers Abandon Cash

Despite these pushback measures, last week the Pew Research Center reported in “More Americans are making no weekly purchases with cash” that roughly 29% of US adults say they make no purchases using cash during a typical week – up from 25% in 2015. At the same time, the share of those who claim to make all or almost all of their weekly purchases with cash has dropped from 25% in 2015 to 18% today. (...)

As Figure 2 makes clear, declining use of cash correlates heavily with income. Adults with an annual household income of $75,000 or more are more than twice as likely as those earning less than $30,000 a year to eschew cash purchases in a typical week (41% vs. 18%). Meanwhile, more than four times as many lower-income Americans as higher-income Americans report making all or almost all of their purchases with cash (29% vs. 7%).


Last week, the Washington Post noted in “The global cashless movement meets its foe: Local government” that one reason for the higher reliance of lower-income Americans on cash is their restricted access to financial services:
According to FDIC estimates, 6.5 percent of American households were unbanked in 2017, meaning they did not have an account with an insured financial institution. Another 18.7 percent of households in the United States have a checking or savings account but still relied on financial services outside of a traditional bank — such as payday loans or check-cashing businesses — the estimate showed.
Over to Grub Street and Torres again for a trenchant summary of the main issue:
What do you make of the claim, “But these days everyone has a card!” 
People who say that are living in a bubble of privilege — they look around and all their friends have cards. In response I say, “Does it occur to you that your world is pretty unrepresentative?” There are hundreds and thousands of New Yorkers who may have no permanent address or home, and many New Yorkers who are underbanked, either because of poverty or because they lack documentation. Requiring a card is erecting a barrier for low-income New Yorkers — period — and it’s coming from the very communities that claim to be progressive, as if, “Well, I am all for racial justice just so long as it doesn’t come at the expense of my own privilege.” 
I think that many of these places actively want to keep a certain type of person out. 
Of course! Earlier I said that no matter what the intention was, its effect is discriminatory, but I do think that it can also be intentional where the idea is to filter out the deplorables.
Even advocates of cashless transactions concede critics have a point – but reject the stark conclusion that the purpose of cashless policies is to exclude certain types of customers.

by Jerri-Lynn Scofield, Naked Capitalism | Read more:
Image: Pew Research Center
[ed. Not only that, but in an emergency if the electrical grid goes down the only thing that works is cash (no ATMs). Also, it's harder for government (and banks) to do funny things with your money if it's not just in bits and bytes.]

How the Seahawks Dismantled the Legion of Boom and Still Thrived

The Seahawks are this year’s surprise outfit. It feels like a long, long time since Seattle went through the will-they-won’t-they Earl Thomas dating game; since John Schneider and Pete Carroll detonated the Legion of Boom era and kicked Richard Sherman and Michael Bennett to the curb; since Cliff Avril and Kam Chancellor were forced to retire; since Thomas flipped off his own sideline in an act of understandable insubordination.

Seattle entered the season with few expectations. Vegas odds placed their chances of a Super Bowl a hair ahead of the Browns, and any Seahawks discussion elicited a shrug. Unless, of course, you wanted to talk about the glory days and how different (read: boring) this year was going to be.

Except Carroll hasn’t had a blah team in almost two decades and, like Andy Dufresne, the 2018 Seahawks have emerged triumphant on the other side of all the melodrama. They’re 8-5, heading for the playoffs and peaking at the right time. They’re eighth in weighted DVOA, which assesses a team’s most recent performances to indicate how well they are playing right now rather than over the course of the entire season. They’re one of only eight teams with a point differential over 70, ahead of the Patriots, Cowboys, and Steelers, despite playing in seven one-score games.

Carroll and company transitioned the organization from one led by its defense to one led by Russell Wilson and the offense. It makes sense, too: having a long-term franchise quarterback is more stable than consistently fielding an elite defense; players get hurt, free agency saps talent, and age and attrition begin to take over. A very good quarterback – which Russell Wilson is – can overcome some of those problems on his side of the ball.

Carroll doubled down on his belief that a ground-and-pound, power-running game can still succeed in the era of pace-and-space. It’s worked. Seattle are second in the league in power-run success, trailing just the Ravens’ rush-only offense. While Carroll deserves serious coach of the year consideration, his supporting cast have been impressive too. Offensive coordinator Brian Schottenheimer has done a brilliant job (stunning, I know) coaching around the limitations on the team’s offense. Mike Solari replaced Tom Cable, a man who makes Brick Tamland look like Jean-Paul Sartre, as offensive-line coach and the unit, predictably, improved (under Cable the Raiders offensive line has submarined, for what it’s worth).

Seattle haven’t relied wholly on their rushing game though. They’ve benefited from Wilson’s rare brand of escape magic to create plays on the fly, and his connection with Tyler Lockett has been the most efficient quarterback-receiver partnership in recent years. Doug Baldwin is the guy who makes the whole thing sing, though. Wilson is a different quarterback when Baldwin is on the field. With Baldwin in 2018, he has a touchdown-to-interception ratio of 11.5 (23-2). Without Baldwin that number collapses to 1.5 (6-4), his completion percentage drops by seven points, and his passer rating by 41. Almost as importantly, Wilson’s average yards per target drops from 8.68 to 6.96. To put it simply: without Baldwin, Wilson goes from an excellent quarterback to an average one. (...)

Perhaps most importantly, Pete Carroll has reignited the sense of camaraderie that had dissipated in recent years. Fans loved the early bombast of the Legion of Boom; they grew tired of it by the end – and the players grew tired of the organization itself. Meanwhile, Seattle’s 2018 band of upstart free-agent castoffs and young pups seem to be relishing the chance to just play. There’s no drama.

by Oliver Connolly, The Guardian |  Read more:
Image: Joe Nicholson/USA Today Sports
[ed. They are a surprise this year, but one of the main reasons isn't even mentioned in this article: Bobby Wagner. The second most important man on the team and one of the best middle linebackers to ever play the game (and possible future Hall of Famer). Go Hawks!]

Saturday, December 15, 2018

What the Media Gets Wrong About Opioids

After Jillian Bauer-Reese created an online collection of opioid recovery stories, she began to get calls for help from reporters. But she was dismayed by the narrowness of the requests, which sought only one type of interviewee.

“They were looking for people who had started on a prescription from a doctor or a dentist,” says Bauer-Reese, an assistant professor of journalism at Temple University in Philadelphia. “They had essentially identified a story that they wanted to tell and were looking for a character who could tell that story.”

Although this profile doesn’t fit most people who become addicted, it is typical in reporting on opioids. Often, stories focus exclusively on people whose use started with a prescription; take this, from CNN (“It all started with pain killers after a dentist appointment.”), and this, from New York’s NBC affiliate (“He started taking Oxycontin after a crash.”).

Alternatively, reporters downplay their subjects’ earlier drug misuse to emphasize the role of the medical system, as seen in this piece from the Kansas City Star. The story, headlined “Prescription pills; addiction ‘hell,’” features a woman whose addiction supposedly started after surgery, but only later mentions that she’d previously used crystal meth for six months.

The “relatable” story journalists and editors tend to seek—of a good girl or guy (usually, in this crisis, white) gone bad because pharma greed led to overprescribing—does not accurately characterize the most common story of opioid addiction. Most opioid patients never become addicted, and most people who do become addicted didn’t start with a doctor’s prescription. The result of this skewed public conversation around opioids has been policies focused relentlessly on cutting prescriptions, without regard for providing alternative treatment for either pain or addiction.

While some people become addicted after getting an opioid prescription for reasons such as a sports injury or wisdom teeth removal, 80 percent start by using drugs not prescribed to them, typically obtained from a friend or family member, according to surveys conducted for the government’s National Household Survey on Drug Use and Health. Most of those who misuse opioids have also already gone far beyond experimentation with marijuana and alcohol when they begin: 70 percent have previously taken drugs such as cocaine or methamphetamine.

Conversely, a 2016 review published in the New England Journal of Medicine and co-authored by Dr. Nora Volkow, director of the National Institute on Drug Abuse, put the risk of new addiction at less than 8 percent for people prescribed opioids for chronic pain. Since 90 percent of all addictions begin in the teens or early 20s, the risk for the typical adult with chronic pain who is middle aged or older is actually even lower.

This does not in any way absolve the pharmaceutical industry. Companies like Purdue Pharma, the maker of Oxycontin, profited egregiously by minimizing the risks of prescribing in general medicine. Purdue also lied about how long Oxycontin’s effects last (a factor that affects addiction risk) and literally gave salespeople quotas to push doctors to push opioids.

The industry flooded the country with opioids and excellent journalism has exposed this part of the problem. But journalists need to become more familiar with who is most at risk of addiction and why—and to understand the utter disconnect between science and policy—if we are to accurately inform our audience.

The Innocent Victim Narrative

The reporters who called Bauer-Reese were not ill-intentioned in seeking the most sympathetic addiction stories; it is genuinely altruistic to want to portray those who are suffering in a way that is most likely to move readers and viewers to act compassionately. But such cases can have an unintended side effect: highlighting “innocent” white people whose opioid addiction seems to have begun in a doctor’s office sets up a clear contrast with the “guilt” of people whose addiction starts on the streets.

This is a result of racist drug policies that began decades ago. The war on drugs declared by Richard Nixon in 1971 was part of the Republican “Southern strategy,” which used code words like “drugs,” “crime,” and “urban” to signal racist white voters that the party was on their side. When Ronald Reagan doubled down on harsh law enforcement during the crack years, he merely intensified that strategy. (...)

Now that the problem is seen as “white,” however, socioeconomic factors and other reasons that people turn to drugs are more commonly discussed. The result is that today’s white drug users are portrayed as inherently less culpable than the black people who were caught up in the crack epidemic of the ’80s and ’90s.

Craig Reinarman, professor of sociology emeritus at the University of California, Santa Cruz, has documented biased coverage of addiction since before the crack era. “Now that the iconic user is white and middle class, the answer is no longer a jail cell for every addict, it’s a treatment bed,” he says. The biased coverage ends up perpetuating a public perception that some drug use, usually by African Americans, is criminal while other drug use, usually by white people, is not. (...)

It’s important for journalists to understand that criminalization is not some sort of natural fact, and laws are not necessarily made for rational reasons. Our system does not reflect the relative risks of various drugs; legal ones are among the most harmful in terms of their pharmacological effects. With the exception of the legislation that resulted in the creation and maintenance of the FDA, our drug laws were actually born in a series of racist panics that had nothing to do with the relative harms of actual substances.

In order to do better, journalists must recognize that addiction is not simply a result of exposure to a drug, and that “innocence” isn’t at issue. The critical risk factors for addiction are child trauma, mental illness, and economic factors like unemployment and poverty. The “innocent victim” narrative focuses on individual choice and ignores these factors, along with the dysfunctional nature of the entire system that determines a drug’s legal status. (...)

The critical difference between addiction and dependence becomes clear when you look at specific drugs. Crack cocaine, for example, doesn’t cause severe physical withdrawal symptoms, but it’s one of the most addictive drugs known. Antidepressants like Prozac, meanwhile, don’t produce compulsive craving the way cocaine can, but some have severe withdrawal syndromes.

Needing opioids for pain alone, then, doesn’t meet the criteria for addiction. If the consequences of drug use are positive and the benefits outweigh the harm from side effects, then that use is no different from taking any other daily medication. Dependence in and of itself isn’t a problem unless the drug isn’t working or is more harmful than it is helpful.

Unfortunately, while the scientific understanding has changed to reflect these facts, the press hasn’t caught up. The Washington Post conducted a poll of pain patients on opioids that labeled one third of them as addicted after they responded “yes” to a question that asked whether they were “addicted or dependent,” without defining either term. A CBS affiliate in Chicago talked about treating “opioid dependence” when they actually meant “addiction”; this CNN story has the same problem.

This would be a mere semantic issue if it didn’t have such awful effects on policy. Conflating addiction and dependence results in harm to pain patients, children exposed to opioids in utero, and people who take medication to treat addiction.

by Maia Szalavitz, CJR |  Read more:
Image: Pixabay

Springsteen on Netflix

With Netflix’s faithful film version of “Springsteen on Broadway,” there’s no need to re-review the show itself. What my colleague Jesse Green wrote when it opened in October 2017 still stands: “As portraits of artists go, there may never have been anything as real — and beautiful — on Broadway.”

Bruce Springsteen’s solo monologue-plus-concert was sold out far in advance during its entire run, with an average face-value ticket price of around $500. For the last performance (on Saturday, the day before Netflix is releasing the film), resale tickets are currently running from $3,000 to well over $40,000 each. Making the show available for the cost of a streaming subscription is an unqualified boon, a greater contribution to the public good than our civic institutions seem capable of at the moment.

Admittedly, the feeling of being in the audience at the Walter Kerr Theater, sharing the distinct but equally electric currents of an unplugged rock show, a cadenced sermon and a shrewdly theatrical entertainment, can’t be replicated. The live experience is inimitable, and the post-show emotional high as you walk out of the theater probably can’t be duplicated, either.

But the film, directed by Thom Zimny and shot by Joe DeSalvo at two private performances this year, has its own compensations. “Springsteen on Broadway” has sold out on the strength of its star’s connection with his huge fan base, and the opportunity to see him do a clutch of his best-known songs in a relatively small setting. But the show’s revelation — and the reason it actually worked so well — was his ability to take the stagecraft he’d honed in rock clubs and arenas and transfer it so effortlessly to the theater.

It’s a master class in pacing, dynamics, modulation of volume and tone, and the film brings you right up onstage with Springsteen, giving you a more intimate view of his technique — understated, seemingly casual but absolutely controlled — than you could get in the theater. Each expression, gesture, artful hesitation and sly punch line is zeroed in on, framed for our appreciation.

Zimny, who served as his own editor, presents the show unadorned, almost entirely without directorial intervention — it’s just Springsteen onstage, joined for two songs by his wife and fellow E Street Band member, Patti Scialfa. The one noticeable strategy Zimny employs has to do with the audience, which is unseen during the first half of the film, when Springsteen delivers a series of vignettes about his childhood and his beginnings as a musician. Zimny films these highly personal anecdotes, and their accompanying songs, in close-ups and medium shots that don’t stray beyond the stage.

In the show’s second half, as Springsteen’s text opens up (and loses some of its poetic intensity) to encompass themes like fatherhood, relationships and the current political moment, Zimny gradually opens up, too, showing us hints of the audience members. They finally appear in full during the rousing closing performance of “Born to Run,” and the film ends on a note of community, with the Boss reaching across the lights to shake hands with his fans.

by Mike Hale, NY Times |  Read more:
Image: Kevin Mazur/Netflix

Friday, December 14, 2018

Whitey


Yuko Shimizu
via:

A New Connection between the Gut and Brain

It is well known that a high salt diet leads to high blood pressure, a risk factor for an array of health problems, including heart disease and stroke. But over the last decade, studies across human populations have reported the association between salt intake and stroke irrespective of high blood pressure and risk of heart disease, suggesting a missing link between salt intake and brain health.

Interestingly, there is a growing body of work showing that there is communication between the gut and brain, now commonly dubbed the gut–brain axis. The disruption of the gut–brain axis contributes to a diverse range of diseases, including Parkinson’s disease and irritable bowel syndrome. Consequently, the developing field of gut–brain axis research is rapidly growing and evolving. Five years ago, a couple of studies showed that high salt intake leads to profound immune changes in the gut, resulting in increased vulnerability of the brain to autoimmunity—when the immune system mistakenly attacks its own healthy cells and tissues—suggesting that perhaps the gut can communicate with the brain via immune signaling.

Now, new research shows another connection: immune signals sent from the gut can compromise the brain’s blood vessels, leading to deteriorated brain health and cognitive impairment. Surprisingly, the research unveils a previously undescribed gut–brain connection mediated by the immune system and indicates that excessive salt might negatively impact brain health in humans through impairing the brain’s blood vessels regardless of its effect on blood pressure.

This research proposes new therapeutic targets for countering stroke—the second leading cause of death worldwide—and cognitive dysfunction. Reducing salt intake is applicable to people around the globe, as nearly every adult consumes too much salt: on average 9–12 grams per day, around twice the World Health Organization’s recommended maximum of 5 grams. (...)

The implications of this newly identified gut–brain connection extend to several autoimmune disorders—including multiple sclerosis, rheumatoid arthritis, psoriasis, and inflammatory bowel disease—that have been shown to activate the same immune signaling pathway implicated in this study. These autoimmune disorders have a high stroke risk and are linked to poorly functioning blood vessels in the nervous system. This research is also a demonstration that what we eat affects how we think, and that seemingly isolated parts of the body can play vital roles in brain health. These results motivate research on how everyday stressors to our digestive systems and blood vessels might change the brain and, consequently, how we see, and experience, the world.

by Jonathan D. Grinstein, Scientific American | Read more:
Image: Getty

Dave Matthews & Tim Reynolds


[ed. Ok Dave, next time you take the hard part...]

What We Don't See

Breaking News! -- as NBC Nightly News anchor Lester Holt often puts it when beginning his evening broadcast. Here, in summary, is my view of the news that’s breaking in the United States on just about any day of the week:

Trump. Trump. Trump. Trump. Trump.

Or rather (in the president’s style):

Trump! Trump! Trump! Trump! Trump!!!!!!!! (...)

After all, as hard as it may still be to believe, HE looms over our lives, our planet, in a way no other human being ever has, not even a Joseph Stalin or a Mao Zedong, whose images were once plastered all over the Soviet Union and China. Even the staggering attention recently paid to an otherwise less than overwhelming dead president, one George H.W. Bush, could only have occurred because, in his relative diffidence, he seemed like the un-Trump of some long gone moment. The blanket coverage was, in other words, really just another version of Trump! Trump! Trump! Trump! Trump!!!!!!!!

All in all, check off these first two presidential years of his as a bravura performance, which shouldn’t really surprise any of us. What was he, after all, but a whiz of a performer long before he hit the White House? And what are we -- the media and the rest of us -- but (whether we like it or not, whether we care to be or not) his apprentices?

Now, for a little breaking news of another sort! Unbelievably enough, despite all evidence to the contrary, there’s still an actual world out there somewhere, even if Donald Trump’s shambling 72-year-old figure has thrown so much of it into shadow. I’m talking about a world -- or parts of it, anyway -- that doesn’t test well in focus groups and isn’t guaranteed, like this American president, to keep eyes eternally (or even faintly) glued to screens, a world that, in the age of Donald Trump, goes surprisingly unnoted and unnoticed.

So consider the rest of this piece the most minimalist partial rundown on, in particular, an American imperial world of war and preparations for the same, that is, but shouldn’t be, in the shadows; that shouldn’t be, but often is dealt with as if it existed on the far side of nowhere.

What We Don’t See

Let’s start with the only situation I can recall in which Donald Trump implicitly declared himself to be an apprentice. In the wake of the roadside-bomb deaths of three American soldiers in Afghanistan (a fourth would die later) -- neither Donald Trump nor anyone else in Washington gives a damn, of course, about the escalating numbers of dead Afghans, military and civilian -- the president expressed his condolences in an interview with the Washington Post. He then went on to explain why he (and so we) were still in Afghanistan (14,000 or so U.S. military personnel, a vast array of American air power, and nearly 27,000 private contractors). “We’re there,” he said, “because virtually every expert that I have and speak to say[s] if we don’t go there, they’re going to be fighting over here. And I’ve heard it over and over again.”

Those “experts” are undoubtedly from among the very crew who have, over the last 17-plus years, helped fight the war in Afghanistan to what top U.S. commanders now call a “stalemate,” which might otherwise be defined as the edge of defeat. In those years, before Donald Trump entered the Oval Office threatening to dump the longest war in American history, it had largely disappeared from American consciousness. So had much else about this country’s still-spreading wars and the still-growing war state that went with them.

In other words, none of what’s now happening in Afghanistan and elsewhere is either unique to, or even attributable to, the Trumpian moment. This president has merely brought to a head a process long underway in which America’s never-ending war on terror, which might more accurately be thought of as a war to spread terror, had long ago retreated to the far side of nowhere.

Similarly, the war state in Washington, funded in a fashion that no other set of countries on this planet even comes close to, and growing in preeminence, power, and influence by the year, continues to go largely unnoticed. Today, it is noted only in terms of Donald Trump, only to the degree that he blasts its members or former members for their attitudes toward him, only to the degree to which his followers denounce “the deep state." Meanwhile, ex-CIA, ex-NSA, and ex-FBI officials he’s excoriated suddenly morph into so many liberal heroes to be all-but-worshipped for opposing him. What they did in the “service” of their country -- from overseeing torture, warrantless wiretapping, wars, and drone assassination programs to directly intervening for the first time in an American election -- has been largely forgiven and forgotten, or even turned into bestsellerdom.

Yes, American troops (aka “warriors,” aka “heroes”) from the country’s all-volunteer force, or AVF, continue to be eternally and effusively thanked for their service in distant war zones, including by a president who speaks of “my generals” and “my military.” However, that military has essentially become the U.S. equivalent of the French Foreign Legion, an imperial police force fighting wars in distant lands while most Americans obliviously go about their business.

And who these days spends any time thinking about America’s drone wars or the assassin-in-chief in the Oval Office who orders “targeted killings” across significant parts of the planet? Yes, if you happened to read a recent piece by Spencer Ackerman at the Daily Beast, you would know that, under President Trump, the already jacked-up drone strikes of the Obama era have been jacked-up again: 238 of them in Yemen, Somalia, and Pakistan alone in the first two years of Trump’s presidency (and that doesn’t even include Libya). And keep in mind that those figures also don’t include far larger numbers of drone strikes in Syria, Iraq, and Afghanistan. The numbers of dead from such strikes (civilian as well as terrorist) are essentially of no interest here.

And here’s another crucial aspect of Washington’s militarized global policies that has almost completely disappeared into the shadows. If you read a recent piece by Nick Turse at the Intercept, you would know that, across the continent of Africa, the U.S. now has at least 34 military installations, ranging from small outposts to enormous, still expanding bases. To put this in the context of the much-ballyhooed new great power struggle on Planet Earth, the Chinese have one military base on that continent (in Djibouti near the biggest U.S. base in Africa, Camp Lemonnier) and the Russians none.

In the Greater Middle East, from Afghanistan to Turkey, though it’s hard to come up with a good count, the U.S. certainly has 50 or more significant garrisons (in Afghanistan, Bahrain, Egypt, Iraq, Jordan, Israel, Oman, Qatar, and Turkey, among other places); Russia two (in Syria); and China none. In fact, never has any country garrisoned the planet in such an imperial and global fashion. The U.S. still has an estimated 800 or so military bases spread across the globe, ranging from tiny “lily pads” to garrisons the size of small American towns in what Chalmers Johnson once called its “empire of bases.” And the American high command is clearly still thinking about where further garrisons might go. As the Arctic, for instance, begins to melt big time, guess who’s moving in?

And yet, in the age of Trump, when on any given day the New York Times has scads of employees focused on the president, neither that paper nor any other mainstream media outlet finds it of interest to cover developments in that empire of bases. In other words, for the media as for the American public, one of the major ways this country presents itself to others, weapons in hand, essentially doesn’t exist.

by Tom Engelhardt, Tom Dispatch |  Read more:
Image: Wikipedia

Baby It's Cold Outside


How ‘Baby, It’s Cold Outside’ Went From Parlor Act to Problematic (NY Times)

[ed. Great... another stupid, contrived controversy just in time for Christmas. It's a new tradition.] 

Thursday, December 13, 2018

Scientists Crack the CRISPR Code for Precise Human Genome Editing

Scientists at the Francis Crick Institute have discovered a set of simple rules that determine the precision of CRISPR/Cas9 genome editing in human cells. These rules, published in Molecular Cell, could help to improve the efficiency and safety of genome editing in both the lab and the clinic.

Despite the wide use of the CRISPR system, rational application of the technology has been hindered by the assumption that the outcome of genome editing is unpredictable, resulting in random deletions or insertions of DNA regions at the target site.

Before CRISPR can be safely applied in the clinic, scientists need to make sure that they can reliably predict precisely how DNA will be modified.

"Until now, editing genes with CRISPR has involved a lot of guesswork, frustration and trial and error," says Crick group leader Paola Scaffidi, who led the study. "The effects of CRISPR were thought to be unpredictable and seemingly random, but by analysing hundreds of edits we were shocked to find that there are actually simple, predictable patterns behind it all. This will fundamentally change the way we use CRISPR, allowing us to study gene function with greater precision and significantly accelerating our science."

By examining the effects of CRISPR genome editing at 1491 target sites across 450 genes in human cells, the team have discovered that the outcomes can be predicted based on simple rules. These rules mainly depend on one genetic 'letter' occupying a particular position in the region recognized by the 'guide RNA' to direct the molecular scissors, Cas9. (...)

In this study, the researchers found that the outcome of a particular gene edit depends on the fourth letter from the end of the RNA guide, adjacent to the cutting site. The team discovered that if this letter is an A or a T, there will be a very precise genetic insertion; a C will lead to a relatively precise deletion and a G will lead to many imprecise deletions. Thus, simply avoiding sites containing a G makes genome editing much more predictable. (...)
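The rule described above can be sketched as a small lookup. This is a hypothetical illustration of the reported pattern, not code from the study: it simply classifies the predicted outcome by the fourth letter from the end of the guide RNA, adjacent to the cut site.

```python
# Hypothetical sketch of the outcome rule described above: the predicted
# editing outcome depends on the nucleotide four letters from the end of
# the guide RNA, adjacent to the Cas9 cut site.
def predict_edit_outcome(guide_rna: str) -> str:
    """Classify the likely CRISPR/Cas9 editing outcome for a guide RNA.

    Per the rule reported in the study: an A or T at the key position
    predicts a precise insertion, a C a relatively precise deletion,
    and a G many imprecise deletions.
    """
    if len(guide_rna) < 4:
        raise ValueError("guide RNA too short")
    key = guide_rna[-4].upper()  # fourth letter from the end, next to the cut
    if key in ("A", "T"):
        return "precise insertion"
    if key == "C":
        return "relatively precise deletion"
    if key == "G":
        return "imprecise deletions"
    raise ValueError(f"unexpected nucleotide: {key}")
```

In this sketch, a designer who wants predictable edits would simply screen candidate guides and avoid those returning "imprecise deletions".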

The team also discovered that how 'open' or 'closed' the target DNA is also affects the outcome of gene editing. Adding compounds that force DNA to open up—allowing Cas9 to scan the genome—led to more efficient editing, which could help when modifications need to be introduced in particularly closed genes.

"The good news is that regardless of the tissue of origin—which influences the degree of DNA 'openness' at specific genes—target regions containing an A or T at the key position show common editing," says Paola. "This means that, if we carefully select the target DNA, we can be pretty confident that we'll see the same effect in different tissues."

by The Francis Crick Institute, PhysOrg |  Read more:
Image: Nigel Hawtin for the Francis Crick Institute

Toxic Philanthropy: The Spirit of Giving While Taking

A new breed of wealthy do-gooders armed with apps and PowerPoints claim they want to change the world. But with their market-oriented values and often-shortsighted prescriptions, are they really going to change it for the better?

Or change it at all?

Anand Giridharadas, who has traveled first-class in the rarefied realm of 21st-century “philanthrocapitalists,” harbors serious doubts. In his acclaimed book, “Winners Take All: The Elite Charade of Changing the World,” the business reporter and former McKinsey consultant exposes the willful blindness of bright-eyed social entrepreneurs and TED-talking executives who, having drunk their own late-stage capitalist Kool-Aid, are now ready to serve us all. Compliments of the house.

Doing Good, Masking Bad

British novelist Anthony Trollope once observed, “I have sometimes thought that there is no being so venomous, so bloodthirsty as a professed philanthropist.”

Legendary short seller Jim Chanos, who teaches business students to spot fraud, understands why: when he scrutinizes a company for signs of shady activity, one of the things he looks for is an uptick in philanthropy — a strategy business ethics professor Marianne Jennings has named as one of the “seven signs of ethical collapse” in organizations. Chanos refers to the ruse as “doing good to mask doing bad.”

Such cynical public relations gambits are familiar enough to New Yorkers using Citi Bike, the public-private bike share system funded by Citigroup, whose misdeeds helped spark the global financial crisis of 2007-8. Or visitors to the Sackler Gallery at the Metropolitan Museum of Art, named for the family whose members own Purdue, the pharmaceutical company that fueled America’s opioid crisis through deceptive marketing of the addictive painkiller OxyContin.

But another sort of deep-pocketed philanthropist is harder to pin down. The harm she causes seems less direct; her motives more lofty. This type is fond of touting “win-win” solutions to social problems and tossing out terms like “impactful” and “scalable” and “paradigm-shifting” — the kind of lingo fed to business school students in lieu of critical thinking. Members of this group nevertheless refer to themselves as “thought leaders.”

These would-be benefactors of humanity tend to like former president Bill Clinton, whose Clinton Global Initiative became the ultimate road show for eager converts to what Giridharadas calls the faith of “win-winnerism,” i.e. “I’m doing great in this racket, and so can you.” Inhabiting Silicon Valley start-ups, venture capital firms, think tanks, and consulting companies in large metropolitan areas, philanthrocapitalists speak reverently of global poverty, but rarely touch down in places like Appalachia or rural Mississippi.

They are people like John Mackey, the chief executive of Whole Foods Market, whose book “Conscious Capitalism” is the bible for those aspiring to the win-win faith. In his formulation, CEOs are not simply the heads of companies, but transcendent beings who find “great joy and beauty in their work, and in the opportunity to serve, lead, and help shape a better future.” Mackey’s philosophy is one in which the beneficiaries of commerce should dedicate themselves to social improvement because they are obviously the best equipped to do the job. The public is meant to humbly follow.

This last bit, as Giridharadas shrewdly points out, may be far more radical than the old trickle-down philosophy of yesterday’s winners, who lobbied the government to get out of their way so that the bounteous by-products of their cutthroat activities could descend unimpeded to the poor. The new winners want something even more audacious: to replace the role of government as guardian of the common good.

Giridharadas presents searching conversations with well-educated, often well-meaning people floating above and apart from the lives of ordinary Americans, wishing to ease their consciences but failing both to clearly see the problems of society and to notice, for more than a nagging moment, the ways in which their own lives are financed by the fruits of injustice. They end up embracing a warm-and-fuzzy vision of changing the world that leaves brutal underlying structures securely in place.

The author has said what few who have traveled in this world have said plainly, lest their passport be revoked: the efforts of philanthrocapitalists are largely disruptive, rather than beneficial, to public life.

You can see it in the kind of ideas they embrace. Lecture slots at Davos don’t get doled out for discussing the need to expand popular, time-tested programs like Social Security and Medicare that are proven to reduce poverty and economic inequality. Such sensible fare is not nearly “innovative” or exotic enough—and besides, it might require the wealthy to pay additional taxes. Better are schemes like universal basic income that tend to favor elite interests (such as continuing to pay workers inadequate wages) or creating technological solutions like the one offered in the book by a young win-winnerist: an app that charges workers to manage the unpredictable cash flow caused by erratic work schedules.

And what of campaigning to outlaw the exploitative business practice that causes the problem in the first place? Not so much.

Talking about victims plays well on the philanthrocapitalist circuit, but pointing out perpetrators is largely forbidden. You can wow the crowd by peddling for-profit schemes to help the poor, but you won’t get the same applause by calling to jail criminal executives. Yet, as Giridharadas makes clear, even the fanciest app will not erase the feeling among ordinary people that the system has been captured by a small group of the rich and powerful—a feeling that drives them away in disgust from establishment politics and makes them very angry indeed.

by Lynn Parramore, INET |  Read more:
Image: uncredited
[ed. This has been going on forever, just a different version of those "charity" functions the rich put on each year to network, dress up, and see and be seen by other squillionaires (not to mention corporate enterprises like the PGA, NFL et al. that tout their millions of dollars in charitable contributions while raking in billions).]

The Opportunity Zone

At an Oval Office gathering earlier this year, President Donald Trump began touting his administration’s new real estate investment program, which offers massive tax breaks to developers who invest in downtrodden American communities. He then turned to one of the plan’s strongest supporters.

“Ivanka, would you like to say something?” Trump asked his daughter. “You’ve been pushing this very hard.”

The Opportunity Zone program promoted by Ivanka Trump and her husband Jared Kushner — both senior White House advisers — could also benefit them financially, an Associated Press investigation found.

Government watchdogs say the case underscores the ethical minefield they created two years ago when they became two of the closest advisers to the president without divesting from their extensive real estate investments.

Trump and Kushner jointly own a big stake in a real estate investment firm, Cadre, that recently announced it is launching a series of Opportunity Zone funds that seek to build major projects under the program from Miami to Los Angeles. Separately, the couple owns interests in at least 13 properties held by Kushner’s family firm that could qualify for the tax breaks because they are in Opportunity Zones in New Jersey, New York and Maryland — all of which, a study found, were already coming back.

Six of the Kushner Cos. buildings are in New York City’s Brooklyn Heights area, with views of the Brooklyn Bridge and Manhattan skyline, where a five-bedroom apartment recently listed for $8 million. Two more are in the beach town of Long Branch, N.J., where some oceanfront condos within steps of a white-tablecloth Italian restaurant and a Lululemon yoga shop list for as much as $2.7 million.

There’s no evidence the couple had a hand in selecting any of the nation’s 8,700 Opportunity Zones, and the company has not indicated it plans to seek tax breaks under the new program. But the Kushners could profit even if they don’t do anything — by potentially benefiting from a recent surge in Opportunity Zone property values amid a gold rush of interest from developers and investors. (...)

White House spokesman Hogan Gidley told the AP that individual state governors of both parties nominate communities for Opportunity Zone designation “based on what underserved areas would benefit most. … The White House has nothing to do with those decisions.”

The Investing in Opportunity Act, which became law last December as part of the Republican-sponsored tax overhaul, never gained traction when it was first proposed during the Obama administration, but it quickly found favor in a White House headed and dominated by real estate developers and investors. (...)

Along with the Kushner-tied Cadre Opportunity Zone funds, more than 50 real estate and private equity interests have made plans in recent weeks to create investment funds under the program, including several with ties to the couple and the Trump administration.

Last month, former White House Communications Director Anthony Scaramucci launched an opportunity zone fund tied to his Skybridge Capital investment firm, aiming to build projects worth more than $3 billion. Opportunity Zone funds have also been set up recently by New York-based Normandy Real Estate Partners and Heritage Equity Partners, two firms that have worked with Kushner Cos. on real estate ventures.

They are flocking to what financial analysts say are some of the most generous tax benefits they have ever seen. Investors who plow capital gains from previous investments into Opportunity Zone projects can defer taxes on those gains up to 2026.

If they decide not to cash out their investment for seven years, they get to exclude up to 15 percent of those gains from taxes. And they can permanently avoid paying taxes on any new gains from investment in the zones if they hold onto the investment for a decade. With capital gains taxes as high as 23.8 percent, the savings can easily add up.
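The arithmetic above can be made concrete with a rough sketch. This is an illustration only, using the figures quoted in the article (a 23.8 percent top capital-gains rate, a 15 percent exclusion after seven years, and tax-free new gains after ten); actual tax treatment involves details not covered here.

```python
# Illustrative arithmetic only, based on the figures quoted above.
CAP_GAINS_RATE = 0.238  # top capital-gains rate cited in the article

def opportunity_zone_savings(original_gain: float, zone_appreciation: float) -> float:
    """Tax saved by a 10-year Opportunity Zone investment vs. an ordinary one.

    original_gain: capital gain rolled into the zone fund.
    zone_appreciation: new gain earned inside the zone over the decade.
    """
    # Ordinary route: both the original gain and any new gain are taxed.
    ordinary_tax = (original_gain + zone_appreciation) * CAP_GAINS_RATE
    # Zone route: 15% of the original gain is excluded after seven years,
    # and the new gain goes untaxed entirely if held for ten.
    zone_tax = original_gain * (1 - 0.15) * CAP_GAINS_RATE
    return ordinary_tax - zone_tax

# Example: roll over a $1M gain that doubles inside the zone over ten years.
# opportunity_zone_savings(1_000_000, 1_000_000) -> roughly $273,700 saved.
```

Under these assumed numbers, the investor keeps over a quarter-million dollars that would otherwise have gone to taxes, which is why analysts call the benefits among the most generous they have seen.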

Government officials have estimated the program would cost $1.5 billion in lost tax revenue over 10 years, but Treasury Secretary Steve Mnuchin has estimated the zones would attract up to $100 billion in renewal efforts.

While the Opportunity Zone program mostly targets census tracts of high poverty and unemployment, it also allows “contiguous” tracts that might not be low-income, but are close enough to deprived communities to be eligible.

Critics say that could allow developers to cash in by targeting zones already teeming with investment and gentrified neighborhoods. Amazon’s recent decision to locate a new headquarters in the bustling New York City neighborhood of Long Island City, for example, drew rebukes following reports it was in an Opportunity Zone.

A study by the Urban Institute in Washington found that nearly a third of the more than 8,700 Opportunity Zones nationwide — and all 13 of the ones containing Kushner properties — were showing signs of heavy investment and gentrification, based on such factors as rent increases and the percentage of college-educated residents.

by Stephen Braun, Jeff Horowitz and Bernard Condon, AP via TPM |  Read more:
Image: TPM
[ed. No wonder it's called the "Opportunity Zone". See also: Let the Looting Begin (TPM)]

[ed. All these beautiful (and expensive) Leica film cameras, now obsolete.]