Wednesday, December 4, 2019

Did We Ever Know the Real Kamala Harris?

Just as California is so often viewed from afar as either glittering paradise or dystopian disaster, so Kamala Harris was crowned as the perfect Democrat for 2020.

Like her state, Senator Harris’s story up close is both more prosaic and more nuanced than the shiny image built in part on misperceptions about California. Now that she has dropped out of the presidential race, the legacy of her campaign may be what the candidacy illustrates about the complexity and reality of politics in the Golden State. (...)

California has had, by design, weak political parties, epitomized by the current system that replaced traditional primaries with an election in which voters choose the “top two” candidates, who then face off on the November ballot. San Francisco is an anomaly, the one metropolis where politics is a sport. Political machines have flourished in the city since the late 19th century, when Christopher Buckley, known as the Blind Boss, consolidated power from the back room of his saloon by establishing a patronage system. A century later, Kamala Harris rooted herself in the political establishment and forged connections with help from her longtime mentor and onetime boyfriend Willie Brown, the powerful Assembly speaker and then San Francisco mayor.

Those connections helped the young prosecutor become a boldface name in the society pages and in the copy of the legendary columnist Herb Caen. Ms. Harris won her first race in 2003, unseating the incumbent district attorney, with support from law enforcement unions, The San Francisco Chronicle and the political and social elite of San Francisco.

From the small city with outsize visibility, she built a national profile. In 2008, Ms. Harris was California co-chairwoman for her friend Barack Obama; within days of his historic victory, she announced her candidacy for California attorney general, a race still two years away. Oprah Winfrey put her on O magazine’s “Power List.” A column in USA Today pronounced her “the female Barack Obama,” “destined to become a commanding presence in the political life of this country.”

Perhaps one of the greatest fallacies about California politics is the assumption that its Democratic leaders are by definition die-hard liberals. By necessity, Democrats who win statewide have actually been moderates. That remains true even in an era when no Republican has won statewide since 2006. Last year, for example, Senator Dianne Feinstein trounced her liberal opponent, despite his endorsement by the state Democratic Party.

Even Gavin Newsom, the most liberal governor in decades, got his start in San Francisco by defeating a Green Party candidate for mayor, the same year Ms. Harris unseated the city’s progressive district attorney by running a tough-on-crime campaign. In her 2010 race for attorney general she arguably ran to the right of her Republican opponent on some issues. He championed efforts to ease the state’s three-strikes law and later supported a successful ballot initiative to that end; Ms. Harris, by then attorney general, declined to take a position.

As attorney general, she disappointed California liberals through both actions and the lack of action. That did not hamper her ability to burnish her national credentials. She addressed the 2012 Democratic National Convention in a prime-time slot. Her name was floated as a potential United States attorney general, even a Supreme Court justice.

Yet she remained largely unknown in California — a function of the staggering size of a state of almost 40 million where the principal way to gain exposure requires television ads in a dozen media markets, at a cost of upward of $4.5 million a week. When Ms. Harris ran for the United States Senate in 2016, six out of 10 registered voters had no impression of her, although she had been attorney general for almost six years. In recent polls, about a quarter of voters still had no opinion.

That reality undercut a key argument cited by pundits who labeled her an instant front-runner when she entered the presidential race. Their scenarios assumed she would do well in the delegate-rich California primary, moved up to March to have more impact on the race. (...)

And then there is the role of California in the age of President Trump. His victory coincided with Ms. Harris’s election to the Senate and fueled a sense of inevitability about her candidacy. She was the prosecutor who could take on the president. From the state that had become the heart of the resistance came the candidacy fueled by anti-Trump anger and California glitter.

At her January kickoff in Oakland, a huge crowd of all ages and races waved flags, pumped fists, teared up. They cheered her passion, her toughness and her rhetoric. But above all they were cheering for a woman who would take on the man whose name she never mentioned.

This, too, was not quite what it seemed. It was easy to conflate antipathy to Mr. Trump with support for Ms. Harris. By the time she appeared in Oakland eight months later at a low-key event to open her campaign office, the questions were about polls that showed her running a distant fourth in her home state, fourth even in the Bay Area, where they knew her best.

by Miriam Pawel, NY Times | Read more:
Image: Damon Winter/The New York Times

Tuesday, December 3, 2019

Billie Eilish


The Best Albums of 2019 (The Ringer)

“Duh,” as the kids say. (Do the kids really say “duh”? Are they using “duh” ironically to further mock doddering old people? Shit.) No young artist screams THE FUTURE louder, or whispers YOU ARE PROBABLY NOT THE FUTURE with more alluring malice than this actual teenager on this actual postapocalyptic pop masterpiece.

[ed. Uhmm, well... okay, I guess. We all need to keep up (over 650 million views in 8 months!). See also: Billie Eilish Doesn’t Know Van Halen (Rolling Stone).]

Cookie-Cutter Suburbs Could Help Spread Sustainable Yards

Yards in Austin, Tex., look like most across the country: sprawling expanses of short, uniform grass. But when intense Texas droughts set in, dead brown patches deface the Kelly green monochrome. Instead of repeatedly replanting these patches with the typical sod, the homeowner association of one Austin neighborhood, Travis Country, offers another option: filling in the brown spots with less-thirsty native species. As Cynthia Wilcox, the association’s grounds committee chair, puts it: “When your grass gets big dead spots, stop fighting it.” About 750 homeowners—half of the subdivision—have taken this advice. Roughly 500 of those homes have gone even further, landscaping much of their property with drought-tolerant native species such as long-bladed buffalo grass, slender salvia stalks and mountain laurel trees, which drip with purple blossoms that some people think smell like grape soda.

With homeowner associations often focused on projecting a uniform, ideal suburban image, it is rare for one to suggest—let alone allow—such a landscape shift. But precisely because these groups (usually called HOAs) establish and enforce aesthetic rules for millions of American yards, they could be a way to spread sustainable practices promoted by conservationists—while also helping subdivisions tackle problems ranging from unsightly lawn splotches to polluting fertilizer runoff. Some conservation programs are testing ways to overcome sociological and economic hurdles to get HOAs to embrace such changes, or at least not oppose them.

Residential lawns in the U.S. suck up a lot of water. EPA data show that, on average, 15 percent of residential water use involves lawns—which cover three times more land than crops irrigated for agriculture, according to NOAA and NASA research. Furthermore, grass fertilizer can run off into nearby streams, ponds or other water bodies, sometimes fueling algae blooms. And using homogenous flora such as commercial lawn grass species across many geographic zones dilutes local biodiversity; the practice has been linked to at least one native species decline, as introduced plants replace the native vegetation to which local wildlife has long adapted.

Conservationists have argued that some of these problems could be avoided if people made more diverse landscaping choices that support native species. In arid parts of the West, for example, landscaping a yard with local, drought-tolerant species and opting for mulch over grass can cut household water use by 30 percent. Native turfgrasses (which can replace typical lawn grass species) sprout fewer weeds and grow more slowly, reducing the need for mowing and its associated carbon emissions. Susannah Lerman, an ecologist at the University of Massachusetts Amherst, also found that lawns mowed less frequently supported more bees.

These changes can be hard sells for some residents, though, sometimes because they belong to homeowner associations with strict rules on yard appearance. HOAs are usually run by a handful of elected residents of a subdivision or neighborhood, but long-standing rules—such as grass being kept below a certain height—can come to be at odds with residents’ changing desires. In new subdivisions HOA rules may actually be established by developers, not the residents who ultimately move in. And sometimes, residents within an HOA are surprised by how strict landscaping rules are, or may disagree on yard upkeep standards. HOAs can fine people found in violation of rules, and conflicts over lawn care sometimes escalate to lawsuits. About 80 percent of new U.S. subdivision residents belong to an HOA, according to a July 2019 study in the Journal of Urban Economics.

“It is a little alarming for those of us who work in landscapes and sustainability to know that HOAs have a lot of influence and power over how a lot of our urban areas look,” says Gail Hansen, a professor of environmental horticulture at the University of Florida. But she and others are trying to turn that dynamic of enforced uniformity into an advantage by prodding HOAs to broaden their definition of what an acceptable yard looks like, in order to boost native-friendly landscaping across a community rather than rely on piecemeal efforts by environmentally minded homeowners. Sometimes that includes creating government programs, enacting state or local laws—or simply speaking with HOAs and residents themselves.

For over 30 years, a state government program has encouraged Florida residents to switch to sustainable landscapes with plants that are pest-resistant and drought-tolerant and that thrive in most conditions, Hansen says. In northern Florida that could include the silvery pineapple guava shrub, while in the south an evergreen called natal plum can be kept trimmed low to serve as groundcover. State law dictates that HOAs cannot prevent residents from planting these “Florida-Friendly” options. However, HOAs can still push back if homeowners choose plants or designs that do not meet neighborhood aesthetic standards. Hansen speaks with residents about how the initiative can work within HOA rules, and she also sometimes persuades board members to rewrite their mandates to accommodate the program. There are about 500 yards certified through the program, with still more properties practicing at least some of the recommended conservation measures. Last year a majority of the more than 220,000 attendees of water conservation workshops taught by University of Florida extension faculty reported scaling back their lawn watering afterwards.

Leslie Nemo, Scientific American |  Read more:
Image: Getty

SNAP Judgement

President Trump, a very rich guy who promised to help not-rich people get ahead but so far hasn’t, is pushing rules that would place new limits on a program that helps poor people buy food.

The push isn’t new, but it’s getting new attention due to an Urban Institute study that concluded the rules, if they’d been in place last year, would have reduced the main federal food aid program’s rolls by 3.7 million people — as well as cut food stamp spending by about $4.2 billion. Remember that number for later.

There’s a lot of wonkery in exactly how the administration’s rules would affect the Supplemental Nutrition Assistance Program — “SNAP” in policy-circle shorthand and “food stamps” for just about everyone else. But the gist is that they would require more people to work a certain number of hours in order to get food stamps, place limits on how long some people can remain on the program, and change the rules for enrollment. There are also a lot of experts arguing over whether the people losing benefits really deserved to have them to begin with.

But if you have your own work to do and life to live and don’t have time for a policy deep dive, here are the basics of the situation: Some of the people who would lose food stamps, probably a very small fraction, don’t really need them. Instead, they are getting small payments that help them get enough to eat in the richest country on earth while also paying rent and maybe even (horrors!) buying some stuff that wasn’t absolutely necessary. But some of the people — likely a far greater number — who’d lose food stamp payments really do need those benefits to get themselves and their families enough to eat. Without those benefits, they’ll either go hungry or make other, painful sacrifices that rich people have never thought about in their lives.

There are two main arguments conservatives have marshalled in support of food stamp cuts, and they’re both dishonest. Work requirements are often touted as an effort to nudge (starve) people into self-sufficiency. This is based on the assumption that people are sitting contentedly on their food stamp payments and skipping out on all sorts of employment opportunities that would vault them into prosperity. That’s not a great assumption. National unemployment may be low, but that doesn’t mean all these people can find jobs they have the experience for, get to those jobs, and afford adequate childcare while they work.

Supporters of the requirements often claim to be helping poor people access “the dignity of work,” meaning that it’s inherently more satisfying to get paid for work than to depend on public support. That may well be true, provided your boss isn’t abusive and your work conditions are safe and, at the end of the month, your paycheck covers your expenses and maybe even leaves a bit over to save for the future. But you won’t hear those people talking about the “dignity of work” when it comes to organizing workers into unions that protect them from abusive management or unsafe conditions. Nor do they pipe up in favor of “the dignity of a living wage.” In fact, the people who favor work requirements near universally oppose unions and minimum wage hikes. Go figure.

The other argument for making it harder for poor people to buy food is somehow even flimsier. And that’s the need for fiscal responsibility when it comes to the federal budget. The United States government this year will spend nearly $1 trillion more than it takes in, making up the shortfall by borrowing money. By now, all that borrowing has added up to, by most calculations, something north of $20 trillion. And so to deal with that debt, many conservatives say, we need to cut spending on “entitlements” — a term for programs that help people buy food or make ends meet or access health care through Medicaid.

This isn’t a particularly credible argument, given that the deficit could be shrunk by raising taxes on the wealthy and upper middle class, or on the corporations whose poverty wages leave millions of working Americans dependent on government programs. But it’s revealed to be a comically disingenuous argument once you remember the Trump administration’s signature domestic “achievement”: tax breaks that will add at least $1.5 trillion to the deficit over 10 years, according to Congress’s official, nonpartisan accounting agency. The tab will actually be far higher if, as Republicans promise, Congress extends tax cuts for individuals that are currently set to expire. But at a baseline, that works out to about $150 billion annually, which, if math isn’t your thing, is approximately a fuckton more than the $4.2 billion they want to “save” on food stamps.
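[ed. For the arithmetic-inclined, the comparison above boils down to one division and one ratio. A minimal sketch in Python, using the article's round numbers (illustrative only, not precise budget figures):]

```python
# Rough comparison of the article's round numbers (illustrative only).
tax_cut_cost_10yr = 1.5e12   # tax cuts: ~$1.5 trillion added to the deficit over 10 years
snap_savings_yr = 4.2e9      # proposed SNAP rules: ~$4.2 billion in savings per year

tax_cut_cost_yr = tax_cut_cost_10yr / 10     # ~$150 billion per year
ratio = tax_cut_cost_yr / snap_savings_yr    # ~36

print(f"Annualized tax-cut cost: ${tax_cut_cost_yr / 1e9:.0f} billion")
print(f"Annual SNAP 'savings':   ${snap_savings_yr / 1e9:.1f} billion")
print(f"Ratio: roughly {ratio:.0f} to 1")
```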

by Patrick Reas, Rolling Stone |  Read more:
Image: John Moore/Getty
[ed. Or they could cut the military's $750 billion annual budget by about half of one percent. See also: New SNAP Rules Will Cause a National Public Health Crisis (Jacobin). Update: Yes, they did it (NY Times).]

Why Can’t Internet Companies Stop Awful Content?

For the first two decades of the commercial Internet, we celebrated the Internet as one of society's greatest inventions. After all, the Internet has led to truly remarkable outcomes: it has helped overthrow repressive political regimes, made economic markets more efficient, created safe spaces for otherwise marginalized communities to find their voices, and led to the most exquisite cat videos ever seen.

But in the last few years, public perceptions of the Internet have plummeted. We've lost trust in the Internet giants, who seem to have too much power and make missteps daily. We also are constantly reminded of all of the awful and antisocial ways that people interact with each other over the Internet. We are addicted to the Internet—but we don't really love it any more.

Many of us are baffled by the degradation of the Internet. We have the ingenuity to put men on the Moon (unfortunately, only men so far), so it defies logic that the most powerful companies on Earth can't fix this. With their wads of cash and their smart engineers, they should nerd harder.

So why does the Internet feel like it's getting worse, not better? And, more importantly, what do we do about it?

It was always thus

Let's start with the feeling that the Internet is getting worse. Perhaps this reflects an overly romantic view of the past. The Internet has always had low-value content. Remember the Hamster Dance or the Turkish "I Kiss You" memes?

More generally, though, this feeling reflects our overly romantic view of the offline world. People are awful to each other, both online and off. So the Internet is a mirror of our society, and as the Internet merges into everyday life, it will reflect the many ways that people are awful to each other. No amount of nerding harder will change this baseline of antisocial behavior.

Furthermore, the Internet reflects the full spectrum of human activity, from great to awful. With the Internet's proliferation—and with its lack of gatekeepers—we will inevitably see more content at the borders of propriety, or content that is OK with some audiences but not with others. We've also seen the rise of weaponized political content, including from state-sponsored entities, designed to propagandize or to pit communities against each other.

There is no magical way to eliminate problematic content or ensure it reaches only people who are OK with it. By definition, this content reflects edge cases where mistakes are most common, and it often requires external context to properly understand. That context won't be available to either the humans or the machines assessing its propriety. The result is those infamous content moderation blunders, such as Facebook's removal of the historic "Napalm Girl" photo or YouTube's misclassification of fighting robot videos as animal abuse. And even if the full amount of necessary context were available, both humans and machines are susceptible to biases that will make their decisions seem wrong to at least one audience segment.

There's a more fundamental reason why Internet companies can never successfully moderate content for a mass audience. Content moderation is a zero-sum game. With every content decision, the Internet companies make winners and losers. The winners get the results they wanted; the losers don't. Hence, there's no way to create win-win content-moderation decisions. Internet companies can—and are trying to—improve their content moderation efforts. But dissatisfaction with that process is inevitable regardless of how good a job the Internet companies do.

So given that Internet companies can never eliminate awful content, what should regulators do?

The downside of “getting tough”

One regulatory impulse is to crack down harder on Internet companies, forcing them to do more to clean up the Internet. Unfortunately, tougher laws are unlikely to achieve the desired outcomes for three reasons.

First, because of its zero-sum nature, it's impossible to make everyone happy with the content moderation process. Worse, if any law enables lawsuits over content moderation decisions, this virtually ensures that every decision will be "litigation bait."

Second, tougher laws tend to favor incumbents. Google and Facebook are OK with virtually any regulatory intervention because these companies mint money and can afford any compliance cost. But the companies that hope to dethrone Google and Facebook may not survive the regulatory gauntlet long enough to compete.

Third, some laws expect Internet companies to essentially eliminate antisocial behavior on their sites. Those laws ignore the baseline level of antisocial behavior in the offline world, which effectively makes Internet companies liable for the human condition.

The logical consequence of "tougher" Internet laws is clear but chilling. Google and Facebook will likely survive the regulatory onslaught, but few other user-generated content services will. Instead, if they are expected to achieve impossible outcomes, they will shut down all user-generated content.

In its place, some of those services will turn to professionally generated content, which has lower legal exposure and is less likely to contain antisocial material. These services will have to pay for professionally generated content, and ad revenue won't be sufficient to cover the licensing costs. As a result, these services will set up paywalls to charge users for access to their databases of professionally licensed content. We will shift from a world where virtually everyone has global publication reach to a world where most readers will pay for access to a much less diverse universe of content.

In other words, the Internet will resemble the cable industry circa the mid-1990s, where cable subscribers paid monthly subscription fees to access a large but limited universe of professionally produced content. All of the other benefits we currently associate with user-generated content will just be fond memories of Gen Xers and millennials.

by Eric Goldman and Jess Miers, Ars Technica | Read more:
Image: Aurich Lawson/Getty

Monday, December 2, 2019

What Counts As Work?

Employment contracts are by their nature asymmetrical. Although in principle contracts are made between two free and equal parties, when an employee signs one they enter into an unequal relationship. Work can be a source of identity, a prerequisite for social inclusion, and a marker of status and independence; historically, the employment contract has been contrasted with slavery, bondage and other forms of servitude. But workers’ movements have long argued that waged labour in general implies a kind of wage-slavery: it dominates as well as exploits. At the very least, it sets up a hierarchical relationship: having a job means being under the authority of an employer. Struggles for better working conditions – for proper remuneration, trade union representation, protection against discrimination, the right to time off for leisure, parenting or sickness – aim to mitigate the essential inequity of the employment contract and limit the power of the boss.

According to the sociologist Colin Crouch, the gig economy provides a new way of concealing employers’ authority. People who work for such online platforms as Uber, Lyft and Deliveroo are classed not as employees but as self-employed. They are supposedly flexible entrepreneurs, free to choose when they work, how they work and who they work for. In practice, this isn’t the case. Unlike performers in the entertainment industry (which gives the ‘gig’ economy its name), most gig workers don’t work for an array of organisations but depend for their pay on just one or two huge companies. The gig worker doesn’t really have much in common with the ideal of the entrepreneur – there is little room in their jobs for creativity, change or innovation – except that gig workers also take a lot of risks: they have no benefits, holiday or sick pay, and they are vulnerable to the whims of their customers. In many countries, gig workers (or ‘independent contractors’) have none of the rights that make the asymmetry of the employment contract bearable: no overtime, no breaks, no protection from sexual harassment or redundancy pay. They don’t have the right to belong to a union, or to organise one, and they aren’t entitled to the minimum wage. Most aren’t autonomous, independent free agents, or students, part-timers or retirees supplementing their income; rather, they are people who need to do gig work simply to get by.

What is new about the gig economy isn’t that it gives workers flexibility and independence, but that it gives employers something they have otherwise found difficult to attain: workers who are not, technically, their employees but who are nonetheless subject to their discipline and subordinate to their authority. The dystopian promise of the gig economy is that it will create an army of precarious workers for whose welfare employers take no responsibility. Its emergence has been welcomed by neoliberal thinkers, policymakers and firms who see it as progress in their efforts to transform the way work is organised.

‘Standard employment’ is the formal name given to a non-temporary, full-time job secured by a contract. Today, the share of ‘non-standard employment’ in the labour market is growing. There are many kinds of non-standard and informal work, from self-employment to the unstable, unregulated and illegal work of the shadow economy. It takes different forms in different countries. In the UK, on-call contracts (whereby workers are on standby and can be called in to work at any time, even for short stints) and zero-hours contracts (whereby employers aren’t obliged to guarantee even a minimum number of working hours) are popular: an estimated 900,000 people worked under such arrangements in 2017. Across Europe, too, there has been an increase in ‘marginal jobs’ and in the use of contracts that expire before workers acquire full rights, like Germany’s ‘minijobs’ and ‘midijobs’ (which provide short hours and low pay, but are enough to disqualify workers from claiming unemployment benefits). At the same time, in advanced economies, the rights of ‘standard employees’ have been steadily eroded. Insecurity is the general condition of modern work. (...)

Even if standard employment turns out to be the historical exception, the erosion of workers’ security doesn’t mark a return to the capitalism of the Gilded Age. Modern precarity takes a distinctive form, which is a result of the major political and economic changes of the 1970s. As Crouch sees it, three of these changes are especially significant. First, the shift from a manufacturing to a service economy, characterised as deindustrialisation or as the transition from Fordism to post-Fordism. As manufacturing declined, the enriched standard employment associated with it began to disappear. Second, the rise of digital and data technologies, which has made possible the intensification of workplace discipline and surveillance as well as new ways of working from home – a modern ‘putting-out’ system. The internet has enabled monopolies, but it has also decentralised work as well as deindustrialised it. Third, workers’ loss of power after the deregulation of finance. Under Thatcher and Reagan, corporations were reorganised to benefit shareholders, and finance was given the freedom to move to more advantageous regimes. Workers had no such freedom, and neither did conventional firms, whose buildings, equipment and working populations were settled in a particular place. All this changed the distribution of risk, to the detriment of workers. (...)

The very existence of a precarious workforce makes it possible for steady jobs to be undercut, work contracted out and workers set against one another. Crouch believes that the gig economy has a wider significance too. He picks out two contradictory models of capitalism. Market fundamentalists believe that the aim of capitalism is to achieve perfect markets; they reject oligopoly and propagate the myth of the equal contract. Corporate capitalists, by contrast, are in favour of oligopoly and don’t care so much about ideal markets; they see the relation between employer and employee as closer to a master-servant dynamic (much modern labour law legally enshrines obedience to managerial authority). What the gig economy does, in Crouch’s view, is to ease the tension between the two models. It promises to fulfil the fantasy that we are all free in the marketplace, even in the labour market. When workers are no longer defined as employees, their interests are pushed outside the corporation altogether. They are also removed from union jurisdiction. Unions have, as one would expect, campaigned against the encroachments of the gig economy and the erosion of workers’ rights it entails. But they have sometimes been reluctant to organise the precarious workforce, even though precarious workers are among those most in need of union representation. It’s no surprise that, as a consequence, many precarious workers see ordinary workers – however poorly paid – as privileged; in some countries, Italy for example, they even support further labour market deregulation. Unions are starting to address this – by looking for ways to organise workers outside workplaces, by supporting new organisations led by precarious workers, by bringing legal cases to win rights for those workers – but progress is slow.

by Katrina Forrester, LRB |  Read more:
Image: The Wealth of the Nation, Seymour Fogel

Gougers ‘R’ Us: Pirate Equity

Once a month, National Public Radio highlights medical horror stories for its series, Bill of the Month. The series is crowdsourced by patients who have been gouged by the medical industry, and they have swamped NPR with testimonies. Taken together, their accounts are a devastating indictment of the monopolization of the American medical industry.

A recent story highlighted Dr. Naveen Khan, a 35-year-old radiologist from Southlake, Texas, who had his arm crushed by an all-terrain vehicle. An air ambulance took him to Fort Worth, Texas. The company promptly called him while he was in the hospital to let him know that the brief flight cost a total of $56,603. His insurer paid $11,972, which is about what the flight actually costs, while the air ambulance company billed Dr. Khan for the remaining $44,631, which he would have to pay out of pocket.
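[ed. The "balance bill" arithmetic here is simple but worth making explicit; a minimal sketch using the figures reported in the story (everything else is illustrative):]

```python
# Surprise ("balance") billing: the patient owes whatever the insurer doesn't pay.
total_charge = 56_603   # air ambulance company's bill
insurer_paid = 11_972   # roughly the flight's actual cost, per the story
patient_owes = total_charge - insurer_paid

print(f"Balance billed to the patient: ${patient_owes:,}")  # $44,631
```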

Dr. Khan’s bill is one among thousands of extortionate charges from air ambulances. Nationally, the average helicopter bill has now reached $40,000, according to a report by the Government Accountability Office. That is more than double what it was only nine years ago.

It would be tempting to conclude that higher prices are due to a shortage of helicopters or pilots to ferry wounded patients. In fact, the U.S. air-ambulance fleet has doubled in size over the past 15 years.

The laws of economics dictate that when demand is flat and supply increases substantially, prices should go down not up. What is preventing the laws of supply and demand from operating here?

The reason is the emergence of a national oligopoly. After a series of mergers, the air ambulance industry is highly concentrated and controlled by private equity groups. Two thirds of medical helicopters operating in 2015 belonged to three for-profit providers, according to the GAO.

Private equity firms have been active in consolidating the industry, as they have been in so many other industries. In fact, researchers have argued that keeping hidden monopolies private is part of the attraction of private equity. The helicopter company that transported Dr. Khan was Air Medical Group Holdings, which is owned by the private equity group Kohlberg Kravis Roberts (KKR). Other private equity groups are also active. American Securities LLC bought Air Methods for $2.5 billion in March 2017.

“It’s the same people who have bought out all the emergency room practices, who’ve bought out all the anesthesiology practices,” said James Gelfand, senior vice president of health policy for the ERISA Industry Committee. He added, “They have a business strategy of finding medical providers who have all the leverage, taking them out of network, and essentially putting a gun to the patient’s head.”

Gouging consumers with surprise bills is how private equity groups operate. According to an article in the Harvard Business Review, “Private equity firms have been buying and growing the specialties that generate a disproportionate share of surprise bills: emergency room physicians, hospitalists, anesthesiologists, and radiologists.”

A study by Stanford University shows that surprise billing has been rapidly rising from about a third of visits in 2010 to almost 43 percent in 2016. Much of this is driven by private equity groups, which are rolling up large parts of the medical industry. (...)

Private equity groups have been busy buying free-standing emergency rooms, where prices can be up to 22 times higher than at a physician’s office. They have been very active with mergers, buying orthopedic, ophthalmology, and gastroenterology practices. As humans treat pets like family members, private equity firms are even buying veterinary hospitals. Federal Trade Commissioner Rohit Chopra has called out roll-up transactions as an area of potential concern. (...)

The term private equity is highly misleading. There is very little equity involved in most deals, and companies are generally loaded with debt. In the 1980s, the industry was more appropriately called the Leveraged Buyout (LBO) industry, due to the high degree of debt (leverage) involved in deals. When a wave of LBOs went bankrupt in the late 1980s and early 1990s, the industry rebranded and became known as “private equity.”

Pirate equity is more appropriate. It is an extractive industry that takes as much as possible from the companies it buys through endless fees and special dividends. Acquired companies are loaded with debt, which they can only pay down by hiking prices on customers and cutting costs. In almost all cases, no new equity is added to the acquired companies. Every time the term private equity is used, it obscures the true nature of the beast.

Unlike venture capital, which injects equity into companies and funds new ventures, or initial public offerings, which raise actual equity, private equity is purely extractive. Studies have shown that private equity leads to higher defaults, its claims to increase productivity are a sham, and it causes higher rates of job losses and firings in target companies as well as lower wages. (Unsurprisingly, industry-funded studies are much more sanguine.) But perhaps the most damning indictment of the private equity model is that its returns are overstated and its performance is simply not as good as the industry leads investors to believe.

You do not need an MBA from Wharton to know that loading up companies with debt will lead to bankruptcy. Research shows that private equity funds acquire healthy firms and increase their probability of default by a factor of 10. They are the antithesis of conservative management.

Private equity groups have been behind most of the recent bankruptcies in local newspapers, retail, and grocery stores. In fact, analysis by FTI Consulting found that two thirds of the retailers that filed for Chapter 11 in 2016 and 2017 were leveraged buyouts. The pirate equity groups load debt onto the companies and dividend out the cash to themselves, which often leads to bankruptcy and a trail of job losses and underfunded pensions. Heads they win, tails the company, employees, and suppliers lose.

by Jonathan Tepper, The American Conservative | Read more:
Image: Ledomstock/Shutterstock

Friday, November 29, 2019

Being Asexual

Asexuality isn’t a complex. It’s not a sickness. It’s not an automatic sign of trauma. It’s not a behaviour. It’s not the result of a decision. It’s not a chastity vow or an expression that we are ‘saving ourselves’. We aren’t by definition religious. We aren’t calling ourselves asexual as a statement of purity or moral superiority.

We’re not amoebas or plants. We aren’t automatically gender-confused, anti-gay, anti-straight, anti-any-sexual-orientation, anti-woman, anti-man, anti-any-gender or anti-sex. We aren’t automatically going through a phase, following a trend, or trying to rebel. We aren’t defined by prudishness. We aren’t calling ourselves asexual because we failed to find a suitable partner. We aren’t necessarily afraid of intimacy. And we aren’t asking for anyone to ‘fix’ us.

From the book ‘The Invisible Orientation’ (2014) by Julie Sondra Decker, asexual writer and activist

Definitions sometimes reveal more by what they don’t say than what they do. Take asexuality for example. Asexuality is standardly defined as the absence of sexual attraction to other people. This definition leaves open the possibility that, free from contradiction, asexual people could experience other forms of attraction, feel sexual arousal, have sexual fantasies, masturbate, or have sex with other people, not to mention nurture romantic relationships.

Far from being a mere academic possibility or the fault of a bad definition, this is exactly what the lives of many asexual people are like. The Asexual Visibility and Education Network (AVEN), for example, describes some asexual people as ‘sex-favourable’, which is an ‘openness to finding ways to enjoy sexual activity in a physical or emotional way, happy to give sexual pleasure rather than receive’. Similarly, only about a quarter of asexual people experience no interest in romantic life and identify as aromantic.

These facts haven’t been widely understood, and asexuality has yet to be taken seriously. But if we attend to asexuality, we arrive at a better understanding of both romantic love and sexual activity. We see, for example, that romantic love, even in its early stages, need not involve sexual attraction or activity, and we are also reminded that sex can be enjoyed in many different ways.

Before looking at the relationship between asexuality and love, it is useful to clarify what asexuality is and what it isn’t. The following distinctions are widely endorsed in asexual communities and the research literature.

Asexual people make up approximately 1 per cent of the population. Unlike allosexuals, who experience sexual attraction, asexual people don’t feel drawn towards someone/something sexually. Sexual attraction differs from sexual desire, sexual activity or sexual arousal. Sexual desire is the urge to have sexual pleasure but not necessarily with anyone in particular. Sexual activity refers to the practices aimed at pleasurable sensations and orgasm. Sexual arousal is the bodily response in anticipation of, or engagement in, sexual desire or activity. (...)

It might be surprising to some that many asexual people do experience sexual desire, and some have sex with partners and/or masturbate. Yet this is the case. Sexual attraction to people is not a prerequisite of sexual desire. (...)

Since some asexual people experience sexual desire, albeit of an unusual kind, and do have sex, asexuality should not be confused with purported disorders of sexual desire, such as hypoactive sexual desire disorder where someone is distressed by their diminished sexual drive. Of course, this is not to say that no asexual people will find their lack of sexual attraction distressing, and no doubt some will find it socially inhibiting. But as the researcher Andrew Hinderliter at the University of Illinois at Urbana-Champaign notes: ‘a major goal of the asexual community is for asexuality to be seen as a part of the “normal variation” that exists in human sexuality rather than a disorder to be cured’.

Asexuality is often thought of as a sexual orientation due to its enduring nature. (It should not be considered an absence of orientation since this would imply that asexuality is a lack, which is not how many asexual people would like to be seen.) To be bisexual is to be sexually attracted to both men and women; to be asexual is to be sexually attracted to no one. There is empirical evidence that, like bisexuality, asexuality is a relatively stable, unchosen feature of someone’s identity. As Bogaert notes, people are usually defined as asexual only if they say that they have never felt sexual attraction to others. Someone who has a diminished libido or who has chosen to abstain from sex is not asexual. Because asexuality is understood as an orientation, it is not absurd to talk of an asexual celibate, or an asexual person with a desire disorder. To know that someone is asexual is to understand the shape of their sexual attractions; it’s not to know whether they have sexual desire, or have sex. The same is true of knowing anyone’s sexual orientation: in itself, it tells us little about their desire, arousal or activity.

Knowing someone’s sexual orientation also tells us little about their wider attitudes to sexuality. Some asexual people might not take much pleasure in sexual activity. Some asexual people, like some allosexual people, find the idea of sex generally repulsive. Others find the idea of themselves engaging in sex repulsive; some are neutral about sex; still others will engage in sex in particular contexts and for particular reasons, eg, to benefit a partner; to feel close to someone; to relax; to benefit their mental health, and so on. For example, the sociologist Mark Carrigan, now at the University of Cambridge, quotes one asexual, Paul, who told him in interview:
Assuming I was in a committed relationship with a sexual person – not an asexual but someone who is sexual – I would be doing it largely to appease them and to give them what they want. But not in a begrudging way. Doing something for them, not just doing it because they want it and also because of the symbolic unity thing.

by Natasha McKeever, Aeon | Read more:
Image: Ante Badzim/Getty
[ed. It's all so complicated. All I can think of is Todd on Bojack Horseman.]

Everyone Hates the Boomers, OK?

Things come and go so quickly these days. Or is it just that some of us are slower on the uptake? Whatever: No sooner do we—does one—become aware of a meme or a trend or a catchphrase than it is unofficially declared done, over, kaput; the shark is judged to have been well and truly jumped.

“‘OK Boomer’?” said my editor, looking slightly alarmed at my choice of topic. “Is that still a thing?”

“Still?” I said.

“Only to Boomers,” a precocious colleague chimed in, unhelpfully.

Mayfly-like though the life cycle of a contemporary meme is, there are discrete phases to it. The meme emerges from some dim, untraceable nativity; to this day, for instance, no one can account for the origins of “OK Boomer.” The meme whistles around social media, imparting a glow of knowing cleverness to the first dozen users to post it to TikTok or Instagram or Twitter; for a quarter hour or more, these trailblazers feel themselves united in an unapproachable freemasonry of cool. Hours tick by, sometimes days. Soon, everyone wants in on the act, and the meme is everywhere. Samantha Bee uses it as a punch line. Incipient signs of exhaustion appear: Mo Rocca plans to build a six-minute segment around it for CBS Sunday Morning next weekend.

The death rattle of a meme is heard when some legit news outlet—The New York Times, NPR—takes notice and spies in the meme a cultural signifier, perhaps even a Larger Metaphor. Deep thinkers hover like vultures. The world surrounds the meme, engulfs it, suffocates it, drains it, ingests it. By the end of the week, Elizabeth Warren is using it as the subject line of an email fundraiser, next to a winking emoji. The shark, jumped, recedes ever deeper into the distance. Rocca’s segment airs. The meme is finished.

The particulars of the downward spiral change from meme to meme, of course. The end came for “OK Boomer” mid-month, when it was reported that Fox was trying to trademark the phrase for the title of a TV show.

TV—as in cable and broadcast? Fox? With this news, “OK Boomer” was immediately rendered as exciting and cutting-edge as the Macarena. You might as well freeze it in amber. Google Trends charted the ascent and the quick decline.

OK Boomer peaked nearly two weeks ago, and its meaning may have already been forgotten by its millions of users. Dictionary.com, the meme reliquary, is here to remind us: OK Boomer was a “slang phrase” used “to call out or dismiss out of touch or close-minded opinions associated with the Baby Boomer generation and older people more generally.” The essential document was a split-screen video, seen in various versions on YouTube and TikTok. On one side, a Baby Boomer, bearded, bespectacled, and baseball-capped (natch), lectured the camera on the moral failings of Millennials and members of Generation Z; on the other side, as the Boomer droned on in a fog of self-satisfaction, a non-Boomer (different versions exist) could be seen making a little placard: ok boomer.

In an irony-soaked era, a word is often meant to be taken for its opposite, and so it was with OK Boomer. OK means “not okay”—OK here means (borrowing a meme with a longer shelf life) “STFU.” Many Boomers were thus quick to take offense, since taking offense is now a preapproved response to any set of circumstances at any time. One Boomer even objected to the plain word Boomer, calling it the “N-word of ageism.” Once again, Boomers are getting ahead of themselves. No one has yet begun referring to the “B-word” as a delicate alternative to the unsayable obscenity Boomer. My guess is that it will take a while.

Other Boomers, if you’ll pardon the expression, insisted that the national disgrace of “OK Boomer” would require the intervention of the heavy hand of the law, lest the injustice go uncorrected. A writer for Inc. magazine, a self-described Gen Xer, earnestly advised employers of whatever age to keep an ear open around the workplace. Casual use of the phrase, she wrote, could be a “serious problem.” (...)

Millennials dislike Boomers for all the same reasons Gen Zers dislike them. Gen Xers, for their part, are growing increasingly unhappy because it’s dawning on them that they are about to be leapfrogged in the scheme of national succession. The Boomers stubbornly cling to power as the clock runs out: There’s as little chance a Gen Xer will become president of the United States as Prince Charles will succeed his mum without bumping her off. This seems to have increased the bad feeling the Xers have toward Millennials, who, as a generation, seem to have otherwise borne the brunt of many Boomer misfires (the Iraq War, the Great Recession). Meanwhile, the Millennials are quite happy to dismiss their youngsters as pampered and unworldly groundlings – snowflakes, to use the meme first popularized in the novel Fight Club, written, of course, by a Baby Boomer.

What “OK Boomer” made plain is that the only thing all these age cohorts agree on is that as bad as everybody else is, the Boomers are worse. There’s justice here. Boomers invented the generational antagonism that the “OK Boomer” meme thrived on and enlarged. For self-hating Boomers like me, this made the “OK Boomer” episode unusually clarifying and rewarding, and we should remain forever grateful to whatever whining, resentful non-Boomer thought it up. I’m sorry to see it go—especially because our elders never had a chance to use it.

These were the generations whose spawn we were, called the Greatest Generation and the Silent Generation. Their silence was one of the things that made them great. Still, a snappy comeback would have been handy 40 years ago, as we sanctimoniously hectored them with the many great truths we thought we had discovered, and with which we began our long cultural domination: “The Viet Cong are agrarian reformers!” “Condoms aren’t worth the trouble!” “Yoko Ono is an artist!”

How much vexation might have been avoided if they had just raised a hand and said, with a well-earned eye-roll: “OK Boomer.”

by Andrew Ferguson, The Atlantic | Read more:
Image: New Yorker
[ed. Same as it ever was...]

The Dark Psychology of Social Networks


The Dark Psychology of Social Networks (The Atlantic)
Image: Mark Pernice

Thursday, November 28, 2019

The True Story of ‘The Irishman’

This article contains spoilers for the events depicted in “The Irishman.”

“The first words Jimmy ever spoke to me were, ‘I heard you paint houses,’” said the man now known as “The Irishman” shortly before his death.

The man was Frank Sheeran, and besides being an Irishman, he was also a bagman and hit man for the mob. Jimmy was James Riddle Hoffa, the Teamsters union president whose 1975 disappearance has never been solved, and the paint was not paint at all.

“The paint is the blood that supposedly gets on the floor when you shoot somebody,” Sheeran helpfully explained in the book “I Heard You Paint Houses” (2004), written by a lawyer and former prosecutor, Charles Brandt, based on deathbed interviews with Sheeran and released posthumously.

With the long-awaited arrival of the Martin Scorsese drama “The Irishman” on Netflix on Wednesday, it’s a good time to explain who’s who in the crowded story and to try to answer a question Sheeran himself asks in the film:

“How the hell did this whole thing start?”

The book’s account of Hoffa’s demise has been challenged by experts on the mob and Hoffa, and by journalists who have written about the case. It has been speculated that Sheeran enlarged his role for the sake of a last payday for his family, although most agree that Sheeran’s telling of the buildup to the climax is credible.

Robert De Niro plays Sheeran, a World War II veteran working as a truck driver in the 1960s, with a side job diverting the beef and chicken he was supposed to be delivering and selling it directly to restaurants. When his truck breaks down at a gas station in Pennsylvania, he is approached by a stranger named Russell Bufalino (Joe Pesci), who knows his way around an engine enough to get it running again.

The real Bufalino, born in Sicily, kept a low profile in Kingston, Pa., and although frequently charged with crimes, “has never been convicted of anything but traffic offenses,” according to a 1973 article following an arrest. He was once deported, but when his native Italy refused to accept him, he was allowed to stay in the United States.

He was perhaps best known as an organizer of what became known as the Apalachin Conference in 1957, when leaders from several Mafia families gathered in a rural home in upstate New York to hash out disagreements. State troopers, suspicious of the sudden activity in the area, raided the home. The incident was a blow to the mob, putting the Mafia on the radar of law enforcement and the F.B.I. director J. Edgar Hoover in particular. But Bufalino rose in the ranks in the years that followed.

Players in the world of Philadelphia organized crime, a less glamorized lot than their New York City counterparts, were known to hang out in the Friendly Lounge, described in later years as something like college for young mobsters. Its owner, known by the nickname Skinny Razor (played by Bobby Cannavale in the film), was like a mentor to the up-and-comers, journalists later wrote. Another regular face in the neighborhood was Angelo Bruno (Harvey Keitel), a powerful boss of a Pennsylvania and southern New Jersey crime family. (Keitel’s Bruno, known in life as the “Docile Don” for his low-key demeanor, sees relatively little screen time in a story more interested in the middlemen.)

In his prime as shown in the film, Hoffa (Al Pacino) was a larger-than-life leader of the International Brotherhood of Teamsters, the country’s most powerful union, and one with ties to major Mafia families and bosses. These were not the idealistic, socialist unions of Woody Guthrie songs. In those midcentury years, the Teamsters and other unions carried out bombings, murders, arsons and all manner of violent crime to maintain and grow power. Hoffa, like Bufalino, took Sheeran under his wing, putting him to work.

Sheeran’s relationship with Bufalino and Hoffa introduces several of the film’s memorable supporting characters, and they’re all drawn from real life.

by Michael Wilson, NY Times | Read more:
Image: Netflix
[ed. Now available on Netflix. The acting, directing, cinematography, everything... really first rate. One of Scorsese's best. I only wish they hadn't moved so quickly through the mob's connection/influence vis-à-vis the elder (Joseph) and younger Kennedys, and connections to Cuba. See also: Michael Wood Reviews: The Irishman (LRB).]

Edmund Dulac, Birth of the Pearl, from The Kingdom of the Pearl