Monday, December 23, 2024

Cutting Government Is Easy... If You Go After McKinsey

"What the extreme socialist favors because of his creed,” wrote New Deal Antitrust Division chief Robert Jackson in 1938, “the extreme capitalist favors, because of his greed."

Jackson was FDR’s favorite lawyer, and he later ended up on the Supreme Court, after a stint leading prosecutions of Nazi officials at Nuremberg. And his fear, like that of most populists for hundreds of years, was the conjoined power of the state and corporations, the centralization of control in the hands of distant masters. Whether done for well-meaning reasons or for lusty greed was less important than the concentration itself.

Suspicion of Big Government and Big Business, or their combination, is about this singular dynamic. As Jackson’s heir, current antitrust chief Jonathan Kanter echoed this view in his recent farewell speech: “When companies larger, wealthier and more powerful than most world governments threaten individual liberty with coercive private taxation and regulation,” he said, “it threatens our way of life.” Whether it’s government or business engaged in tyranny, the moral consequence of a malevolent governing power is the same.

In many ways, the anger that Donald Trump is bringing to bear, with the President-elect asking Elon Musk to cut $2 trillion of government spending through the “Department of Government Efficiency,” is taking advantage of this fear. Now, the reason I’m writing about DOGE is that a few days ago, Congress was on the verge of passing legislation to fund the government and, as part of that legislation, to restrict pharmacy benefit managers and block junk fees. Musk and Trump, however, alleged government waste and thwarted these disruptive new laws.

This move, though it will be framed as shaking things up, is just a rehash of what we’ve seen for decades. One of the games we’ve seen from conservatives since the “New Right” elections of 1978, and through the Tea Party, and now under Trump this week, is masquerading as a disruptor, while enacting standard pro-Wall Street policy. But I think a reasonable question is as follows. What would it actually look like to take on this fusion of corporate and governing power that Americans despise?

After all, while the failure to pass populist laws was disappointing, it is worth considering why Americans think there’s massive waste and tyranny in the institutions that govern them, and why they elected someone to cut through it.

So here’s the simple way to slice through the bloat we see all around us. It’s not easy, because it would require a genuine commitment to taking on the enormously powerful people who benefit from the status quo. But it is simple to understand, and in explaining it, I hope it will also explain what Americans really want done by their new leaders.

The Collusion Tax

There are plenty of studies showing massive government waste, but it’s not in the Federal workforce. It’s in procurement, where the government buys everything from pencils to software to nuclear submarines from the private sector. The government spends about $750 billion a year on contracts. How much of this money is wasted?

It turns out, the answer is a lot.

The Organization for Economic Co-operation and Development (OECD) points out that about a fifth of it is stolen by contractors through bid rigging, the so-called “collusion tax.” Collusion is when contractors get together in groups and conspire on their bids so that the government overpays for goods and services. According to the OECD, “The elimination of bid rigging could help reduce procurement prices by 20% or more.”

If you take $750 billion, just in Federal procurement spending, that’s $150 billion a year of pure overpayment, due to this one form of crime. There are other boring reports saying something similar. Earlier this year, for instance, the Government Accountability Office published a report on fraud, showing the government loses between $233 billion and $521 billion to fraud.
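
[ed. The arithmetic is easy to check. Here's a minimal back-of-envelope sketch in Python; the dollar figures are the ones cited above, nothing else is assumed:]

```python
# Back-of-envelope check of the figures cited above.
# All numbers come from the article (OECD and GAO as quoted);
# this is just the arithmetic, not an independent estimate.

procurement = 750e9      # annual federal contract spending
collusion_rate = 0.20    # OECD: bid rigging inflates prices by ~20%

overpayment = procurement * collusion_rate
print(f"Collusion-tax overpayment: ${overpayment / 1e9:.0f} billion/year")
# -> Collusion-tax overpayment: $150 billion/year

gao_low, gao_high = 233e9, 521e9  # GAO's broader fraud-loss estimate
print(f"GAO fraud estimate: ${gao_low / 1e9:.0f}-{gao_high / 1e9:.0f} billion")
```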

That’s a lot of money. There are plenty of ways to get at it, two of them being to increase penalties against fraud and collusion, and to shift law enforcement resources from silly things like disinformation monitoring to investigating contractor fraud. I don’t think that’ll happen; the Republicans and Donald Trump are dead set on defunding the tax cops, aka the IRS, and that’s a key agency in taking on this kind of fraud. But they could.

Still, there’s an even easier approach to taking on the problem. Go after McKinsey and management consultants throughout the Federal government.

5% of Government Spending Goes to Consultants

Why take on management consultants? Well, for starters, the government spends far too much on people giving it advice. In its 2024 budget, the Biden administration requested $70 billion for management consulting, aka “professional services,” which is 5% of all discretionary spending. The Defense Department alone asked for $32.9 billion. So just cutting all management consulting would be a big chunk of savings.

by Matt Stoller, BIG |  Read more:
Image: uncredited

[ed. "The total payroll of the federal government is about $110 billion a year [ed. Personnel]. Federal government spending was $6.1 trillion. You cannot meaningfully shrink the federal government by firing “unelected bureaucrats.”

What is money spent on? Medicare, Medicaid and Social Security are 45%. Defense and debt payments are 28%. The VA, education and transportation are 15%. SNAP, UI, child nutrition, and the earned income tax credit are 7.5%. The remainder is stuff like military pensions. (...)

What this means is that if you want to save money, you need to be talking about *how to provide important benefits more efficiently.* How can we provide similar quality healthcare at lower cost? NOT, “we are going to get rid of a bunch of stuff no one wants in the first place.” 
~ Jason Abaluck, Professor of Economics/Yale University. See: here and here.]

[UPDATE]: [ed. From the silver lining dept, here's a DOGE benefit I hadn't thought about but that might actually be useful. Employing AI to analyze regulatory intent vs. regulatory implementation vs. regulatory evolution/application vs. regulatory law definitions/challenges/impediments. From Jennifer Pahlka's Eating Policy substack -
Um, Congress, you might want to take a look at this:
***
"Last Friday’s post got a lot of very insightful comments. In response to one comment about the potential of using AI to break through the wall of complex law and policy, darulharb writes:
That is, I suspect, exactly what they're doing right now [meaning DOGE]: spinning up the A.I. systems that will be tasked with taking a comprehensive and detailed look at both the legal and regulatory structure, and the expenditures. This is something that previous reform commissions never had the technical capability to attempt before, because the technology didn't exist. The most shocking thing I believe we'll see greater public awareness of because of DOGE is the degree to which even Congress doesn't know what is going on. 
[ed. they pass laws, agencies implement them, then nobody follows up on effectiveness (or damage)...until maybe decades later.]
This is likely spot on. The reality is that in many domains, the regulatory and spending complexity is such that it’s very hard for anyone to know what’s going on. You might think it’s Congress’s job to understand how the laws they’ve written have been operationalized, but that’s one of their chief complaints — that they don’t really understand what happens within the agency and they don’t always think it's consistent with their intentions. And the agencies themselves are dealing with the accretive nature of what comes down from Congress — new laws naturally reference and amend old laws, creating one confusing web of language. Then there’s the web of the regulations previous staff have written, not to mention the policies, forms, and processes that have been born from those regulations that seem to carry the weight of the law but are really somewhat arbitrary expressions of one way they could be operationalized. (...)

It may also significantly curtail their ability to understand their own work, both legislative and oversight, and act quickly, right as a potentially adversarial actor is emerging. Most commentary on DOGE has pitted it against the agencies it has vowed to drastically cut, but Congress is going to want to have a say in what they propose to do (...). In other words, if the budget passed by Congress says we’re spending this, we’re spending this, DOGE or Trump be damned. I have no idea how that is going to play out legally, but I do suspect that if DOGE has the tools my commenter thinks they probably already have at their disposal, one party in this brewing fight is going to have some significant advantages. Marci’s pacing problem frames the fast pace of change in society at large against a slow pace in government, but we may be about to see a massive pacing problem — a dramatic speed asymmetry — within government itself.] 

Lyle Mays


[ed. Lyle. Gone too soon. Can you imagine someone with his gifts deciding to become a software development manager? People (and their motivations) are complex. Listen to him here (especially around the 5:00 mark).]

The Eddie, 2024

Barry Sweet has a front seat to the mass of humanity that descends on the North Shore of Oahu, Hawaii, for the Eddie Aikau Big Wave Invitational.

“If you watch from early morning until early afternoon, it’s like a pilgrimage,” he said of the crowds for the surf competition, better known as the Eddie.

Alongside his wife, Janelle, and her sister, Deann Sakuoka, he watches from the vantage point of Pupukea Grill, a food truck run by his family that is parked off the two-lane Kamehameha Highway, a 10-minute walk from Waimea Bay and one of the few restaurants within miles.

When the Eddie is called some 48 hours before the contest is set to begin, a prestigious list of invitees — 45 competitors and 25 alternates — begins scrambling. Surfers from Australia, Brazil, France, Italy, South Africa and Tahiti had a host of logistics to work out to make it to Waimea Bay in time on Sunday.

They are joined by tens of thousands of spectators who crowd a small strip of beach and the surrounding cliffs, many camping out as soon as the event is called. Kamehameha Highway, which hugs the bay, is clogged long before the sun comes up. It is the only road to and from the bay.

Like many big-wave competitions, the Eddie has a holding period that lasts for a few months, between mid-December and mid-March, meaning it could run at any point in that period if the conditions are right. But unlike most such events, the Eddie rarely happens, giving rise to the slogan “the bay calls the day.”

The face of the wave, the part of the wave that can be surfed, must reach heights of 40 feet, or the size of a four-story building. That’s unusual, and it’s rarer for those conditions to sustain a full day of competition.

This year’s conditions were created by a big storm that formed in the west Pacific Ocean, east of Japan, late on Thursday, said Kevin Wallis, director of forecasting at the surf forecasting website Surfline. It’s a Goldilocks-type scenario: If the storm had been too far away, the waves would have been too small. If the storm came too close, it could have brought bad wind and weather, he said.

The event was last held in January 2023, weeks after a false start sent dozens scrambling to the North Shore of Oahu before the competition was canceled because of changing conditions. In 2016, it was called off the morning of the event because of a swell change, and was eventually held weeks later.

The big-wave surfer Felicity Palmateer decided to begin her long journey from Perth, Australia, before the event was even called. She has long chased unpredictable swells, but she didn’t want to risk missing this event.

“It’s so much more than a surf contest,” she said.

It’s a sentiment echoed by surfers, like Ms. Palmateer, who are stepping into their first Eddie, and by veterans of the event like Peter Mel, a big-wave surfer who will be surfing his ninth Eddie, a remarkable accomplishment considering this is only the 11th time the contest is running.

“It’s a celebration of not just surfing itself but of the culture, of life-saving, of watermen, and the heritage of Hawaii,” Mr. Mel said.

The event was founded in 1984 to honor Eddie Aikau, a surfer from Hawaii and the first lifeguard on the North Shore of Oahu. He was revered as a surfer who would paddle into waves no one else would attempt, and he saved more than 500 people as a lifeguard.

In 1978, Mr. Aikau joined the crew of a canoe voyage retracing the ancient Polynesian migration route between Hawaii and Tahiti. The vessel, the Hokulea, capsized off the coast of Lanai hours after setting sail. Mr. Aikau took his surfboard and paddled toward shore to get help. The rest of the crew was rescued, but Mr. Aikau was never seen again.

Being invited to the event is a sign of respect and recognition from the Aikau family, and for many big-wave surfers, it’s the pinnacle of their careers. Even if the event doesn’t run, an Aikau nod is equivalent to a trip to the Super Bowl.

by Talya Minsberg, NY Times |  Read more:
Image: Brian Bielmann/Agence France-Presse/Getty Images
[ed. See also (for a good sense of the vibe): Landon McNamara Wins the 2024 Eddie Aikau Big Wave Invitational (Yahoo News).]

John Steinbeck On Helicopter Pilots

On January 7, 1967, John Steinbeck was in Pleiku, where he boarded a UH-1 Huey helicopter with D Troop, 1st Squadron, 10th Cavalry. He wrote the following about the helicopter pilots:

“I wish I could tell you about these pilots. They make me sick with envy. They ride their vehicles the way a man controls a fine, well-trained quarter horse. They weave along stream beds, rise like swallows to clear trees, they turn and twist and dip like swifts in the evening. I watch their hands and feet on the controls, the delicacy of the coordination reminds me of the sure and seeming slow hands of (Pablo) Casals on the cello. They are truly musicians’ hands and they play their controls like music and they dance them like ballerinas and they make me jealous because I want so much to do it. Remember your child night dream of perfect flight free and wonderful? It’s like that, and sadly I know I never can. My hands are too old and forgetful to take orders from the command center, which speaks of updrafts and side winds, of drift and shift, or ground fire indicated by a tiny puff or flash, or a hit, and all these commands must be obeyed by the musicians’ hands instantly and automatically. I must take my longing out in admiration and the joy of seeing it.”

[ed. I've flown hundreds of hours in helicopters and while most pilots have been extremely competent, a few were exceptionally so (and I always requested them if I could). Man and machine perfectly in sync. For example, landing on a rocky outcropping barely larger than the vessel itself, blades a hand's length from sheer rock wall; "skiing" down a miles-long glacier, 10 ft. above the undulating ice surface, going a hundred miles an hour; half-landing on cliffs, with skids balanced and hanging on the edge, engine powered up to keep from tumbling over backwards (that one was close). Just the sheer joy of feeling like a bird (or a bumble bee). And seeing some of the most beautiful and remote country at variable altitudes and pace (like an ice field stretching from horizon to horizon). Never got complacent about the exhilaration of it all, and at times wondered (like Mr. Steinbeck) if it would be possible to learn to fly myself. But it's not a cheap (or even relatively affordable) undertaking and most pilots I knew came up through the military.]

Regulatory Capture

Cory Doctorow’s Vision for a Just Tech Revolution (Jacobin)

[ed. Covers a lot of territory, so here's just one example - regulatory capture. Obviously, many industries favor regulations if they can benefit from them, as most big players/monopolies do, especially if they're barriers to competition. Well worth a read for an overview of some technological issues we're facing today, not necessarily about what might be coming in the future (e.g., AI). Also, to keep this from turning into another tech blog (yuk!), hopefully this'll be the last post of this type for a while. Lots of things churning at the moment.]

Regulatory Capture

I think the first thing we need to understand is the relationship between market concentration and regulatory capture. The term regulatory capture has got a funny history. It comes out of some of the most unhinged elements of neoliberal economics. It was coined by public choice theorists who operate in a world of perfectly uniform cows that are perfectly spherical and have uniform density and move around on frictionless surfaces that never make any contact with the world.

According to public choice theory, because the state is the most powerful actor in most polities, successful firms will be those that are the most determined actors in seeking to usurp that power. They will go to incredible lengths to seize and harness that power, which other market participants won’t be able to match in terms of force and motivation. And so they’ll always win. And then they’ll use the power of the state to exclude new market entrants who would otherwise offer consumers a better deal. And so, we should just get rid of the state, right? The only way to prevent regulatory capture is to have no regulation. That’s the conclusion that the public choice people come to. But very clearly, this is not true.

When dining at a restaurant, we can reasonably expect not to fall victim to food poisoning, and the structural components supporting the roof above us usually remain intact. This suggests that we possess the knowledge and expertise to determine the appropriate types of steel, alloys, and construction methods for constructing a structurally sound building. Similarly, our antilock braking systems don’t routinely experience catastrophic failures when we apply the brakes, ensuring our safety while driving. In other words, we know how to make good regulation. It’s not some lost art like embalming pharaohs.

Regulatory capture primarily occurs in concentrated markets characterized by a small number of firms that, instead of competing, become very cozy with each other. And that means that they can extract lots of money. They have giant margins, and so they have lots of excess capital that they can spend on lobbying. And then — this is very crucial — because they’re few in number, they can agree on what those regulations should be. So, they don’t sabotage each other. They have class solidarity. They solve the collective action problem that they would otherwise be plagued by.

As the tech sector has become increasingly concentrated — largely due to the abandonment of antitrust enforcement four decades ago — its tendency to monopoly has become more and more pronounced. At the same time, every presidential administration has become more lenient about the tech sector’s monopoly (until the current one, which has made a sharp reversal of it, which is very important). So, because we let them become very concentrated, they captured the regulators, and they were able to make policies that do two things.

The first is they were able to forestall policy that would prevent them from exploiting the wonderful flexibility of digital technologies to do bad things to us. They’ve effectively exempted themselves from labor, consumer protection, and privacy law. And we’re all familiar with this, right? “It’s not a labor violation if you do it with an app.” On the other hand, they have managed to stop any of us or any new market entrant — whether that’s workers or co-ops or nonprofits or startups or large firms — from using that same digital flexibility. They have managed to protect themselves from all the gimmicks that the tech sector is able to use to abuse us as consumers, as workers, and as citizens.

It’s a perfect storm: an intensely concentrated tech sector and regulatory capture. Because the tech sector captures its regulators, it is able to enjoy wide latitude in using the flexibility of digital tools to do us harm. And because it’s captured those regulators, it’s able to prevent anyone — be they fellow members of the ruling class, capitalists, or would-be feudalists, but also workers, consumers, and activists — from using those same flexibilities to resist them. That’s the airtight bubble they’ve built. (...)

Deb Chachra is a leftist material scientist. She has a book coming out in mid-November called How Infrastructure Works. And it’s a very good book about what infrastructure means, because infrastructure never amortizes over the life of the people who build it. That’s kind of one of the defining characteristics of infrastructure. Infrastructure is always an act of solidarity with people who aren’t born yet, and infrastructure always requires planning that goes beyond what markets can accomplish. 

by David Moscrop and Cory Doctorow, Jacobin |  Read more:
Image: Peter Dazeley/Getty Images

The Line

The emergence of technologically-created artificial entities marks a moment where society must defend or redefine "the line" that distinguishes persons and non-persons.

There is a line. It is the line that separates persons — entities with moral and legal rights — from nonpersons, things, animals, machines — stuff we can buy, sell, or destroy. In moral and legal terms, it is the line between subject and object. If I have a chicken, I can sell it, eat it, or dress it in Napoleonic finery. It is, after all, my chicken. Even if eating meat were banned for moral reasons, no one would think the chicken should be able to vote or own property. It is not a person. If I choose to turn off Apple’s digital assistant Siri, we would laugh if “she” pleaded to be allowed to remain active on my phone. The reason her responses are “cute” is because they sound like something a person would say, but we know they come from a machine. We live our lives under the assumption of this line. Even to say “we” is to conjure it up. But how do we know, and how should we choose, what is inside and what is outside?

This book is about that line and the challenges that this century will bring to it. I hope to convince you of three things. First, our culture, morality, and law will have to face new challenges to what it means to be human, or to be a legal person — and those two categories are not the same. A variety of synthetic entities ranging from artificial intelligences to genetically engineered human-animal hybrids or chimeras are going to force us to confront what our criteria for humanity and also for legal personhood are and should be.

Second, we have not thought adequately about the issue, either individually or as a culture. As you sit there right now, can you explain to me which has the better claim to humanity or personhood: a thoughtful, brilliant, apparently self-aware computer or a chimp-human hybrid with a large amount of human DNA? Are you even sure of your own views, let alone what society will decide?

Third, the debate will not play out in the way that you expect. We already have “artificial persons” with legal rights — they are called corporations. You probably have a view on whether that is a good thing. Is it relevant here? And what about those who claim that life begins at conception? Will the pro-life movement embrace or reject an Artificial Intelligence or a genetic hybrid? Will your religious beliefs be a better predictor of your opinions, or will the amount of science fiction you have watched or read?

For all of our alarms, excursions, and moral panics about artificial intelligence and genetic engineering, we have devoted surprisingly little time to thinking about the possible personhood of the new entities this century will bring us. We agonize about the effect of artificial intelligence on employment, or the threat that our creations will destroy us. But what about their potential claims to be inside the line, to be “us,” not machines or animals but, if not humans, then at least persons, deserving all the moral and legal respect that any other person has by virtue of their status? Our prior history in failing to recognize the humanity and legal personhood of members of our own species does not exactly fill one with optimism about our ability to answer the question well off-the-cuff.

In the 1780s, the British Society for the Abolition of Slavery had as its seal a picture of a kneeling slave in chains, surrounded by the words “Am I not a man and a brother?” Its message was simple and powerful. Here I am, a person, and yet you treat me as a thing, as property, as an animal, as something to be bought, sold, and bent to your will. What do we say when the genetic hybrid or the computer-based intelligence asks us the very same question? Am I not a man — legally, a person — and a brother? And yet what if this burst of sympathy takes us in exactly the wrong direction, leading us to anthropomorphize a clever chatbot, or think a genetically engineered mouse is human because it has large amounts of human DNA? What if we empathetically enfranchise Artificial Intelligences who proceed to destroy our species? Imagine a malicious, superintelligent computer network, Skynet, interfering in, or running, our elections. It would make us deeply nostalgic for the era when all we had to worry about was Russian hackers.

The questions run deeper. Are we wrong even to discuss the subject, let alone to make comparisons to prior examples of denying legal personality to humans? Some believe that the invocation of “robot rights” is, at best, a distraction from real issues of injustice, mere “First World philosophical musings, too disengaged from actual affairs of humans in the real world.” Others go further, arguing that only human interests are important and even provocatively claiming that we should treat AI and robots as our “slaves.” In this view, extending legal and moral personality to AI should be judged solely on the effects it would have on the human species, and the costs outweigh the benefits. 

If you find yourself nodding along sagely, remember that there are clever moral philosophers lurking in the bushes who would tell you to replace “Artificial Intelligence” with “slaves,” the phrase “human species” with “white race,” and think about what it took to pass the Thirteenth, Fourteenth, and Fifteenth Amendments to the Constitution. During those debates there were actually people who argued that the idea of extending legal and moral personality to slaves should be judged solely on the effects it would have on the white race and the costs outweighed the benefits. “What’s in it for us?” is not always a compelling ethical position. (Ayn Rand might have disagreed. I find myself unmoved by that fact.) From this point of view, moral arguments about personality and consciousness cannot be neatly confined by the species line; indeed they are a logical extension of the movements defending both the personality and the rights of marginalized humans. Sohail Inayatullah describes the ridicule he faced from Pakistani colleagues after he raised the possibility of “robot rights” and quotes the legal scholar Christopher Stone, author of the famous environmental work Should Trees Have Standing?, in his defense: “[T]hroughout legal history, each successive extension of rights to some new entity has been theretofore, a bit unthinkable. We are inclined to suppose the rightlessness of rightless ‘things’ to be a decree of Nature, not a legal convention acting in support of the status quo.”

As the debate unfolds, people are going to make analogies and comparisons to prior struggles for justice and, because analogies are analogies, some are going to see those analogies as astoundingly disrespectful and demeaning. “How dare you invoke noble X in support of your trivial moral claim!” Others will see the current moment as the next step on the march that noble X personified. I feel confident predicting this will happen— because it has. The struggle with our moral future will also be a struggle about the correct meaning to draw from our moral past. It already is. 

In this book, I will lay out two broad ways in which the personhood question is likely to be presented. Crudely speaking, you could describe them as empathy and efficiency, or moral reasoning and administrative convenience. 

The first side of the debate will revolve around the dialectic between our empathy and our moral reasoning. As our experiences of interaction with smarter machines or transgenic species prompt us to wonder about the line, we will question our moral assessments. We will consult our syllogisms about the definition of “humanity” and the qualifications for personhood — be they based on simple species-membership or on the cognitive capacities that are said to set humans apart, morally speaking. You will listen to the quirky, sometimes melancholy, sometimes funny responses from the LaMDA-derived emotional support bot that keeps your grandmother company, or you will look at the genetic makeup of some newly engineered human-animal chimera and begin to wonder: “Is this conscious? Is it human? Should it be recognized as a person? Am I acting rightly toward it?”

The second side of the debate will have a very different character. Here the analogy is to corporate personhood. We did not give corporations legal personhood and constitutional rights because we saw the essential humanity, the moral potential, behind their web of contracts. We did it because corporate personality was useful. It was a way of aligning legal rights and economic activity. We wanted corporations to be able to make contracts, to get and give loans, to sue and be sued. Personality was a useful legal fiction, a social construct the contours of which, even now, we heatedly debate. Will the same be true for Artificial Intelligence? Will we recognize its personality so we have an entity to sue when the self-driving car goes off the road or a robotic Jeeves to make our contracts and pay our bills? And is that approach also possible with the transgenic species, engineered to serve? Or will the debate focus instead on what makes us human and whether we can recognize those concepts beyond the species line and thus force us to redefine legal personhood? The answer, surely, is both.

The book will sometimes deal with moral theory and constitutional or human rights. But this is not the clean-room vision of history in which all debates begin from first principles, and it is directed beyond an academic audience. I want to understand how we will discuss these issues as well as how we should. We do not start from a blank canvas, but in medias res. Our books and movies, from Erewhon to Blade Runner, our political fights, our histories of emancipation and resistance, our evolving technologies, our views on everything from animal rights to corporate PACs, all of these are grist to my mill. The best way to explain what I mean is to show you. Here are the stories of two imaginary entities. Today, they are fictional. Tomorrow? That is the point of the book.

by James Boyle, The Line (full book) |  Read more:
Image: The Line
[ed. This was also a central theme in Isaac Asimov's I, Robot series with the robot R. Daneel Olivaw, who was almost indistinguishable from humans. See also: James Boyle's new book The Line explores how AI is challenging our concepts of personhood (Duke Law):]

"A longtime proponent of open access, Boyle, the William Neal Reynolds Distinguished Professor of Law, is a founding board member of Creative Commons, an organization launched in 2001 to encourage the free availability of art, scholarship, and cultural materials through licenses that individuals and institutions can attach to their work. Boyle has made The Line accessible to all as a free download under such a license. It is also available in hardcover or digital formats.

In The Line, Boyle explores how technological developments in artificial intelligence challenge our concept of personhood, and of "the line" we believe separates our species from the rest of the world – and that also separates "persons" with legal rights from objects – and discusses the possibility of legal and moral personhood for artificially created entities, and what it might mean for humanity’s concept of itself."

Sunday, December 22, 2024

Time's Up For AI Policy

AI that exceeds human performance in nearly every cognitive domain is almost certain to be built and deployed in the next few years.

AI policy decisions made in the next few months will shape how that AI is governed. The security and safety measures in place for safeguarding that AI will be among the most important in history. Key upcoming milestones include the first acts of the Trump administration, the first acts of the next US Congress, the UK AI bill, and the EU General-Purpose AI Code of Practice.

If there are ways that you can help improve the governance of AI in these and other countries, you should be doing it now or in the next few months, not planning for ways to have an impact several years from now.

The announcement of o3 today makes clear that superhuman coding and math are coming much sooner than many expected, and we have barely begun to think through or prepare for the implications of this (see this thread) – let alone the implications of superhuman legal reasoning, medical reasoning, etc. or the eventual availability of automated employees that can quickly learn to perform nearly any job doable on a computer.

There is no secret insight that frontier AI companies have which explains why people who work there are so bullish about AI capabilities improving rapidly in the next few years. The evidence is now all in the open. It may be harder for outsiders to fully process this truth without living it day in and day out, as frontier company employees do, but you have to try anyway, since everyone’s future depends on a shared understanding of this new reality.

It is difficult to conclusively demonstrate any of these conclusions one way or the other, so I don’t have an airtight argument, and I expect debate to continue through and beyond the point of cross-domain superhuman AI. But I want to share the resources, intuitions, and arguments I find personally compelling in the hopes of nudging the conversation forward a tiny bit.

This blog post is intended as a starter kit for what some call “feeling the AGI,” which I defined previously as:
  • Refusing to forget how wild it is that AI capabilities are what they are
  • Recognizing that there is much further to go, and no obvious "human-level" ceiling
  • Taking seriously one's moral obligation to shape the outcomes of AGI as positively as one can
(I will focus on the first two since the third follows naturally from agreement on the first two and is less contested, though of course what specifically you can do about it depends on your personal situation.)

How far we’ve come and how it happened

It has not always been the case that AI systems could understand and generate language fluently – even just for chit chat, let alone for solving complex problems in physics, biology, economics, law, medicine, etc. Likewise for image understanding and generation, audio understanding and generation, etc.

This all happened because some companies (building on ideas from academia) bet big on scaling up deep learning, i.e. making a big artificial neural network (basically just a bunch of numbers that serve as “knobs” to fiddle with), and then tweaking those knobs a little bit each time it gets something right or wrong.

Language models in particular first read a bunch of text from the Internet (tweaking their knobs in order to get better and better at generating “text that looks like the Internet”), and then they get feedback from humans (or, increasingly, from AI) on how well they’re doing at solving real tasks (allowing more tweaking of the knobs based on experience). In the process, they become useful general-purpose assistants.

It turns out that learning to mimic the Internet teaches you a ton about grammar, syntax, facts, writing style, humor, reasoning, etc., and that with enough trial and error, it’s possible for AI systems to outperform humans at any well-defined task. (...)
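
[ed. If the "knobs" metaphor feels too abstract, here's a toy illustration in Python: a character-level bigram model whose knobs (parameters) get nudged a little on each character it reads, until it can generate text that looks like its training text. Purely a sketch, and laughably small; real frontier models are transformers with billions of parameters trained on vastly more data, plus feedback on real tasks, but the tweak-the-knobs loop is the same basic idea:]

```python
import numpy as np

# Toy "knobs": one score for each (current char -> next char) pair.
text = "the model reads the text and learns to mimic the text. " * 4
chars = sorted(set(text))
ix = {c: i for i, c in enumerate(chars)}
V = len(chars)
logits = np.zeros((V, V))  # the knobs, all starting at zero
lr = 0.5                   # how hard to tweak the knobs on each example

for _ in range(50):  # read the text over and over
    for a, b in zip(text, text[1:]):
        i, j = ix[a], ix[b]
        p = np.exp(logits[i] - logits[i].max())
        p /= p.sum()                # model's guess at the next character
        grad = p.copy()
        grad[j] -= 1.0              # cross-entropy gradient (guess - truth)
        logits[i] -= lr * grad      # tweak the knobs toward the observed char

# Generate "text that looks like the training text"
rng = np.random.default_rng(0)
c, out = "t", ["t"]
for _ in range(60):
    p = np.exp(logits[ix[c]] - logits[ix[c]].max())
    p /= p.sum()
    c = chars[rng.choice(V, p=p)]
    out.append(c)
print("".join(out))
```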

The fact that this all works so well — and so much more easily and quickly than many expected — is easily one of the biggest and most important discoveries in human history, and still not fully appreciated.

Here are some videos that explain how we got here, and some other key things to know about the current trajectory of AI.  [ed. ...yikes]

Here are some other long reads on related topics. As with the videos, I don’t endorse all of the claims in all of these references, but in the aggregate I hope they give you some 80/20 version of what people at the leading companies know and believe, though I also think that regularly using AI systems yourself (particularly on really hard questions) is critical in order to build up an intuition for what AI is capable of at a given time, and how that is changing rapidly over time.

There is no wall and there is no ceiling

There is a lot of “gas left in the tank” of AI’s social impacts even without further improvements in capabilities — but those improvements are coming. (...)

Note that it is not just researchers but also the CEOs of these companies who are saying that this rate of progress will continue (or accelerate). I know some people think that this is hype, but please, please trust me — it’s not.

We will not run out of ideas, chips, or energy unless there’s a war over AI or some catastrophic incident that causes a dramatic government crackdown on AI. By default we maybe would have run out of energy but it seems like the Trump administration and Congress are going to make sure that doesn’t happen. We’re much more likely to run out of time to prepare.

by Miles Brundage, Miles's Substack |  Read more:
Images: uncredited
[ed. Can't help but wonder how my kids', and especially my grandkids', lives will go. See also: Why I’m Leaving OpenAI and What I’m Doing Next (MS):]

Who are you/what did you do at OpenAI?

Until the end of day this Friday, I’m a researcher and manager at OpenAI. I have been here for over six years, which is pretty long by OpenAI standards (it has grown a lot over those six years!). I started as a research scientist on the Policy team, then became Head of Policy Research, and am currently Senior Advisor for AGI Readiness. Before that I was in academia, getting my PhD in Human and Social Dimensions of Science and Technology from Arizona State University, and then as a post-doc at Oxford, and I worked for a bit in government at the US Department of Energy.

The teams I’ve led (Policy Research and then AGI Readiness) have, in my view, done a lot of really important work shaping OpenAI’s deployment practices, e.g., starting our external red teaming program and driving the first several OpenAI system cards, and publishing a lot of influential work on topics such as the societal implications of language models and AI agents, frontier AI regulation, compute governance, etc.

I’m incredibly grateful for the time I’ve been at OpenAI, and deeply appreciate my managers over the years for trusting me with increasing responsibilities, the dozens of people I’ve had the honor of managing and from whom I learned so much, and the countless brilliant colleagues I’ve worked with on a range of teams who made working at OpenAI such a fascinating and rewarding experience.

Why are you leaving?


I decided that I want to impact and influence AI's development from outside the industry rather than inside. There are several considerations pointing to that conclusion:
  • The opportunity costs have become very high: I don’t have time to work on various research topics that I think are important, and in some cases I think they’d be more impactful if I worked on them outside of industry. OpenAI is now so high-profile, and its outputs reviewed from so many different angles, that it’s hard for me to publish on all the topics that are important to me. To be clear, while I wouldn’t say I’ve always agreed with OpenAI’s stance on publication review, I do think it’s reasonable for there to be some publishing constraints in industry (and I have helped write several iterations of OpenAI’s policies), but for me the constraints have become too much.
  • I want to be less biased: It is difficult to be impartial about an organization when you are a part of it and work closely with people there everyday, and people are right to question policy ideas coming from industry given financial conflicts of interest. I have tried to be as impartial as I can in my analysis, but I’m sure there has been some bias, and certainly working at OpenAI affects how people perceive my statements as well as those from others in industry. I think it’s critical to have more industry-independent voices in the policy conversation than there are today, and I plan to be one of them.
  • I’ve done much of what I set out to do at OpenAI: Since starting my latest role as Senior Advisor for AGI Readiness, I’ve begun to think more explicitly about two kinds of AGI readiness — OpenAI’s readiness to steward increasingly powerful AI capabilities, and the world’s readiness to effectively manage those capabilities (including via regulating OpenAI and other companies). On the former, I’ve already told executives and the board (the audience of my advice) a fair amount about what I think OpenAI needs to do and what the gaps are, and on the latter, I think I can be more effective externally.
It’s hard to say which of the bullets above is most important and they’re related in various ways, but each played some role in my decision.

So how are OpenAI and the world doing on AGI readiness?

In short, neither OpenAI nor any other frontier lab is ready, and the world is also not ready.

To be clear, I don’t think this is a controversial statement among OpenAI’s leadership, and notably, that’s a different question from whether the company and the world are on track to be ready at the relevant time (though I think the gaps remaining are substantial enough that I’ll be working on AI policy for the rest of my career).

Whether the company and the world are on track for AGI readiness is a complex function of how safety and security culture play out over time (for which recent additions to the board are steps in the right direction), how regulation affects organizational incentives, how various facts about AI capabilities and the difficulty of safety play out, and various other factors.

Introducing Act-One

A multi-cam dialogue scene edited together using a single actor and camera set-up to drive the performance of two unique generated characters. ~ Introducing Act-One
Driving performance and generated output for Characters A & B.

At Runway, our mission is to build expressive and controllable tools for artists that can open new avenues for creative expression. Today, we're excited to release Act-One, a new state-of-the-art tool for generating expressive character performances inside Gen-3 Alpha.

Act-One can create compelling animations using video and voice performances as inputs. It represents a significant step forward in using generative models for expressive live action and animated content. 
 ~ Introducing Act-One

[ed. Whoa. We've been speculating about digital actors replacing real ones for a long time. Looks like it's finally here.]

Saturday, December 21, 2024

A Cobrahawk Christmas

[ed. My golfing buddy Matt on bass. Nice version of this old favorite. He told me they just recorded another Christmas tune for this year. Check out some of their other stuff...here and here.]

The Ghosts in the Machine: Spotify's Plot Against Musicians

I first heard about ghost artists in the summer of 2017. At the time, I was new to the music-streaming beat. I had been researching the influence of major labels on Spotify playlists since the previous year, and my first report had just been published. Within a few days, the owner of an independent record label in New York dropped me a line to let me know about a mysterious phenomenon that was “in the air” and of growing concern to those in the indie music scene: Spotify, the rumor had it, was filling its most popular playlists with stock music attributed to pseudonymous musicians—variously called ghost or fake artists—presumably in an effort to reduce its royalty payouts. Some even speculated that Spotify might be making the tracks itself. At a time when playlists created by the company were becoming crucial sources of revenue for independent artists and labels, this was a troubling allegation.

At first, it sounded to me like a conspiracy theory. Surely, I thought, these artists were just DIY hustlers trying to game the system. But the tips kept coming. Over the next few months, I received more notes from readers, musicians, and label owners about the so-called fake-artist issue than about anything else. One digital strategist at an independent record label worried that the problem could soon grow more insidious. “So far it’s happening within a genre that mostly affects artists at labels like the one I work for, or Kranky, or Constellation,” the strategist said, referring to two long-running indie labels.* “But I doubt that it’ll be unique to our corner of the music world for long.”

By July, the story had burst into public view, after a Vulture article resurfaced a year-old item from the trade press claiming that Spotify was filling some of its popular and relaxing mood playlists—such as those for “jazz,” “chill,” and “peaceful piano” music—with cheap fake-artist offerings created by the company. A Spotify spokesperson, in turn, told the music press that these reports were “categorically untrue, full stop”: the company was not creating its own fake-artist tracks. But while Spotify may not have created them, it stopped short of denying that it had added them to its playlists. The spokesperson’s rebuttal only stoked the interest of the media, and by the end of the summer, articles on the matter appeared from NPR and the Guardian, among other outlets. Journalists scrutinized the music of some of the artists they suspected to be fake and speculated about how they had become so popular on Spotify. Before the year was out, the music writer David Turner had used analytics data to illustrate how Spotify’s “Ambient Chill” playlist had largely been wiped of well-known artists like Brian Eno, Bibio, and Jon Hopkins, whose music was replaced by tracks from Epidemic Sound, a Swedish company that offers a subscription-based library of production music—the kind of stock material often used in the background of advertisements, TV programs, and assorted video content.

For years, I referred to the names that would pop up on these playlists simply as “mystery viral artists.” Such artists often had millions of streams on Spotify and pride of place on the company’s own mood-themed playlists, which were compiled by a team of in-house curators. And they often had Spotify’s verified-artist badge. But they were clearly fake. Their “labels” were frequently listed as stock-music companies like Epidemic, and their profiles included generic, possibly AI-generated imagery, often with no artist biographies or links to websites. Google searches came up empty.

In the years following that initial salvo of negative press, other controversies served as useful distractions for Spotify: the company’s 2019 move into podcasting and eventual $250 million deal with Joe Rogan, for example, and its 2020 introduction of Discovery Mode, a program through which musicians or labels accept a lower royalty rate in exchange for algorithmic promotion. The fake-artist saga faded into the background, another of Spotify’s unresolved scandals as the company increasingly came under fire and musicians grew more emboldened to speak out against it with each passing year.

Then, in 2022, an investigation by the Swedish daily Dagens Nyheter revived the allegations. By comparing streaming data against documents retrieved from the Swedish copyright collection society STIM, the newspaper revealed that around twenty songwriters were behind the work of more than five hundred “artists,” and that thousands of their tracks were on Spotify and had been streamed millions of times.

Around this time, I decided to dig into the story of Spotify’s ghost artists in earnest, and the following summer, I made a visit to the DN offices in Sweden. The paper’s technology editor, Linus Larsson, showed me the Spotify page of an artist called Ekfat. Since 2019, a handful of tracks had been released under this moniker, mostly via the stock-music company Firefly Entertainment, and appeared on official Spotify playlists like “Lo-Fi House” and “Chill Instrumental Beats.” One of the tracks had more than three million streams; at the time of this writing, the number has surpassed four million. Larsson was amused by the elaborate artist bio, which he read aloud. It described Ekfat as a classically trained Icelandic beat maker who graduated from the “Reykjavik music conservatory,” joined the “legendary Smekkleysa Lo-Fi Rockers crew” in 2017, and released music only on limited-edition cassettes until 2019. “Completely made up,” Larsson said. “This is probably the most absurd example, because they really tried to make him into the coolest music producer that you can find.”

Besides the journalists at DN, no one in Sweden wanted to talk about the fake artists. In Stockholm, I visited the address listed for one of the ghost labels and knocked on the door—no luck. I met someone who knew a guy who maybe ran one of the production companies, but he didn’t want to talk. A local businessman would reveal only that he worked in the “functional music space,” and clammed up as soon as I told him about my investigation.

Even with the new reporting, there was still much missing from the bigger picture: Why, exactly, were the tracks getting added to these hugely popular Spotify playlists? We knew that the ghost artists were linked to certain production companies, and that those companies were pumping out an exorbitant number of tracks, but what was their relationship to Spotify?

For more than a year, I devoted myself to answering these questions. I spoke with former employees, reviewed internal Spotify records and company Slack messages, and interviewed and corresponded with numerous musicians. What I uncovered was an elaborate internal program. Spotify, I discovered, not only has partnerships with a web of production companies, which, as one former employee put it, provide Spotify with “music we benefited from financially,” but also a team of employees working to seed these tracks on playlists across the platform. In doing so, they are effectively working to grow the percentage of total streams of music that is cheaper for the platform. The program’s name: Perfect Fit Content (PFC). The PFC program raises troubling prospects for working musicians. Some face the possibility of losing out on crucial income by having their tracks passed over for playlist placement or replaced in favor of PFC; others, who record PFC music themselves, must often give up control of certain royalty rights that, if a track becomes popular, could be highly lucrative. But it also raises worrying questions for all of us who listen to music. It puts forth an image of a future in which—as streaming services push music further into the background, and normalize anonymous, low-cost playlist filler—the relationship between listener and artist might be severed completely. (...)

According to a source close to the company, Spotify’s own internal research showed that many users were not coming to the platform to listen to specific artists or albums; they just needed something to serve as a soundtrack for their days, like a study playlist or maybe a dinner soundtrack. In the lean-back listening environment that streaming had helped champion, listeners often weren’t even aware of what song or artist they were hearing. As a result, the thinking seemed to be: Why pay full-price royalties if users were only half listening? It was likely from this reasoning that the Perfect Fit Content program was created.

After at least a year of piloting, PFC was presented to Spotify editors in 2017 as one of the company’s new bets to achieve profitability. According to a former employee, just a few months later, a new column appeared on the dashboard editors used to monitor internal playlists. The dashboard was where editors could view various stats: plays, likes, skip rates, saves. And now, right at the top of the page, editors could see how successfully each playlist embraced “music commissioned to fit a certain playlist/mood with improved margins,” as PFC was described internally.

In a Slack channel dedicated to discussing the ethics of streaming, Spotify’s own employees debated the fairness of the PFC program. “I wonder how much these plays ‘steal’ from actual ‘normal’ artists,” one employee asked. And yet as far as the public was concerned, the company had gone to great lengths to keep the initiative under wraps. Perhaps Spotify understood the stakes—that when it removed real classical, jazz, and ambient artists from popular playlists and replaced them with low-budget stock muzak, it was steamrolling real music cultures, actual traditions within which artists were trying to make a living. Or perhaps the company was aware that this project to cheapen music contradicted so many of the ideals upon which its brand had been built. Spotify had long marketed itself as the ultimate platform for discovery—and who was going to get excited about “discovering” a bunch of stock music? Artists had been sold the idea that streaming was the ultimate meritocracy—that the best would rise to the top because users voted by listening. But the PFC program undermined all this. PFC was not the only way in which Spotify deliberately and covertly manipulated programming to favor content that improved its margins, but it was the most immediately galling. Nor was the problem simply a matter of “authenticity” in music. It was a matter of survival for actual artists, of musicians having the ability to earn a living on one of the largest platforms for music. PFC was irrefutable proof that Spotify rigged its system against musicians who knew their worth.

by Liz Pelly, Harper's |  Read more:
Image: Yoshi Sodeoka


Radicalized

A short story about health care, and desperation.

Just because you’ve decided to die of cancer, that doesn’t stop everyone you know from consuming your last months on this Earth by sending you links to miracle cures. They deleted these and politely told everyone—even their parents—to cut that shit out, but people can’t help themselves.

Lacey’s mom found the link to adoptive cell transfer therapy.

It wasn’t woo: the US National Cancer Institute was part of the NIH, and they had gotten multiple papers on the therapy published in Nature, with huge numbers of citations. Joe and Lacey read the papers as best as they could, and Lacey talked about them with her dying Facebook friends, and they all decided that maybe this was worth a shot.

The way it worked was, they sequenced the genome of your tumor and looked for traits that your own white blood cells could target, then they sorted out your own white blood cells until they found some that targeted those traits, and grew 100 billion or so of those little soldiers in a lab and injected them into you. It was just a way of speeding up the slow and inefficient process by which your own body tuned its own white blood cell population, giving it a computational boost that could outrace even the fastest-mutating tumor.

Joe and Lacey even found a private doc, right there in Phoenix, who’d do the procedure. He had an appointment at Arizona State University, had published some good papers on the procedure himself, and all he needed was $1.5 million from their health insurer.

You know what happened next. Their insurer told Lacey that it was time for her to die now. If she wanted chemo and radiation and whatever, they’d pay it (reluctantly, and with great bureaucratic intransigence), but “experimental” therapies were not covered. Which, you know, OK, who wants to spend $1.5 mil on some charlatan’s miracle-cure juice cleanse or crystal therapy? But adaptive cell transfer wasn’t crystal healing and the NIH wasn’t the local shaman.

They underwent—Joe underwent—a weird transformation after her last call with the supervisor’s supervisor’s supervisor at their health insurer. Lacey had been so good about it all, finding peace and calm and determining to make her death a good death. She’d dragged Joe out of his anger at cancer and back into his love of her and a mutual understanding that they’d make their last days together good ones, for them and for Madison.

But after the insurer turned them down, the rage came back. Maybe the therapy wouldn’t have worked, but it was a chance, and a realistic one, not a desperate one, a real possibility that his daughter would have a mother and that he would have a wife and best friend to grow old with. (...)

There are lots of support forums online and the best ones perform an incredible, nearly magical service for their participants, proving the aphorism that “shared pain is lessened, shared joy is increased,” and making the lives of everyone who contributes to them better.

Fuck Cancer Right In Its Fucking Face was not one of those forums.

Fuck Cancer Right In Its Fucking Face was a forum for very angry people whose loved ones were dying or dead. Some of the denizens of FCRIIFF got better, maybe even partially due to the chance to vent in the forums, but also because they were surrounded by people who loved them and brought them back from the brink, people who shared their grief but had better coping skills.

In a forum for ex-drunks, there’s a big group of elder statespeople who’ve been sober for years and years. They’re a wise, moderating voice, and they are the existence proof of life after addiction. Whenever someone on the forums went on a bender and was drowning in self-recrimination, there was a dried-out elder who could tell a story to top theirs, about being put out on the street, losing their kids, losing their limbs, even, and coming back from it.

Fuck Cancer Right In Its Fucking Face did not have those people. The people who got over their furious grief left FCRIIFF, chased away by its rage culture. The people who stayed were really into their anger, clinging to it like a drunk refusing to let go of a bottle.

If your anger took you to a place you couldn’t handle, a place that scared you, the elders of FCRIIFF would help you all right: they’d explain to you that this was the right reaction, the only reaction, and it was never, ever going to get better. This was your life from here on in. (...)

He stayed on the forum.

He was ready to quit FCRIIFF—which old-timers like him called Fuckriff, or Ruck Fiff when they wanted to sound polite—when LisasDad1990 joined. His first message:
Lisa is six years old. This is what she looks like. I have put her to bed every night since she stopped breast feeding. I used to read her Hand, Hand, Fingers, Thumb and then we graduated to Green Eggs and now we’re reading Harry Potter. That’s right, a six-year-old. She’s SMART.

Last year, Lisa started falling down a lot, bumping into things. Her teachers said she wasn’t concentrating in school and I saw it too. Her mom’s not in the picture. I took her to the doc’s and they said she had a brain tumor. I can go into details later, but it’s not a good brain tumor. It’s not little or cute. It’s an aggressive little fucker, and it’s growing.

Lisa can only see out of one eye now, and she walks with a walker, or I wheel her in her chair.

But the good news is that it’s treatable. Not like 100% but the oncologist says he can whack that bastard straight out of there and blast her with some rads and give her some poison and she’ll live. She’ll always have some problems, but she’s young and she’s full of life and she’ll figure that shit out.

But our insurance? Not so much. I was working for a customs broker when it hit, my first real full-time job, with insurance and everything. Paid so much into that insurance. SO MUCH. But they say that the kind of surgery the doc wants to do, it’s experimental. They say it’s not covered.

Guys, I’m 28 years old, a single dad. My parents haven’t given me a dime since I told them to go fuck themselves and moved out at 17. If my ex had a dollar to spare, it’d go to oxys, before the student-debt collectors could get it.

I have a GoFundMe, but that only works if you know a million people or one millionaire. My kid is the greatest thing in the world, but everyone thinks that about their kid, and from all the evidence so far, I’m the only one who can see it.

The thing is, my daughter Lisa is going to die.

I mean, I can kid myself about it, but that’s what it’s about. My six-year-old kid is going to die even though she doesn’t have to (or at least she has a chance she won’t get to take).

It’s because some random asshole earning half a million dollars in an office at the top of a tower full of random assholes earning less than me decided she should die. He doesn’t know her and he won’t ever know her but he knows that there are so many kids like Lisa that are going to die because of his choices.

I’ve been sad, I’ve been angry, I’ve been worried. I hold Lisa so much that she tells me, dad stop it, but some day I’m going to hold her and she won’t say anything because she’ll be dead. That’s my truth and my life and I live that truth every day.

When Lisa goes, I’m going to go too. I never said that out loud but I’ll write it here because you guys know what I’m going through. I’m dead fucking serious. With Lisa I had everything to live for. Now I got nothing. Can’t even afford to bury her, not after all the out of pockets. Red bills every day, every credit card wants to send a guy around with a bat to break my knees. Maybe I’ll buy a gun and shoot the first one that comes to the door, then stick it in my mouth…
by Cory Doctorow, The American Prospect |  Read more:
Image: Cory Doctorow. Gregory Katsoulis/Creative Commons
[ed. Readers will recall how we ended up with Obamacare - typical bait and switch. Republicans and insurance industry lobbyists drew a red line on Medicare For All/Single Payer healthcare, refusing to even discuss a national healthcare system unless those options were off the table (and demanding that everyone be required to sign up through private insurance companies). They then proceeded to vote against the compromise anyway (and have been trying to kill it ever since). See also: Cory Doctorow’s prescient novella about health insurance and murder: ‘They’re going to be afraid’ (The Guardian); and, How AARP Shills for UnitedHealthcare (TAP):]
***
I had assumed that UnitedHealth’s business model was to lowball premiums and then more than make up the profit by denying claims. But it’s even worse than that.

In Massachusetts, where I live, a supplemental Medicare policy from UnitedHealth costs $251 a month. An identical policy from Blue Cross, which has the state’s best record in not denying care, costs $212.

Why on earth would consumers buy such a flawed insurance product? It helps if they are captive customers, steered to UnitedHealth by a trusted source.

That would be AARP.

AARP has just under 38 million members. But AARP is basically an insurance marketing scheme masquerading as an advocacy group for the elderly.

For 27 years, UnitedHealth has been the co-branded choice of AARP. If you are looking for a supplemental policy to conventional Medicare, or a Medicare Advantage product, or a Medicare drug insurance policy, AARP will steer you to UnitedHealth. And only to UnitedHealth.

The reason is shameful. UnitedHealth kicks back 4.95 percent of premium income from AARP subscribers to AARP. And the numbers are staggering. According to AARP’s audited financial report, AARP made $289.3 million from member dues, but $1.134 billion from kickbacks from insurers, of which the lion’s share, $905 million, was from health insurers. AARP delicately refers to these as royalties.

And somehow, because it is a nonprofit, AARP manages to avoid income taxes on this kickback income. Despite Congress’s efforts over the years to make nonprofits pay taxes on commercial income, AARP paid only about $3 million in federal income taxes on “royalties” of well over a billion. ~ How AARP Shills for UnitedHealthcare
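
[ed. A rough back-of-the-envelope check, assuming the 4.95 percent rate applies to the $905 million in health-insurer royalties: that implies something like $18 billion a year in AARP-branded health premiums flowing through UnitedHealth. A minimal sketch using only the figures quoted above:

health_royalties = 905_000_000   # AARP's reported health-insurer "royalties" ($/year)
kickback_rate = 0.0495           # share of premium income UnitedHealth pays to AARP
implied_premiums = health_royalties / kickback_rate
print(f"~${implied_premiums / 1e9:.1f} billion/year in implied premiums")  # ~$18.3 billion/year

Hypothetical arithmetic only; the article does not break out the actual premium base.]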

***

A February 2020 study published in The Lancet found that the proposed Medicare for All Act would save 68,000 lives and $450 billion in national healthcare expenditure annually. According to a 2022 study published in the Proceedings of the National Academy of Sciences of the United States of America, a single-payer universal healthcare system would have saved 212,000 lives and averted over $100 billion in medical costs during the COVID-19 pandemic in the United States in 2020 alone. Roughly 16% of all COVID-19 deaths occurred in the US, despite the country having only 4% of the world's population.  ~ Wikipedia

Friday, December 20, 2024


The Social Media Discourse of Engaged Partisans Is Toxic Even When Politics Are Irrelevant

Significance Statement

Political discourse on social media is infamously uncivil. Prevailing explanations argue that such incivility is driven by ideological disagreement or social-identity conflict—partisans are uncivil because the political stakes are so high. This report considers a different (albeit not contradictory) possibility—that online political discourse tends to be uncivil because the people who opt into such discourse are generally uncivil. Indeed, people who opt into political discourse tend to be especially toxic, even when discussing nonpolitical topics in nonpartisan contexts. Such individuals disproportionately dominate political discourse online, thereby undermining the public sphere as a venue for inclusive debate.

Abstract

Prevailing theories of partisan incivility on social media suggest that it derives from disagreement about political issues or from status competition between groups. This study—which analyzes the commenting behavior of Reddit users across diverse cultural contexts (subreddits)—tests the alternative hypothesis that such incivility derives in large part from a selection effect: Toxic people are especially likely to opt into discourse in partisan contexts. First, we examined commenting behavior across over 9,000 unique cultural contexts (subreddits) and confirmed that discourse is indeed more toxic in partisan (e.g. r/progressive, r/conservatives) than in nonpartisan contexts (e.g. r/movies, r/programming). Next, we analyzed hundreds of millions of comments from over 6.3 million users and found robust evidence that: (i) the discourse of people whose behavior is especially toxic in partisan contexts is also especially toxic in nonpartisan contexts (i.e. people are not politics-only toxicity specialists); and (ii) when considering only nonpartisan contexts, the discourse of people who also comment in partisan contexts is more toxic than the discourse of people who do not. These effects were not driven by socialization processes whereby people overgeneralized toxic behavioral norms they had learned in partisan contexts. In contrast to speculation about the need for partisans to engage beyond their echo chambers, toxicity in nonpartisan contexts was higher among people who also comment in both left-wing and right-wing contexts (bilaterally engaged users) than among people who also comment in only left-wing or right-wing contexts (unilaterally engaged users). The discussion considers implications for democratic functioning and theories of polarization. (...)

Discussion

Taken together, the results provide strong and consistent support for the troll hypothesis: (i) people who are especially toxic in partisan contexts are also especially toxic in nonpartisan contexts, and (ii) engaged partisans (especially the bilaterally engaged) are more toxic than the nonengaged when discussing nonpolitical content in nonpartisan contexts. Such effects are specific to uncivil behaviors (rather than to negativity in general) and do not result from some sort of socialization process in partisan subreddits. They emerge regardless of political lean, and they apply to users whose partisan comments take place in contexts that are explicitly political or ostensibly nonpolitical—although they are especially strong for users with activity in explicitly political contexts. The effects, which emerge in virtually all nonpartisan subreddits, help to explain why political contexts tend to be more toxic than nonpolitical contexts. We conclude that just as people tend to be consistent in their online and offline political behavior, they are also consistent in their political and nonpolitical behavior.

Future research will be required to test how strongly these results generalize beyond Reddit. That said, a strength of the present study is that it investigates hundreds of millions of unique behaviors from millions of people across thousands of cultural contexts (subreddits). As such, the results are not subject to the typical concerns about a limited range of cultures or topics of discourse. In addition, social-media environments (e.g. Twitter, Facebook, Reddit) have become a core nexus for political discourse, increasingly functioning as democracy's public square. Reddit is a major context where political ideas get introduced and debated—where people of diverse backgrounds and ideologies discuss and argue about which ideas and policies are best.

The present findings have important implications for theories of political polarization. They suggest that discourse in partisan contexts is uncivil in large part because the people who opt into it are uncivil. This incivility distorts the public square. People's reluctance to contribute to political discourse—to contribute their views to the marketplace of ideas—is driven less by substantive disagreement than by the tenor of the discourse; they opt out when discourse gets heated. It is no wonder that people who are lower in trait hostility tend to opt out of online political discourse. The overrepresentation of dispositionally uncivil people in our political discourse is especially troubling because it promotes combative partisanship at the expense of deliberation and leads observers (those who also participate and those who do not) to conclude that the state of our politics is far more toxic than it really is.

There is little reason to believe that dispositionally uncivil people have better political ideas than those who are more dispositionally civil, and there is good reason to believe that the uncivil are less prone to compromise, to seek win–win solutions, or to assume that their interlocutors are people of goodwill. Consequently, the disproportionate representation of uncivil people in partisan contexts may be a significant contributor to the democratic backsliding afflicting the United States and many other nations in recent years. Theories of polarization must engage seriously with the fact that society has built a new megaphone that amplifies the voices of people whose discourse tendencies are disproportionately characterized by toxicity, moral outrage, profanity, anger, impoliteness, and low prosociality.

Past research has demonstrated that passive exposure to social-media posts from opposing partisans can exacerbate polarization, but the present study is the first to test whether people who opt into partisan discourse on one vs. both sides of the political divide tend to be especially toxic. Reddit offers its users the opportunity to join multiple communities across the political spectrum, and it gives space for constructive conversations on controversial topics. Nevertheless, our results suggest that this opportunity is exploited by people with especially uncivil tendencies. These findings contribute to an emerging sense of skepticism about whether breaking down echo chambers will reduce polarization or toxicity—at least in a straightforward way. The use of observational data allowed us to identify selection effects related to the behavior of the engaged, but further research is required to establish causal effects. (...)

Democracy requires conflict. People with differing ideological and policy preferences must compete in the marketplace of political ideas, seeking to persuade others that their own ideas are best. The present research suggests, however, that the voices that are most amplified on social media are dispositionally toxic, an arrangement that seems unlikely to cultivate the sort of constructive discussion and debate that democracies require. The incivility that the engaged partisans exhibit in contexts that are irrelevant to politics raises the concern that toxic behavior in partisan contexts might masquerade as righteousness or advocacy, but it is actually due in large part to these specific people's tendency to be uncivil in general. Consequently, an urgent priority for societies riven by polarization and democratic backsliding is to develop a means of making the public square a congenial environment not only for the dispositionally uncivil but also for people who would be willing to enter the debate if only the tenor of the discourse were less toxic.

by Michalis Mamakos, Eli J. Finkel, PNAS Nexus/National Academy of Sciences |  Read more:
[ed. Wherever they are, toxic people will always be toxic. I think we knew this.]
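[ed. For the technically minded: here is a minimal, hypothetical Python sketch of hypothesis (ii) from the abstract, comparing the nonpartisan-context toxicity of users who do and don't also comment in partisan subreddits. The field names, the tiny partisan-subreddit set, and the toxicity scores are illustrative assumptions; the authors score hundreds of millions of real comments with a machine-learning toxicity classifier, not this toy code.

from collections import defaultdict
from statistics import mean

# Stand-ins for the paper's labeled partisan subreddits (illustrative only).
PARTISAN = {"progressive", "conservatives"}

def nonpartisan_toxicity_by_engagement(comments):
    """comments: iterable of dicts with 'user', 'subreddit', 'toxicity' (0..1).
    Returns mean nonpartisan-context toxicity for (engaged, nonengaged) users."""
    by_user = defaultdict(lambda: {"partisan": [], "nonpartisan": []})
    for c in comments:
        ctx = "partisan" if c["subreddit"] in PARTISAN else "nonpartisan"
        by_user[c["user"]][ctx].append(c["toxicity"])

    engaged, nonengaged = [], []
    for scores in by_user.values():
        if not scores["nonpartisan"]:
            continue  # user has no comments outside partisan contexts
        avg = mean(scores["nonpartisan"])
        (engaged if scores["partisan"] else nonengaged).append(avg)
    return mean(engaged), mean(nonengaged)

# Toy data: user "a" is an engaged partisan, user "b" is not.
demo = [
    {"user": "a", "subreddit": "progressive", "toxicity": 0.7},
    {"user": "a", "subreddit": "movies", "toxicity": 0.6},
    {"user": "b", "subreddit": "movies", "toxicity": 0.1},
    {"user": "b", "subreddit": "programming", "toxicity": 0.2},
]
print(nonpartisan_toxicity_by_engagement(demo))  # roughly (0.6, 0.15)

If the first number reliably exceeds the second across contexts, that is the selection effect the paper reports.]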

Optical Delusions: Widows on the Prowl

The onslaught of holiday parties only makes me miss more than ever the matchless company of my husband and soulmate for four exuberant decades, the swashbuckling British newspaper editor Sir Harry Evans. In 2002, he was voted best newspaper editor of all time by his peers. (“What took them so long?” he wondered.) Now that he’s been gone for four years, friends have started to urge me with sly supportive smiles to “put myself out there” and find a romantic replacement. The trouble is, I honestly cannot think of anyone but Harry—a man who shared so many of my passions, my idiosyncrasies, and my absolute indifference to domestic life—who would be able to put up with me and always find me irresistible.


During the week in Manhattan, we lived in the full-flash intensity of the media arena, vibrating with a succession of salons and book parties at our apartment on East 57th Street. (Harry called his dinner jacket his “working clothes.”) But alone on winter weekends at our house in Quogue, we pulled up the drawbridge and vanished into our cocoon. As I ran through my magazine editorships and wrote my books, while Harry served as ringmaster of Random House and penned best-selling histories, the sounds of industry that emanated from our back-to-back studies—the whir of fax machines, the tap-tap of keyboards, the phone calls wrangling writers—were the music of our marriage.

Now that I’m solo, I wonder what other people do in their free time. After so long holed up in the word factory with Harry, I don’t have a clue who the neighbors are in Quogue. Harry never cared that I can’t cook. Nor could he. We were always too engrossed in discussing the day’s headlines to notice that we were dining, yet again, on a stuffed baked potato. Returning home after Park Avenue parties, he would crash around the kitchen, making himself sardines on toast and regaling me with the best gossip or the most preposterous highlights from his own circuit of the revelers. I have come to realize that our blissful, singular focus on writing and editing has made me eccentric. What, for instance, is a hobby?

Forays to dinner parties in the Hamptons this summer yielded age-appropriate geezers who bang on about their golf swings and congregate together with booming, bald-headed laughter. Couples talk about their elaborate travel plans, doing inconceivable things like motoring through Loire Valley vineyards or taking extended treks to see a pile of ruins in Tibet. Holidays with Harry were usually helter-skelter, last-minute trips to overpriced Caribbean resorts with an inconvenient layover somewhere that neither of us had noticed on the travel agenda.

I realize I have forgotten—and can't really be bothered to relearn—how to feign the eye-batting fascination that is the sine qua non of romantic appeal to late-stage widowers.

I am also a realist. I can’t help but note there’s a pileup around me of surgically enhanced, widowed blondes. The Times obituary page unleashes a new one every day: power wives who once swirled through Manhattan drawing rooms on the arm of some titan and now prowl affluent, Viagra-circuit cocktail receptions at the Council on Foreign Relations. They are battle-tested and battle-ready, with, one senses, an infinite capacity and willingness to adapt that I lack. Captious, hostessy, and primed for action, they seem undaunted at the prospect of being jumped on for one last inning.

by Tina Brown, Fresh Hell |  Read more:
Images: uncredited
[ed. How the other half lives (or, more precisely, the upper half of the upper 10 percent). I guess Tina Brown, former editor-in-chief at Vanity Fair and The New Yorker, has a Substack now, which I just stumbled upon. Wish her good luck; I'm sure she'll be fine.]

Ship of State
via:
[ed. See also: A "diplomatic clown car" - ‘stunningly unqualified’ diplomatic team shapes up at breakneck speed (Guardian).]

via:
[ed. Some AI art is ok.]