Monday, October 14, 2024

SB 1047: Our Side Of The Story

(In case you’re just joining us - SB 1047 is a California bill, recently passed by the legislature but vetoed by the governor, which would have required AI companies to take some steps to reduce the risk of AI-caused existential catastrophes. See here for more on the content of the bill and the arguments for and against; this post will limit itself to the political fight.) (...)

On some level, I don’t mind having a bad governor. I actually have a perverse sort of fondness for Newsom. He reminds me of the Simpsons’ Mayor Quimby, a sort of old-school politician’s politician from the good old days when people were too busy pandering to special interests to talk about Jewish space lasers. California is a state full of very sincere but frequently insane people. We’re constantly coming up with clever ideas like “let’s make free organic BIPOC-owned cannabis cafes for undocumented immigrants a human right” or whatever. California’s representatives are very earnest and will happily go to bat for these kinds of ideas. Then whoever would be on the losing end hands Governor Newsom a manila envelope full of unmarked bills, and he vetoes it. In a world of dangerous ideological zealots, there’s something reassuring about having a governor too dull and venal to be corrupted by the siren song of “being a good person and trying to improve the world”.

But sometimes you’re the group trying to do the right thing and improve the world, and then it sucks.

I think these people beat us because they’ve been optimizing for political clout for decades, and our side hasn’t even existed that long, plus we care about too many other things to focus on gubernatorial recall elections.

Newsom is good at politics, so he’s covering his tracks. To counterbalance his SB 1047 veto and appear strong on AI, he signed several less important anti-AI bills, including a ban on deepfakes, which was immediately struck down as unconstitutional. And with all the ferocity of OJ vowing to find the real killer, he’s set up a committee to come up with better AI safety regulation. (...)

We’ll see whether Newsom gets his better regulation before or after OJ completes his manhunt. (...)

A frequent theme was that some form of AI regulation was inevitable. SB 1047 - a light-touch bill designed by Silicon-Valley-friendly moderates - was the best deal that Big Tech was ever going to get, and they went full scorched-earth to oppose it. Next time, the deal will be designed by anti-tech socialists, it’ll be much worse, and nobody will feel sorry for them.

Dean Ball wrote:
In response to the veto, some SB 1047 proponents seem to be threatening a kind of revenge arc. They failed to get a “light-touch” bill passed, the reasoning seems to be, so instead of trying again, perhaps they should team up with unions, tech “ethics” activists, disinformation “experts,” and other, more ambiently anti-technology actors for a much broader legislative effort. Get ready, they seem to be warning, for “use-based” regulation of epic proportions. As Rob Wiblin, one of the hosts of the Effective Altruist-aligned 80,000 Hours podcast put it on X:
» “Having failed to get up a narrow bill focused on frontier models, should AI x-risk folks join a popular front for an Omnibus AI Bill that includes SB1047 but adds regulations to tackle union concerns, actor concerns, disinformation, AI ethics, current safety, etc?”

This is one plausible strategic response the safety community—to the extent it is a monolith—could pursue. We even saw inklings of this in the final innings of the SB 1047 debate, after bill co-sponsor Encode Justice recruited more than one hundred members of the actors’ union SAG-AFTRA to the cause. These actors (literal actors) did not know much about catastrophic risk from AI—some of them even dismissed the possibility and supported SB 1047 anyway! Instead, they have a generalized dislike of technology, and of AI in particular. This group likes anything that “hurts AI,” not because they care about catastrophic risk, but because they do not like AI.

The AI safety movement could easily transition from being a quirky, heterodox, “extremely online” movement to being just another generic left-wing cause. It could even work.

But I hope they do not. As I have written consistently, I believe that the AI safety movement, on the whole, is a long-term friend of anyone who wants to see positive technological transformation in the coming decades. Though they have their concerns about AI, in general this is a group that is pro-science, techno-optimist, anti-stagnation, and skeptical of massive state interventions in the economy (if I may be forgiven for speaking broadly about a diverse intellectual community).

I hope that we can work together, as a broadly techno-optimist community, toward some sort of consensus. One solution might be to break SB 1047 into smaller, more manageable pieces. Should we have audits for “frontier” AI models? Should we have whistleblower protections for employees at frontier labs? Should there be transparency requirements of some kind on the labs? I bet if the community put legitimate effort into any one of these issues, something sensible would emerge.

The cynical, and perhaps easier, path would be to form an unholy alliance with the unions and the misinformation crusaders and all the rest. AI safety can become the “anti-AI” movement it is often accused of being by its opponents, if it wishes. Given public sentiment about AI, and the eagerness of politicians to flex their regulatory biceps, this may well be the path of least resistance.

The harder, but ultimately more rewarding, path would be to embrace classical motifs of American civics: compromise, virtue, and restraint.

I believe we can all pursue the second, narrow path. I believe we can be friends. Time will tell whether I, myself, am hopelessly naïve.

Last year, I would have told Dean not to worry about us allying with the Left - the Left would never accept an alliance with the likes of us anyway. But I was surprised by how fairly socialist media covered the SB 1047 fight. For example, from Jacobin:
The debate playing out in the public square may lead you to believe that we have to choose between addressing AI’s immediate harms and its inherently speculative existential risks. And there are certainly trade-offs that require careful consideration.

But when you look at the material forces at play, a different picture emerges: in one corner are trillion-dollar companies trying to make AI models more powerful and profitable; in another, you find civil society groups trying to make AI reflect values that routinely clash with profit maximization.

In short, it’s capitalism versus humanity.

Current Affairs, another socialist magazine, also had a good article, Surely AI Safety Legislation Is A No-Brainer. The magazine’s editor, Nathan Robinson, openly talked about how his opinion had shifted:
One thing I’ve changed some of my opinions about in the last few years is AI. I used to think that most of the claims made about its radically socially disruptive potential (both positive and negative) were hype. That was in part because they often came from the same people who made massively overstated claims about cryptocurrency. Some also resembled science fiction stories, and I think we should prioritize things we know to be problems in the here and now (climate catastrophe, nuclear weapons, pandemics) over purely speculative potential disasters. Given that Silicon Valley companies are constantly promising new revolutions, I try to always remember that there is a tendency for those with strong financial incentives to spin modest improvements, or even total frauds, as epochal breakthroughs.

But as I’ve actually used some of the various technologies lumped together as “artificial intelligence,” over and over my reaction has been: “Jesus, this stuff is actually very powerful… and this is only the beginning.” I think many of my fellow leftists tend to have a dismissive attitude toward AI’s capabilities, delighting in its failures (ChatGPT’s basic math errors and “hallucinations,” the ugliness of much AI-generated “art,” badly made hands from image generators, etc.). There is even a certain desire for AI to be bad at what it does, because nobody likes to think that so much of what we do on a day-to-day basis is capable of being automated. But if we are being honest, the kinds of technological breakthroughs we are seeing are shocking. If I’m training to debate someone, I can ask ChatGPT to play the role of my opponent, and it will deliver a virtually flawless performance. I remember not too many years ago when chatbots were so laughably inept that it was easy to believe one would never be able to pass a Turing Test. Now, ChatGPT not only aces the test but is better at being “human” than most humans. And, again, this is only the start.

The ability to replicate more and more of the functions of human intelligence on a machine is both very exciting and incredibly risky. Personally I am deeply alarmed by military applications of AI in an age of great power competition. The autonomous weapons arms race strikes me as one of the most dangerous things happening in the world today, and it’s virtually undiscussed in the press. The conceivable harms from AI are endless. If a computer can replicate the capacities of a human scientist, it will be easy for rogue actors to engineer viruses that could cause pandemics far worse than COVID. They could build bombs. They could execute massive cyberattacks. From deepfake porn to the empowerment of authoritarian governments to the possibility that badly-programmed AI will inflict some catastrophic new harm we haven’t even considered, the rapid advancement of these technologies is clearly hugely risky. That means that we are being put at risk by institutions over which we have no control.

I don’t want to gloss this as “socialists finally admit we were right all along”. I think the change has been bi-directional. Back in 2010, when we had no idea what AI would look like, the rationalists and EAs focused on the only risk big enough to see from such a distance: runaway unaligned superintelligence. Now that we know more specifics, “smaller” existential risks have also come into focus, like AI-fueled bioterrorism, AI-fueled great power conflict, and - yes - AI-fueled inequality. At some point, without either side entirely abandoning their position, the very-near-term-risk people and the very-long-term-risk people have started to meet in the middle.

But I think an equally big change is that SB 1047 has proven that AI doomers are willing to stand up to Big Tech. Socialists previously accused us of being tech company stooges, harping on the dangers of AI as a sneaky way of hyping it up. I admit I dismissed those accusations as part of a strategy of slinging every possible insult at us to see which ones stuck. But maybe they actually believed it. Maybe it was their real barrier to working with us, and maybe - now that we’ve proven we can (grudgingly, tentatively, when absolutely forced) oppose (some) Silicon Valley billionaires - they’ll be willing to at least treat us as potential allies of convenience. [ed. They (I) actually believed it; your problem was that you didn’t.]

Dean Ball calls this strategy “an unholy alliance with the unions and the misinformation crusaders and all the rest”, and equates it to selling our souls. I admit we have many cultural and ethical differences with socialists, that I don’t want to become them, that I can’t fully endorse them, and that I’m sure they feel the same way about me. But coalition politics doesn’t require perfect agreement. The US and its European allies were willing to form an “unholy alliance” with some unsavory socialists in order to defeat the Nazis, they did defeat the Nazis, and they kept their own commitments to capitalism and democracy intact.

As a wise man once said, politics is the art of the deal. We should see how good a deal we’re getting from Dean, and how good a deal we’re getting from the socialists, then take whichever one is better.

Dean says maybe he and his allies in Big Tech would support a weaker compromise proposal that broke SB 1047 into small parts. But I feel like we watered down SB 1047 pretty hard already, and Big Tech just ignored the concessions, lied about the contents, and told everyone it would destroy California forever. Is there some hidden group of opponents who were against it this time, but would get on board if only we watered it down slightly more? I think the burden of proof is on him to demonstrate that there are.

I respect Dean’s spirit of cooperation and offer of compromise. But the socialists have a saying - “That already was the compromise” - and I’m starting to respect them too. (...)

VII.

Some people tell me they wish they’d gotten involved in AI early. But it’s still early! AI is less than 1% of the economy! In a few years, we’re going to look back on these days the way we look back now on punch-card computers.

Even very early, it’s possible to do good object-level work. But the earlier you go, the less important object-level work is compared to shaping possibilities, coalitions, and expectations for the future. So here are some reasons for optimism.

First, we proved we can stand up to (the bad parts of) Big Tech. Without sacrificing our principles or adopting any rhetoric we considered dishonest, we earned some respect from leftists and got some leads on potential new friends.

Second, we registered our beliefs (AI will soon be powerful and potentially dangerous) loudly enough to get the attention of the political class and the general public. And we forced our opponents to register theirs (AI isn’t scary and doesn’t require regulation) with equal volume. In a few years, when the real impact of advanced AI starts to come into focus, nobody will be able to lie about which side of the battle lines they were on.

Third, we learned - partly to our own surprise - that we have the support of ~65% of Californians and an even higher proportion of the state legislature. It’s still unbelievably, fantastically early, comparable to people trying to build an airplane safety coalition when da Vinci was doodling pictures of guys with wings - and we already have the support of 65% of Californians and the legislature. So one specific governor vetoed one specific bill. So what? This year we got ourselves the high ground / eternal glory / social capital of being early to the fight. Next year we’ll get the actual policy victory. Or if not next year, the year after, or the year after that. “Instead of planning a path to victory, plan so that all paths lead to victory”. We have strategies available that people from lesser states can’t even imagine!

by Scott Alexander, Astral Codex Ten | Read more:
Image: Pixabay
[ed. Well worth a full read. Very grateful for the people actively (and passionately) fighting for AI safety.]