“We don’t know” sounds like a modest claim. But in this case it refers to something we do in fact know, about an effect size that is extremely large, which makes it a really big claim.
It’s also completely wrong. The article drags its audience into the author’s preferred state of epistemic helplessness by dancing around the data rather than explaining it. And Zipper got many of the numbers wrong; in some cases, I suspect, as a consequence of a math error.
There are things we still don’t know about Waymo crashes. But we know far, far more than Zipper pretends. I want to go through his full argument and make it clear why that’s the case.
In many places, Zipper’s piece relied entirely on equivocation between “robotaxis” — that is, any self-driving car — and Waymos. Obviously, not all autonomous vehicle startups are doing a good job. Most have nowhere near enough miles on the road to say confidently how well they work.
But fortunately, no city official has to decide whether to allow “robotaxis” in full generality. Instead, the decision cities actually have to make is whether to allow or disallow Waymo, in particular.
And there is a lot of data available about Waymo in particular. If your goal is to help policymakers make good decisions, you would discuss the safety record of Waymos, the specific cars those policymakers are considering allowing on their roads.
Imagine someone writing “we don’t know if airplanes are safe — some people say that crashes are extremely rare, and others say that crashes happen every week.” And when you investigate this claim further, you learn that what’s going on is that commercial aviation crashes are extremely rare, while general aviation crashes — small personal planes, including ones you can build in your garage — are quite common.
It’s good to know that the plane you built in your garage is quite dangerous. It would still be extremely irresponsible to present an issue with a single-engine Cessna as an issue with the Boeing 737 and write “we don’t know whether airplanes are safe — the aviation industry insists they are, but my cousin’s plane crashed just three months ago.”
The safety gap between, for example, Cruise and Waymo is not as large as the safety gap between commercial and general aviation, but collapsing them into a single category sows confusion and moves the conversation away from the decision policymakers actually face: Should they allow Waymo in their cities?
Zipper’s first specific argument against the safety of self-driving cars is that while they do make safer decisions than humans in many contexts, “self-driven cars make mistakes that humans would not, such as plowing into floodwater or driving through an active crime scene where police have their guns drawn.” The obvious next question is: Which of these happens more frequently? How does the rate at which self-driving cars do something dangerous a human wouldn’t compare with the rate at which they do something safe a human wouldn’t?
This obvious question went unasked because the answer would make the rest of Bloomberg’s piece pointless. As I’ll explain below, Waymo’s self-driving cars put people in harm’s way something like 80% to 90% less often than humans for a wide range of possible ways of measuring “harm’s way.”
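To make that comparison concrete: the statistic that matters is incidents per mile for Waymo versus a human benchmark on comparable roads. Here is a minimal sketch of the arithmetic, with placeholder numbers that are purely illustrative, not Waymo’s actual figures:

```python
# Sketch of the rate comparison behind claims like "10x less likely."
# All numbers below are placeholders, not Waymo's actual figures.
waymo_incidents = 20       # serious incidents reported (hypothetical)
waymo_miles = 50_000_000   # autonomous miles driven (hypothetical)

human_incidents = 200      # serious incidents on comparable roads (hypothetical)
human_miles = 50_000_000   # human benchmark miles (hypothetical)

waymo_rate = waymo_incidents / waymo_miles
human_rate = human_incidents / human_miles

reduction = 1 - waymo_rate / human_rate
print(f"Waymo rate: {waymo_rate * 1e6:.2f} per million miles")
print(f"Human rate: {human_rate * 1e6:.2f} per million miles")
print(f"Reduction: {reduction:.0%}")  # 90% here, i.e. "10x less likely"
```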
Zipper acknowledged that data on Waymo operations suggests they are about 10 times less likely to seriously injure someone than a human driver, but he then implied that this data could somehow be misleading. “It looks like the numbers are very good and promising,” one expert he cited, Henry Liu, said. “But I haven’t seen any unbiased, transparent analysis on autonomous vehicle safety. We don’t have the raw data.”
I was confused by this. Every single serious incident that Waymos are involved in must be reported. You can download all of the raw data yourself here (search “Download data”). The team at Understanding AI regularly goes through and reviews the Waymo safety reports to check whether the accidents are appropriately classified — and they have occasionally found errors in those reports, so I know they’re looking closely. I reached out to Timothy Lee at Understanding AI to ask if there was anything that could be characterized as “raw data” that Waymo wasn’t releasing — any key information we would like to have and didn’t.
“There is nothing obvious that I think they ought to be releasing for these crash statistics that they are not,” he told me.
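For anyone who wants to look for themselves: the reports live in NHTSA’s Standardized General Order (SGO) incident database, downloadable as a CSV. Here is a minimal sketch of how you might tally Waymo’s reports by severity, assuming the filename and column names below match what NHTSA currently publishes (check the headers in your download):

```python
# Minimal sketch: tally Waymo's reported incidents by alleged injury severity.
# Assumes the NHTSA SGO ADS incident CSV; the filename and column names below
# follow NHTSA's published schema but should be checked against your download.
import pandas as pd

df = pd.read_csv("SGO-2021-01_Incident_Reports_ADS.csv", low_memory=False)

# Keep only reports filed by Waymo.
waymo = df[df["Reporting Entity"].str.contains("Waymo", case=False, na=False)]

# Count incidents by the highest injury severity each report alleges.
print(waymo["Highest Injury Severity Alleged"].value_counts(dropna=False))
```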