Saturday, April 25, 2026

We Absolutely Do Know That Waymos Are Safer Than Human Drivers

In a recent article in Bloomberg, David Zipper argued that “We Still Don’t Know if Robotaxis Are Safer Than Human Drivers.” Big if true! In fact, I’d been under the impression that Waymos are not only safer than human drivers but staggeringly so: the evidence to date suggests an 80% to 90% lower risk of serious crashes.

“We don’t know” sounds like a modest claim, but in this case, where it refers to something that we do in fact know about an effect size that is extremely large, it’s a really big claim.

It’s also completely wrong. The article drags its audience into the author’s preferred state of epistemic helplessness by dancing around the data rather than explaining it. And Zipper got many of the numbers wrong; in some cases, I suspect, as a consequence of a math error.

There are things we still don’t know about Waymo crashes. But we know far, far more than Zipper pretends. I want to go through his full argument and make it clear why that’s the case.
***
In many places, Zipper’s piece relied entirely on equivocation between “robotaxis” — that is, any self-driving car — and Waymos. Obviously, not all autonomous vehicle startups are doing a good job. Most of them have nowhere near the mileage on the road to say confidently how well they work.

But fortunately, no city official has to decide whether to allow “robotaxis” in full generality. Instead, the decision cities actually have to make is whether to allow or disallow Waymo, in particular.

And there is a lot of data available about Waymo specifically. If the goal is to help policymakers make good decisions, you would want to discuss the safety record of Waymos, the specific cars those policymakers are considering allowing on their roads.

Imagine someone writing “we don’t know if airplanes are safe — some people say that crashes are extremely rare, and others say that crashes happen every week.” And when you investigate this claim further, you learn that what’s going on is that commercial aviation crashes are extremely rare, while general aviation crashes — small personal planes, including ones you can build in your garage — are quite common.

It’s good to know that the plane that you built in your garage is quite dangerous. It would still be extremely irresponsible to present an issue with a one-engine Cessna as an issue with the Boeing 737 and write “we don’t know whether airplanes are safe — the aviation industry insists they are, but my cousin’s plane crashed just three months ago.”

The safety gap between, for example, Cruise and Waymo is not as large as the safety gap between commercial and general aviation, but collapsing them into a single category sows confusion and moves the conversation away from the decision policymakers actually face: Should they allow Waymo in their cities?

Zipper’s first specific argument against the safety of self-driving cars is that while they do make safer decisions than humans in many contexts, “self-driven cars make mistakes that humans would not, such as plowing into floodwater or driving through an active crime scene where police have their guns drawn.” The obvious next question is: Which of these happens more frequently? How does the rate at which self-driving cars do something dangerous a human wouldn’t compare to the rate at which they do something safe a human wouldn’t?

This obvious question went unasked because the answer would make the rest of Bloomberg’s piece pointless. As I’ll explain below, Waymo’s self-driving cars put people in harm’s way something like 80% to 90% less often than humans for a wide range of possible ways of measuring “harm’s way.”

by Kelsey Piper, The Argument |  Read more:
Image: Justin Sullivan/Getty Images
[ed. I'd take one any time (if reasonably priced), and expect to see them everywhere soon. See also: I Was Promised Flying Self Driving Cars (Zvi):]
***
A Tesla Model S drove itself from Los Angeles to New York with zero disengagements. Full reverse cannonball run.
Mike P: I don’t mean to say this in a way that discredits what they’ve done, but ngl, this stuff isn’t even surprising to me anymore like ya, makes total sense. I went from Philly to Raleigh NC to Tennessee and back to Philly and the only thing I had to do was re park the car at 2 charging stops when the car parked in the wrong place.
Tesla did the thing
There’s still a difference between full self-driving (FSD) that can take you across the country, and the point when you can sleep while it drives.

A Waymo moving at 17 mph hits the brakes instantly upon seeing a child step out from a blind spot, strikes the child at 6 mph, and dials 911. If a human had been driving, the child would likely have been hit at 14 mph and killed.

And how did some headlines cover it? Of course:
TechCrunch: Waymo robotaxi hits a child near an elementary school in Santa Monica

Samuel Hammond: A more accurate headline would be “Waymo saves child’s life thanks to superhuman reaction time”
This was another good time to notice that almost all the AI Safety people are strongly in favor of Waymo and self-driving cars.
Rob Miles: Seems worthwhile for people to hear AI Safety people saying: No, self driving cars are not the problem, they have the potential to be much safer than human drivers, and in this instance it seems like a human driver would have done a much worse job than the robot