Google, a leader in efforts to create driverless cars, has run into an odd safety conundrum: humans.
Last month, as one of Google’s self-driving cars approached a crosswalk, it did what it was supposed to do when it slowed to allow a pedestrian to cross, prompting its “safety driver” to apply the brakes. The pedestrian was fine, but not so much Google’s car, which was hit from behind by a human-driven sedan.
Google’s fleet of autonomous test cars is programmed to follow the letter of the law. But it can be tough to get around if you are a stickler for the rules. One Google car, in a test in 2009, couldn’t get through a four-way stop because its sensors kept waiting for other (human) drivers to stop completely and let it go. The human drivers kept inching forward, looking for the advantage — paralyzing Google’s robot.
It is not just a Google issue. Researchers in the fledgling field of autonomous vehicles say that one of the biggest challenges facing automated cars is blending them into a world in which humans don’t behave by the book. “The real problem is that the car is too safe,” said Donald Norman, director of the Design Lab at the University of California, San Diego, who studies autonomous vehicles.
“They have to learn to be aggressive in the right amount, and the right amount depends on the culture.”
Traffic wrecks and deaths could well plummet in a world without any drivers, as some researchers predict. But wide use of self-driving cars is still many years away, and testers are still sorting out hypothetical risks — like hackers — and real-world challenges, like what happens when an autonomous car breaks down on the highway.
For now, there is the nearer-term problem of blending robots and humans. Already, cars from several automakers have technology that can warn or even take over for a driver, whether through advanced cruise control or brakes that apply themselves. Uber is working on self-driving car technology, and Google expanded its tests in July to Austin, Tex.
Google cars regularly take quick, evasive maneuvers or exercise caution in ways that may be the safest approach but are out of step with the other vehicles on the road.
“It’s always going to follow the rules, I mean, almost to a point where human drivers who get in the car and are like ‘Why is the car doing that?’” said Tom Supple, a Google safety driver, during a recent test drive on the streets near Google’s Silicon Valley headquarters.
Since 2009, Google cars have been in 16 crashes, mostly fender-benders, and in every single case, the company says, a human was at fault.
by Matt Richtel and Conor Dougherty, NY Times | Read more:
Image: Gordon De Los Santos/Google