Google’s driverless cars run into problems with human drivers
Since 2009, Google's self-driving cars have been in 16 crashes, mostly fender-benders, and Google says a human was at fault in every single case. Yet even while blaming human drivers, sometimes labeling them "idiotic," Google admits it needs to "smooth out" the relationship between the car's software and the humans around it. Google cars regularly take quick, evasive maneuvers or exercise caution in ways that are safe in isolation but out of step with the other vehicles on the road, clashing with how people actually drive.
One Google car, in a test in 2009, couldn't get through a four-way stop because its sensors kept waiting for other (human) drivers to stop completely and let it go. The human drivers kept inching forward, paralyzing Google's robot. Humans and machines, it seems, are an imperfect mix. A 2012 insurance industry study surprised researchers by finding that cars with lane departure warning systems experienced a slightly higher crash rate than cars without them. That finding highlights the clash between the way humans actually behave and the way the technology interprets that behavior.
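To make the four-way-stop deadlock concrete, here is a minimal Python sketch of the kind of right-of-way rule the article describes. The threshold values, function names, and the relaxed alternative are illustrative assumptions, not Google's actual logic:

# Illustrative sketch only (not Google's code): a right-of-way rule that
# waits for cross traffic to come to a complete stop never fires if human
# drivers keep inching forward at a small but nonzero speed.

FULL_STOP = 0.0        # m/s threshold for "completely stopped" (assumed)
CREEP_LIMIT = 0.5      # m/s: treat slow creeping as effectively stopped (assumed)

def may_proceed_strict(cross_traffic_speeds):
    # Deadlocks at a four-way stop when other drivers inch forward.
    return all(speed <= FULL_STOP for speed in cross_traffic_speeds)

def may_proceed_relaxed(cross_traffic_speeds):
    # Tolerating a small creep speed lets the car take its turn.
    return all(speed <= CREEP_LIMIT for speed in cross_traffic_speeds)

inching_drivers = [0.3, 0.2, 0.4]            # humans creeping into the intersection
print(may_proceed_strict(inching_drivers))   # False: the robot waits indefinitely
print(may_proceed_relaxed(inching_drivers))  # True: closer to human norms

The point of the sketch is that neither rule is "wrong" in isolation; the strict one simply assumes a behavior (coming to a full stop) that real drivers rarely exhibit.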
On a recent outing with New York Times journalists, the Google driverless car took two evasive maneuvers that showed both how the car errs on the side of caution and how jarring that caution can feel. In one, it swerved sharply in a residential neighborhood to avoid a badly parked car. Then, as it approached a red light, its laser system sensed a vehicle coming from the other direction toward the light at higher-than-safe speed. The Google car immediately jerked to the right in case it had to avoid a collision. In the end, the oncoming car stopped well in time.
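As a rough illustration of the kind of check that could flag an oncoming car as approaching a red light "at higher-than-safe speed," here is a hedged Python sketch; the comfortable-braking figure, names, and overall structure are assumptions for illustration, not how Google's planner works:

# Illustrative sketch only: flag an oncoming vehicle that cannot stop at a
# comfortable braking rate before the stop line, and react cautiously.

COMFORTABLE_DECEL = 3.0  # m/s^2, an assumed comfortable braking rate

def can_stop_in_time(speed_mps, distance_to_stop_line_m):
    # Minimum distance to stop at the assumed deceleration: d = v^2 / (2 * a)
    required = speed_mps ** 2 / (2 * COMFORTABLE_DECEL)
    return required <= distance_to_stop_line_m

def plan_reaction(oncoming_speed, oncoming_distance):
    if can_stop_in_time(oncoming_speed, oncoming_distance):
        return "hold course"
    # Err on the cautious side: edge right to leave room for a collision
    # that may never happen, which is exactly what feels jarring to riders.
    return "edge right, prepare evasive maneuver"

print(plan_reaction(15.0, 60.0))  # plenty of room: hold course
print(plan_reaction(20.0, 40.0))  # needs ~67 m to stop comfortably: edge right

A rule like this trades passenger comfort for margin: it triggers on cars that end up stopping in time, just as the one in the Times demonstration did.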
I agree with your sentiment; it's a little too early to be putting these on the road. There hasn't been enough background in the field (compared to, say, aviation design) to find the ghost-in-the-machine kinks. Great, nothing major has happened so far, but it's just a matter of time before we get a story along the lines of "Google car makes sudden left for cheaper gas on 4-lane highway - 12 dead" or "car mistakes other vehicle's racing stripes for road stripes, crashes head-on" :)