Opinion: What Google didn’t say about its self-driving cars

A row of Google self-driving Lexus cars are lined up at a Google event outside the Computer History Museum in Mountain View, Calif., on May 13, 2014.

(Eric Risberg / Associated Press)

Google’s revelation about the (seemingly admirable) safety record of its self-driving cars may not actually further the cause of driverless vehicles in California.

In a blog post Monday, Chris Urmson, the director of Google’s self-driving-car program, said that the 20-plus autonomous cars in his fleet had been involved in 11 collisions since testing began six years and 1.7 million miles ago. In every case, he said, the damage was minor and no injuries were sustained.

He went on to outline some of the lessons learned during the tests, none of which would be news to anyone who’s actually driven a car. The main non-jaw-dropping insights: Intersections are risky. Turns are tricky. And lots of drivers are inattentive.

Pressed for more, Urmson probably could also have noted that cars with human drivers have blind spots, wet roads make for lousy braking, the 405 is a parking lot, and the vast majority of human drivers will never sit behind the wheel of a Maserati.

We know all these things! But what we don’t know is how, exactly, Google’s 11 accidents happened. And those details could have been far more enlightening.

Urmson did assert that “not once was the self-driving car the cause of the accident.” He went on, “[W]e’ve been hit from behind seven times, mainly at traffic lights but also on the freeway. We’ve also been side-swiped a couple of times and hit by a car rolling through a stop sign. And as you might expect, we see more accidents per mile driven on city streets than on freeways; we were hit eight times in many fewer miles of city driving.”

The post indicated that the cars were driving themselves for nearly 1 million of the 1.7 million miles they were on the road, which is close to 60% of the miles traveled. But there’s no telling whether the 11 accidents occurred while the cars were driving themselves or being operated manually.

Nor did Urmson say whether the collisions tipped Google off to a weakness in the cars’ awareness of the world around them, leading to a software update that should help cut down on future collisions. There was no indication whether the number of accidents per mile traveled increased or decreased over time. We don’t know whether the human driver inside Google’s cars ever had to take control to avert an accident. And Urmson provided no clues as to whether the accidents were correlated with particular traffic conditions -- say, urban rush hours.

One thing Google could do to clear up some of the mystery would be to release the accident reports it has filed confidentially with the California Department of Motor Vehicles (and other states, if applicable). That’s what Consumer Watchdog has requested, and though that organization tends to criticize everything Google does, it has a point here.

“You want to eliminate the most basic safeguard, a licensed driver able to take control, in your proposed driverless vehicles,” John Simpson, director of Consumer Watchdog’s privacy project, wrote in a letter to Google’s top executives. “This aim makes it even more important for the public to understand any accidents that occur involving your vehicles during the testing phase.”

Advocates of self-driving cars have to overcome at least two high hurdles. First, they have to convince safety authorities that their technology can handle everything that idiotic human drivers, bicyclists and pedestrians can throw at it, and do so as reliably as the mechanical systems on the average car. In fact, governments may have less tolerance for software bugs than they do for tires that go flat and accelerator pedals that get stuck on floor mats.

Second, they have to come up with an acceptable framework for deciding who’s at fault when there’s a wreck caused by an autonomous vehicle -- the software developer, the maker of the sensors and radar systems, the owner of the car or the passenger who didn’t seize control in time to avert the collision.

If it’s going to surmount either of those hurdles, Google will probably have to reveal much, much more about how its cars get banged up -- and also the things they do to avoid collisions. Urmson’s post actually offers some intriguing details on the latter issue, such as this tidbit: “[W]e’ve programmed our cars to pause briefly after a light turns green before proceeding into the intersection -- that’s often when someone will barrel impatiently or distractedly through the intersection.”
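
To make that behavior concrete, here is a minimal, purely illustrative sketch of the kind of rule Urmson describes. Google hasn’t published its code; the pause length, function name and inputs below are assumptions for illustration only.

# Hypothetical sketch (not Google's actual code) of a "pause after green" rule.
GREEN_LIGHT_PAUSE_SECONDS = 1.5  # assumed value, not disclosed by Google

def should_enter_intersection(light_is_green: bool,
                              seconds_since_green: float,
                              intersection_is_clear: bool) -> bool:
    """Proceed only after the light has been green for a short pause
    and sensors report no cross traffic still in the intersection."""
    if not light_is_green:
        return False
    if seconds_since_green < GREEN_LIGHT_PAUSE_SECONDS:
        return False
    return intersection_is_clear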

Of course, Google has never been one to over-share. It’s worth noting that Urmson’s post didn’t come out of the blue; instead, it was published shortly after the Associated Press reported that Google’s cars had been involved in three accidents in California since September. (That’s when a new law went into effect, requiring companies that obtain self-driving test permits to file such notices.)

By the way, 11 accidents in 1.7 million miles, or about 6.5 per million miles driven, seems right in line with the typical American driver’s experience. Using data from the National Highway Traffic Safety Administration from 2008 to 2009, the AAA Foundation for Traffic Safety estimated that U.S. drivers got into wrecks 4.2 times per million miles traveled on average. The AAA figure understates the average, though, because it’s based only on the accidents reported to police, which represent a little less than half of all collisions. Factoring in unreported collisions, the average increases to 9.3 per million miles.
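
For readers who want to check the arithmetic, here is a short back-of-the-envelope calculation using the figures above; the assumption that police-reported crashes are about 45% of the total is inferred from the article’s “a little less than half,” not a published statistic.

# Rough check of the accident-rate figures cited above.
google_accidents = 11
google_miles_millions = 1.7
print(google_accidents / google_miles_millions)   # about 6.5 accidents per million miles

aaa_reported_rate = 4.2   # police-reported wrecks per million miles (AAA estimate)
reported_share = 0.45     # assumed: "a little less than half" of collisions are reported
print(aaa_reported_rate / reported_share)         # about 9.3 per million miles, all collisions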

Not bad, Google car bots! Now, Google, how about pulling the curtain back a bit wider?

Follow Healey’s intermittent Twitter feed: @jcahealey
