Dumb autonomous cars can save more lives than brilliant ones
Perfect is the enemy of good – RAND Corp think tank
Autonomous cars only need to be good enough to reduce the number of road deaths to be worth permitting: eliminating fatal accidents can wait until later.
That's the result of an analysis from military-industrial complex darling RAND Corporation.
That idea isn't entirely new: it was put forward by America's National Highway Traffic Safety Administration last year, in a speech that sparked RAND's analysis project.
The organisation set researchers Nidhi Kalra and David Groves on the question, and has now announced the results of their work.
The analysis claims that in America, where the road toll in 2016 exceeded 35,000 fatalities, a self-driving car that is merely better than a human driver would save hundreds of thousands of lives over 30 years.
To arrive at that estimate, the researchers modelled two scenarios: allowing autonomous vehicles on the road when they're merely 10 per cent better than humans, versus waiting until “their safety performance is 75 or 90 per cent better than that of average human drivers”.
The headline finding is straightforward: a permissive policy saves more lives, more quickly, than stricter policies. That finding held no matter how the researchers tweaked their models: “under none of the conditions we explored does waiting for significant safety gains result in fewer fatalities.”
“There is good reason to believe that reaching significant safety improvements may take a long time and may be difficult prior to deployment,” the researchers wrote. “Therefore, the number of lives lost while waiting for significant improvements prior to deployment may be large.”
The study notes that “accurately predicting safety outcomes is fraught with complications because the factors that will govern road safety in the coming decades are impossible to predict given the disruptive nature of the technology”.
The problem is, and always will be, people: we're more likely to shrug off a road accident as inevitable when a human is driving, but inclined to sheet home blame to the presumably deep-pocketed and well-insured maker of a self-driving car.
Tesla, for example, finds itself on the receiving end of a lot of criticism any time one of its cars is involved in an accident.
The report therefore notes that “a potentially negative social response to HAV crashes may have profound implications for the technology”, adding that “HAVs would still cause many crashes, injuries, and fatalities—albeit fewer than their human counterparts. This may not be acceptable to society … Humans have shown nearly zero tolerance for injury or death caused by flaws in a machine”.