Technology

New Self-Driving Car Testing Method Could Save 99.9% of Validation Costs


Mobility researchers at the University of Michigan have developed a new way to test autonomous vehicles, one that bypasses the billions of miles of driving normally needed before consumers would accept a car as ready for the road.

General Motors has been testing its autonomous vehicles on public roads in Scottsdale, Arizona, since June 2016.

The researchers note that the new process could save 99.9% of testing time and costs. They used data from more than 25 million miles of real-world driving to develop the method, and estimate that it can cut the time needed to evaluate how robotic vehicles handle potentially dangerous situations by a factor of 300 to 100,000.

The approach is outlined in a new white paper published by Mcity, a public/private partnership led by U-M to accelerate the development of advanced mobility vehicles and technologies.

Huei Peng, the Roger L. McCarthy Professor of Mechanical Engineering at U-M and director of Mcity, noted that even the largest and most advanced efforts to test automated vehicles today fall woefully short of what is required to thoroughly evaluate self-driving cars.

The new accelerated evaluation process breaks down difficult real-world driving situations into components that can be tested or simulated repeatedly, exposing automated vehicles to a condensed set of the most challenging driving situations they are likely to encounter. In this way, a mere 1,000 miles of testing can yield the equivalent of 300,000 to 100 million miles of real-world driving.
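That equivalence is simple multiplication. As a quick check of the reported range (the mileage and acceleration factors are the article’s; the snippet itself is only illustrative):

```python
# Equivalent real-world mileage = test mileage x acceleration factor.
# The 300x to 100,000x factors are the range reported by the researchers.
test_miles = 1_000
for factor in (300, 100_000):
    print(f"{test_miles:,} test miles x {factor:,} = {test_miles * factor:,} real-world miles")
# -> 1,000 test miles x 300 = 300,000 real-world miles
# -> 1,000 test miles x 100,000 = 100,000,000 real-world miles
```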

Although 100 million miles may sound like overkill, even that is not nearly enough data to confidently certify the safety of a driverless vehicle, because the difficult scenarios that testers need to zero in on are extremely rare. A fatal crash occurs only about once in every 100 million miles of driving.

The researchers estimate that before consumers will accept driverless vehicles, tests will need to show with 80% confidence that they are 90% safer than human drivers. Reaching that level of confidence would require driving test vehicles 11 billion miles, in the real world or in simulated settings. In typical urban conditions, it would take nearly 10 years of round-the-clock testing to reach a mere 2 million miles.
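Figures like these come from rare-event statistics. As a rough illustration only, using a simple Poisson model with zero observed crashes rather than whatever statistical test the researchers actually applied, the mileage needed to bound a crash rate with a given confidence can be sketched like this:

```python
import math

# Simple Poisson model: if a vehicle drives n miles with zero crashes,
# its crash rate can be bounded below `target_rate` with confidence c
# once n >= -ln(1 - c) / target_rate.
# Assumed baseline: one fatal crash per 100 million human-driven miles.
human_rate = 1 / 100e6            # fatal crashes per mile
target_rate = 0.1 * human_rate    # "90% safer than human drivers"
confidence = 0.80

miles_needed = -math.log(1 - confidence) / target_rate
print(f"miles needed (zero-crash, best case): {miles_needed:,.0f}")
# ~1.6 billion miles -- and this is the most optimistic case; tests that
# must also statistically separate the vehicle from the human baseline
# push the requirement into the billions, consistent with the 11 billion
# miles cited above.
```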

Fully automated driverless vehicles also need a very different type of validation than the crash-sled dummy tests used for today’s cars. The questions researchers must ask are more complicated: they can’t simply determine what happens in a crash, but must determine how well the vehicle can prevent one from happening.

Ding Zhao, assistant research scientist in the U-M Department of Mechanical Engineering and co-author of the white paper, likens test methods for conventionally driven cars to a doctor taking a patient’s heart rate or blood pressure, whereas testing an automated vehicle is more akin to measuring someone’s IQ.

To develop the four-step accelerated approach, the U-M researchers analyzed data from 25.2 million miles of real-world driving collected by two U-M Transportation Research Institute projects – Integrated Vehicle-Based Safety Systems and Safety Pilot Model Deployment. Together, the two projects involved approximately 3,000 vehicles and volunteers over a two-year period.

From this data, the researchers then:

  • Identified events containing “meaningful interactions” between a human-driven vehicle and an automated one, then built a simulation that replaced the uneventful miles with those meaningful interactions.
  • Programmed the simulation to treat human drivers as the major threat to automated vehicles, placing human drivers randomly throughout.
  • Conducted mathematical tests to determine the probability and risk of specific outcomes, including crashes, injuries and near misses.
  • Interpreted the accelerated test results using a statistical technique called “importance sampling,” which shows how the automated vehicle would perform, statistically, in everyday driving situations (a simplified sketch of the idea follows this list).
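That last step is the statistical heart of the method. Below is a minimal, self-contained sketch of importance sampling applied to a toy cut-in scenario; the distributions and thresholds are invented for illustration and are not taken from the white paper:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy cut-in scenario: a human-driven car merges in front of the automated
# vehicle with some time-to-collision (TTC). Call it a "crash" when
# TTC < 0.1 s. Under everyday driving (hypothetical model) TTC is
# exponential with mean 20 s, so crashes are rare. The accelerated test
# samples from a far more aggressive proposal (mean 0.5 s) and reweights
# each sample by the likelihood ratio p(x)/q(x).

MEAN_NATURAL = 20.0    # assumed mean TTC in everyday traffic (seconds)
MEAN_AGGRESSIVE = 0.5  # proposal distribution: aggressive cut-ins only
CRASH_TTC = 0.1        # crash threshold (seconds)

def exp_pdf(x, mean):
    return np.exp(-x / mean) / mean

n = 100_000
ttc = rng.exponential(MEAN_AGGRESSIVE, n)        # accelerated scenarios
crash = (ttc < CRASH_TTC).astype(float)          # did the scenario end in a crash?
weights = exp_pdf(ttc, MEAN_NATURAL) / exp_pdf(ttc, MEAN_AGGRESSIVE)

p_hat = (crash * weights).mean()                 # unbiased rare-event estimate
p_true = 1 - np.exp(-CRASH_TTC / MEAN_NATURAL)   # closed form, for checking

print(f"importance-sampling estimate: {p_hat:.6f}")
print(f"exact probability:            {p_true:.6f}")
```

Because most proposal samples land in the dangerous region, a modest number of draws estimates a probability that plain sampling would rarely observe at all; the weights undo the bias introduced by deliberately over-sampling danger.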

The accelerated evaluation process can be applied to any number of potentially dangerous maneuvers. So far, the researchers have evaluated the two situations they would most commonly expect to result in serious crashes: an automated car following a human driver, and a human driver merging in front of an automated car. The evaluation’s accuracy was verified by running both accelerated and real-world simulations and comparing the results. More research involving additional driving situations is needed, however.
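A cross-check of that kind can be mimicked in miniature: run a plain (“real-world”) simulation and the accelerated, importance-sampled one on the same toy event and confirm they agree, with the accelerated version far less noisy at the same sample budget. Again, every number here is invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(7)

MEAN_NATURAL, MEAN_AGGRESSIVE, CRASH_TTC = 20.0, 0.5, 0.1  # toy model as above
N, REPS = 20_000, 5

def exp_pdf(x, mean):
    return np.exp(-x / mean) / mean

for rep in range(REPS):
    # "Real-world" simulation: sample everyday traffic directly.
    naive = (rng.exponential(MEAN_NATURAL, N) < CRASH_TTC).mean()
    # Accelerated simulation: over-sample danger, then reweight.
    x = rng.exponential(MEAN_AGGRESSIVE, N)
    accel = ((x < CRASH_TTC) * exp_pdf(x, MEAN_NATURAL) / exp_pdf(x, MEAN_AGGRESSIVE)).mean()
    print(f"run {rep + 1}: naive={naive:.5f}  accelerated={accel:.5f}")
# Both converge to ~0.00499, but the accelerated estimate is markedly
# tighter from run to run -- the same kind of agreement the researchers
# checked for when validating the method.
```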