How do we know if an autonomous vehicle is safe?

Industry experts highlight some of the key problem areas in testing an automated vehicle. By Freddie Holmes

Gauging how safe an autonomous vehicle (AV) is when left to its own devices is extremely challenging. How accurately can its sensors describe the world around the vehicle? What if something goes wrong, and how do you pinpoint the fault in a network of complex systems? These are some of the questions the industry needs to address.

The task is not becoming any easier. The electrical and electronic (E&E) architecture of new vehicles is growing ever more complex; since 2010, the number of lines of code in a new car has increased more than 30-fold. This creates more opportunities for coding errors in the software, which in a vehicle hurtling along at 60 miles per hour could have devastating consequences.

Armen Mkrtchyan is an Engagement Manager at McKinsey & Company. He has been closely involved in the testing and validation of automated systems, not only in automotive but in other industries as well. As part of his opening statement at M:bility | California, a two-day conference hosted by Automotive World, he explained that the industry faces a challenge in handling the sheer number of road scenarios required for an AV to operate completely independently.

So-called ‘edge cases’ have become a familiar industry expression, and for good reason. AVs will inevitably encounter unfamiliar situations, and developers are making a concerted effort to identify as many of them as possible in order to run repeated tests against them. The idea is that should those edge cases eventually arise on the road, the AV will be able to cope. Putting all of that together is as challenging as it sounds.

“We are trying to figure out what the ‘unknown unknowns’ are, and thus reduce the number of these cases,” explained Atul Acharya, Director of Autonomous Vehicle Strategy, AAA Northern California, Nevada & Utah. “Some of these become apparent from testing in simulation mode, and others from testing on public roads. You want to prove that the vehicle can actually pass a set of known scenarios, but also handle variations.”
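
As a purely illustrative aside, the ‘known scenarios plus variations’ approach Acharya describes can be sketched in a few lines of Python. The scenario parameters and ranges below are invented for illustration; real test matrices are far larger and tool-specific.

    import itertools

    # Hypothetical parameter ranges for one known scenario:
    # a pedestrian crossing ahead of the vehicle.
    PEDESTRIAN_SPEED_MPS = [0.5, 1.5, 3.0]   # strolling to jogging
    LIGHTING = ["day", "dusk", "night"]
    OCCLUSION = ["none", "partial", "heavy"]

    def scenario_variations():
        """Yield every combination of the ranges above; each one is a
        distinct simulation run of the same base scenario."""
        for speed, light, occlusion in itertools.product(
                PEDESTRIAN_SPEED_MPS, LIGHTING, OCCLUSION):
            yield {"speed_mps": speed, "lighting": light, "occlusion": occlusion}

    print(sum(1 for _ in scenario_variations()))  # 27 runs of one base scenario

Even this toy example shows how quickly the test space grows: three parameters with three values each already yield 27 runs, before adding weather, road geometry or traffic.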

The risk of not covering all the bases was made evident in the now-infamous Uber Advanced Technologies Group crash in 2018. Elaine Herzberg was struck and killed by a test car operating in ‘AV mode’ as she wheeled her bicycle across a four-lane road in Tempe, Arizona at night.

“We saw in the Tempe, Arizona incident that you can define what a pedestrian looks like and what a bicycle looks like. However, if you don’t define what a pedestrian looks like whilst walking a bike across the road with a bag of groceries hanging from the handlebar, the vehicle will not understand,” observed Chuck Brokish, Director of Automotive Business Development, Green Hills Software. “We need to do more simulation, and we need it to be as close to real-world inputs as possible to account for shading, light sources and angles, and other aspects. The biggest challenge moving forward will be understanding how many miles we need to drive in order to prove the safety of an AV system.”

‘Miles tested’ has been a touchy subject for many in the industry, who view it as an inaccurate representation of an AV developer’s overall capability. “You could take a car to a test track and drive it round and round for a billion miles. Does that mean it is a safe self-driving car?” asked Dr. Henning Lategahn, Founder and Managing Director of Atlatec. “It is more about the mix of different scenarios that a software stack is exposed to. This is something we really need to think about.”
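
One way to make Lategahn’s point concrete is to measure coverage of a scenario catalogue rather than raw mileage. The sketch below does this against a hypothetical catalogue; the category names are invented for illustration, not any developer’s actual taxonomy.

    # Hypothetical catalogue of scenario types an AV programme might track.
    CATALOGUE = {
        "unprotected_left_turn", "pedestrian_midblock_crossing",
        "highway_cut_in", "emergency_vehicle_approach",
        "construction_zone", "occluded_cyclist",
    }

    def scenario_coverage(logged_scenario_types):
        """Fraction of the catalogue actually exercised in testing,
        independent of how many miles were driven."""
        return len(set(logged_scenario_types) & CATALOGUE) / len(CATALOGUE)

    # A million laps of the same track exercise only one scenario type.
    print(scenario_coverage(["highway_cut_in"] * 1_000_000))  # ~0.17

By this kind of measure, a billion identical test-track laps score no better than a single lap, which is precisely the point Lategahn raises.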

It is like looking for a specific needle in a haystack made of more needles

Another challenge is working with what Brokish described as “a system of systems.” In simple terms, he explained, various systems within a vehicle are constantly interacting, which can prove problematic if there is a software error that affects more than one system at a time.

“A transmission controller may be programmed so that if anything goes wrong, it goes into neutral, which is better than going into the wrong gear,” he said. “But how does that work for an autonomous vehicle when it has a transmission problem and it wants to go into neutral, but the car needs to be in gear in order to move itself off the road to safety?”
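
A minimal sketch of that context-dependent fallback, with invented states and logic rather than any production controller’s behaviour, might look like this:

    from enum import Enum, auto

    class Fallback(Enum):
        NEUTRAL = auto()           # conventional fail-safe: stop transmitting torque
        LIMP_TO_SHOULDER = auto()  # stay in gear long enough to clear the lane

    def transmission_fault_response(in_live_lane: bool) -> Fallback:
        """A conventional controller always drops to NEUTRAL; an AV
        stranded in a live traffic lane may instead need to remain in
        gear to move itself off the road before stopping."""
        return Fallback.LIMP_TO_SHOULDER if in_live_lane else Fallback.NEUTRAL

    print(transmission_fault_response(in_live_lane=True).name)  # LIMP_TO_SHOULDER

The design point is that the safe state is no longer a fixed property of one subsystem; it depends on what the rest of the vehicle needs to do, which is exactly what makes a system of systems hard to validate.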

With numerous systems intertwined, finding out what went wrong and why can be a tough ask. As Jeremy Bennington, Solutions & Technical Strategy Lead at Spirent Communications, summarised: “It is like looking for a specific needle in a haystack made of more needles.”
