How can automakers make AI safe for drivers and pedestrians?

A mixture of international cooperation and local solutions could help AI developers overcome unsolved safety problems. By Jacob Moreton

It is widely assumed that artificial intelligence (AI) will soon take its place at the centre of automotive development, but is that future certain? The path towards autonomous, intelligent vehicles remains littered with obstacles, not least the demand for safety.

How can automakers and software developers ensure that safety is at the heart of future innovations in automotive AI?

Free of errors?

Building safe AI systems is “clearly a very difficult problem,” said Stan Boland, Chief Executive of Five, speaking at The Autonomous Main Event, a virtual conference for self-driving stakeholders. The key issue, he explained, is that the systems on which AI is built will always contain some element of error. “That is always going to give rise to the challenge of how we build systems, and how we can confidently meet a level of safety criteria that allows us to put them on the road,” he said. One possible solution could be a so-called “perception system contract,” he argued: if developers accept that errors will occur, they can determine what level of mistake is acceptable and design a system resilient enough to counteract errors within that threshold.
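
Boland did not describe what such a contract would look like in code, but the idea can be sketched. In the minimal Python illustration below, the contract fields, thresholds and metrics are all hypothetical: perception outputs from a validation run are checked against an agreed error budget, and the run only passes while the budget holds.

```python
from dataclasses import dataclass

@dataclass
class PerceptionContract:
    """Hypothetical error budget agreed between perception and planning.

    Both thresholds are illustrative, not taken from any real system.
    """
    max_position_error_m: float = 0.5   # worst tolerated per-object localisation error
    min_detection_recall: float = 0.99  # share of ground-truth objects that must be found

def contract_satisfied(position_errors_m: list[float], recall: float) -> bool:
    """Return True if a validation run stays inside the contract."""
    contract = PerceptionContract()
    worst_error = max(position_errors_m, default=0.0)
    return (worst_error <= contract.max_position_error_m
            and recall >= contract.min_detection_recall)

# A run with a 0.3 m worst-case error and 99.5% recall passes the contract
print(contract_satisfied([0.1, 0.3, 0.2], recall=0.995))  # True
```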

Jan Becker, Chief Executive of Apex AI, agreed that an error-free system was essentially impossible. Because AI is a learned model that teaches a system how to react to given inputs, there would always be unknowns that never appeared in the training data, he said. But the issue is not purely technical: what level of error can the industry, and the public, accept? “There will never be a perfectly safe solution. But it will still be much safer than humans are today,” said Becker.

Learning and development

But AI systems are not simply trained once and released. According to AI development company Appen, autonomous systems “develop understanding, make decisions, and evaluate their confidence from the training data they are given. The better the training data is, the better the model performs.”

For that reason, innovations like over-the-air (OTA) updates are crucial, said Georges Massing, Vice President of Digital Vehicle and Mobility at Mercedes-Benz. It was a matter of scale, he said: the more data a vehicle receives, the better it can understand the environment surrounding it. OTA updates also let automakers develop and update safety features on many units at once—a June 2021 update from BMW upgraded the software on 1.3 million vehicles.

A continuous flow of data to the car can also help systems adapt to new environments, which is particularly important for companies operating across markets with disparate expectations of safe driving. In Germany, for example, a car or bicycle travelling towards the vehicle is considered unsafe, Massing said, while in China it is considered normal. If an autonomous vehicle (AV) developed in Germany were brought to China, it would perceive behaviours that other drivers consider normal as unsafe.

There are important variations even between Western European countries. Even between close neighbours Germany and Austria, what counts as acceptable driving differs, Massing said. “If you drive to the Eiffel Tower in France, refusing to give way is normal. For us in Germany, it is a mess. So the AI has to adapt to different cultural standpoints.”
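
Massing did not say how such adaptation is implemented at Mercedes-Benz. One plausible reading, sketched below with entirely hypothetical parameters and region codes, is to keep region-specific driving norms in configuration, so the same perception and planning stack can reinterpret the same behaviour per market:

```python
# Hypothetical per-region driving norms; the values are illustrative and
# not taken from Mercedes-Benz or any real deployment.
REGION_NORMS = {
    "DE": {"oncoming_in_lane_expected": False, "min_merge_gap_s": 2.0},
    "CN": {"oncoming_in_lane_expected": True,  "min_merge_gap_s": 1.2},
    "FR": {"oncoming_in_lane_expected": False, "min_merge_gap_s": 1.5},
}

def classify_oncoming_vehicle(region: str) -> str:
    """Decide whether an oncoming vehicle in the ego lane is an anomaly.

    Where the behaviour is locally common, treat it as routine traffic to
    plan around; elsewhere, flag it as a safety-critical event.
    """
    norms = REGION_NORMS.get(region, REGION_NORMS["DE"])  # conservative default
    return "routine" if norms["oncoming_in_lane_expected"] else "critical"

print(classify_oncoming_vehicle("DE"))  # critical
print(classify_oncoming_vehicle("CN"))  # routine
```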

Local decisions

Those differences in local expectations mean there is no solid definition of what exactly is acceptable or safe. For that reason, said Boland, the industry needs to formally define rules for driving, whether in terms of safety, comfort or technology. Automakers and suppliers could then measure results against those rules with accurate data, and use that information to refine regulations. A framework could be global, Boland said, but specific local rules would still be needed.
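
Boland gave no formal notation for such rules. One way to make a driving rule measurable, shown below as a toy Python sketch with invented log fields and thresholds, is to express it as a predicate that can be evaluated over recorded drives, yielding a compliance rate that suppliers and regulators could track:

```python
# Toy example: a driving rule expressed as a predicate over one logged
# frame, so compliance can be measured from data. Field names are invented.
def keeps_safe_headway(frame: dict, min_headway_s: float = 2.0) -> bool:
    """Rule: the time gap to the lead vehicle never drops below a threshold."""
    distance = frame.get("lead_vehicle_distance_m")
    if distance is None:
        return True  # no lead vehicle, so the rule is trivially satisfied
    speed = max(frame["ego_speed_mps"], 0.1)  # avoid division by zero at standstill
    return distance / speed >= min_headway_s

log = [
    {"ego_speed_mps": 20.0, "lead_vehicle_distance_m": 50.0},  # 2.5 s gap: compliant
    {"ego_speed_mps": 20.0, "lead_vehicle_distance_m": 30.0},  # 1.5 s gap: violation
]
compliance = sum(keeps_safe_headway(f) for f in log) / len(log)
print(f"headway rule satisfied in {compliance:.0%} of frames")  # 50%
```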

On the other hand, Tatjana Evas, Legal and Policy Officer at the European Commission, said the Commission was prioritising international discussion, at least within the bloc, with its April 2021 proposals for an Artificial Intelligence Act. The act would regulate AI across the EU’s Single Market through product safety rules covering AI systems, including AVs. Evas said the Commission was also working with standardisation bodies to define the meaning of human oversight of AI.

Simulating safety

Embedding safety in AI requires extensive testing. But automakers also need to road test vehicles without compromising the safety of pedestrians or fellow drivers. How should the industry approach the testing process?

For Gary Hicok, Senior Vice President at Nvidia, the answer lies in simulation: “Everything has to be tested in simulation before we road test. Then we go out to test tracks to make sure the vehicle, the operating environment and software are all good. Then finally we put test drivers in the vehicle. During the testing phase, while we are checking it out and making sure that it works well, there is no reason to take risks.”

Meanwhile, Boland cited testing projects in the US, like those of Waymo, Cruise, Aurora and Nvidia, as examples to follow. In fact, Waymo developed a second virtual testing system—Simulation City—after it discovered gaps in its capabilities. Simulation has to be “eventful”, said Boland. “We need to enrich it, and therefore build systems that include intelligent agents that can explore the behavioural elements of a system.”
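
Neither speaker detailed how such agents work internally. As a loose illustration, the toy sketch below reduces an “intelligent agent” to random search over a cut-in scenario, with invented physics and parameter ranges, to show how a simulator can hunt for the parameter combinations that make a scenario eventful:

```python
import random

def simulate_cut_in(gap_m: float, cut_in_speed_mps: float) -> bool:
    """Stand-in for a full simulator: returns True if the run 'crashes'.

    The crash rule (a cut-in gap smaller than a speed-scaled braking
    distance) is purely illustrative physics.
    """
    required_gap_m = 0.8 * cut_in_speed_mps  # toy braking model
    return gap_m < required_gap_m

def explore(trials: int = 1000, seed: int = 0) -> list:
    """An 'exploring agent' reduced to random search: sample cut-in
    scenarios and collect the parameter combinations that fail."""
    rng = random.Random(seed)
    failures = []
    for _ in range(trials):
        gap_m = rng.uniform(2.0, 40.0)   # metres to the cutting-in car
        speed = rng.uniform(5.0, 30.0)   # its speed in metres per second
        if simulate_cut_in(gap_m, speed):
            failures.append((round(gap_m, 1), round(speed, 1)))
    return failures

found = explore()
print(f"{len(found)} failing scenarios out of 1000, e.g. {found[:3]}")
```

Production simulators replace both the random search and the crash rule with far richer models, but the loop structure, propose a scenario, run it, record the failures, is the idea Boland describes.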

Combining forces

In the end, neither the automotive industry nor computer science has a complete answer to the problem of safe AI, Boland argued. The automotive industry excels at designing and testing safe systems and at making resilient design choices; computer science brings its own strengths in building cloud-native systems and applying machine learning to problems. The two will have to be brought together.

But putting that into practice in Europe is still very much a work in progress, Boland added, because the industry is conflicted about what approach it will eventually take to software development. Some companies seek to emulate Apple’s example by developing all hardware and software under one roof, while others intend to integrate capabilities from a variety of sources.

Many industry leaders are unsure which path to take, or do not feel equipped to make changes at all: according to data from McKinsey, only 40% of Research & Development leaders who view software as a major disruptor feel prepared to make the necessary changes to operational models. Ultimately, the car industry has to decide which development path it will take towards safety in AI, Boland said. “Once it decides, it will get easier.”
