
Automakers and regulators must educate consumers on mobility AI

The future of mobility is dependent on AI, but without greater understanding among consumers, trust could be hard to build. By Xavier Boucherat

The mobility sector is keen to realise the full benefits of artificial intelligence (AI), not least to open up the revenues which data-driven connected services could offer. But moving forward, it must balance these opportunities with the rights of drivers, passengers and pedestrians. A number of concerns have already surfaced, all of which will become more pressing as the technology is further embedded into vehicles, mobility services and infrastructure.


Privacy and liability are two of the major challenges. As Christian Theissen, Partner, White & Case explains, mobility has become inherently connected to consumer habits and behavioural patterns, much like the e-commerce and social media industries. “The access, ownership, storage and transmission of personal data, such as driving patterns, must be taken into consideration by both lawmakers and companies gathering and using data,” he says. Meanwhile, in a world of AI-powered self-driving, at what point do regulators start blaming the machine when something goes wrong?

Automakers must build knowledge

Part of the challenge in considering these issues is that, as things stand, there is limited understanding among consumers of their rights. “Consumers appreciate AI,” says Cheri Falvey, Partner, Crowell & Moring, “and in particular the ease with which navigational apps help guide them to their destination. Whether they appreciate how their data is accumulating and developing a record of their mobility patterns, and what their rights are in respect of that data, is another question.”


This is in part because it is not always clear when AI is at work. A driver may register when a car’s navigation system learns the way home, but won’t necessarily realise that data on how a car is driven is being collected for predictive maintenance purposes, or that their data is being fed into infrastructure networks to manage traffic flow.

Theissen agrees. “Many consumers, even so-called digital natives, are entirely oblivious of how Big Data and AI are already in use today and affecting their lives,” he says, “and most are not aware of their rights in relation to them.” AI is often behind the scenes, from spam filters to ride-hailing apps, where it is used as an optimisation tool.

Transparency will need to improve. In the future, AI will drive vehicles and ultimately take responsibility for road-users’ safety, as well as power the mobility services which cities and governments hope will reduce emissions and congestion. Unless consumers develop an understanding of their rights over data ownership, the industry risks alienating people and undermining its own ability to deploy the technology.

“AI is critical to the future of the automotive industry,” says Falvey, “and so is the ability of consumers to understand their rights and protect their data. If consumers cannot understand, they won’t trust AI or the companies that use it, and this will impact the future of innovation.” Already there have been some cautionary tales, she adds, with privacy lawsuits against ride-share companies that have had user data compromised by hackers. However, it is clear that automakers and tech companies are working with lawmakers to ensure that mechanisms are in place to clearly define how data can be used.

Tricky business?

Unsurprisingly, this is unlikely to be straightforward. The primary question, says Theissen, particularly when it comes to autonomous driving, is whether old law can regulate new issues. Up until now, he suggests, the answer has been yes: in Germany, for example, traditional tort and product liability law has remained unchanged with regard to automated driving. The country brought new regulations for autonomous vehicles (AVs) into force in 2017, and in 2016 established an ethics commission which produced a report comprising 20 rules prioritising human safety, data protection and transparency.


“However,” he continues, “the complexity of AI is definitely a factor that will shape regulation. One of the key challenges will be to develop cutting-edge technology in parallel with processes to adopt legislation and get the type-approval for new software. To cope with that, it seems increasingly likely that industry and legislators will have to work hand-in-hand.”

The issue of ‘AI literacy’ is also relevant: regulators and legislators will have to understand how it functions, and how it is applied in specific products and services. Furthermore, legislators will need to consider whether AI can really be regulated by ‘black letter law’—which is to say, specific and unambiguous laws—or whether governance through guidelines and self-regulation is more appropriate.

“Access to source code to understand what has happened when things go wrong is critically important,” says Falvey, adding that processes for such investigations also need to be established. “How AI may have changed the functionality of a vehicle through changes to the software presents another challenge,” she explains. “Telemetry data can help, but accessing telemetric data on a fleet of vehicles for use in root cause analysis and other safety functions requires significant planning to avoid unreasonable data storage demands.” Preserving and producing that data for regulators, she adds, will present significant challenges as well.

Developments today

Work has already begun on the regulatory framework which will safely allow for the deeper integration of AI into vehicles and mobility. A February 2020 white paper on AI published by the European Commission stressed the need for a solid, Europe-wide approach that would improve lives and keep the bloc competitive globally, whilst keeping any AI deployment in line with the EU’s values and protecting people’s privacy.

The paper recommends a combined regulatory and investment approach. Long-term, the bloc aims to build an ‘ecosystem of trust’ which gives companies the confidence and legal certainty they need to innovate. The EU identifies AI as a key technology in meeting its environmental targets, and notes the role the technology could play in enabling cheaper, more sustainable transport.


In the US, Falvey believes California leads the way, having moved earlier this year on legislation that requires micromobility operators to submit anonymised trip data for the state’s own use. The law will also apply to AVs, and clarifies protections for trip data.

“At the federal level in the US,” she continues, “NHTSA guidance makes it clear they expect autonomous vehicle operators to have documented processes for assessment, testing and validation of their crash avoidance capabilities and design choices, many of which have AI functionality.” NHTSA is also signalling expectations with regard to autonomous vehicle system architectures, and their potential to address pre-crash scenarios.

The challenge for regulators will be to protect citizens without stymieing innovation, and many have signalled they want to communicate with companies early and often, concludes Falvey: “There is often little precedent for regulators to rely on when making new policy in this arena, so it’s a good time to create a proactive regulatory strategy that invites discussion and collaboration from the start.” Automakers must not build their products in the dark: having a voice in the room now could save a lot of trouble later.
