
AV sensor fusion must perceive beyond what’s visible for next-gen performance

Tarik Bolat puts forward the argument in favour of ground-positioning radar as an essential part of the sensor fusion systems of autonomous vehicles

In 2019, untempered by worrisome rumours of a global pandemic, the automotive industry was thrumming with optimism for the deployment of autonomous vehicles (AVs). General Motors planned the launch of its robocar offering, Tesla promised its cars would be fully self-driving by year’s end, and nearly every OEM had hopped on the hype train. Fast forward three tumultuous years, and some are saying AVs have stalled. Headlines bounce between regulators’ safety concerns and managed expectations of “we’re still in the concept phase”. What’s caused this critical change in perception? Could it be the sensor systems themselves?


Automakers have realised that the so-called ‘corner cases’ on which perception systems are seldom trained are far more common than originally thought. To deliver a safe experience consumers can trust, more work must be done to ensure AVs can operate safely in every scenario.

Integrating uncorrelated, independent vehicle sensors into the tech stack for AVs and highly automated advanced driver assistance systems (ADAS) is the only way to guarantee passenger safety in the way that regulators and the public require. While it’s widely accepted that multiple sensors are needed to build any safety-critical ASIL D-rated system, automakers must make a concerted effort to integrate new technology into the current stack. The name of the game is independent redundancy. If vehicles rely on a suite of sensor systems that all look at objects in a similar way, such as camera, LiDAR, and forward-facing radar, each focused on the above-ground environment, how can they operate safely where vision is obscured and lane markings are hard to perceive?

Integrating uncorrelated, independent vehicle sensors into the tech stack for AVs and highly automated advanced driver assistance systems is the only way to guarantee passenger safety

There are new technologies, such as ground-positioning radar, that collect data from beneath the road’s surface and perceive the driving environment in new ways. With safety at scale of the utmost importance, developers must embrace this technology to build a more robust and reliable AV product. It’s not only critical for consumer trust, but also for building confidence with regulators that AVs can be more capable than human drivers of getting people and goods from A to B safely.

Camera, LiDAR and traditional radar are not enough

The typical combination for a sensor fusion system is camera, radar, GNSS, ultrasonics, and LiDAR. With sensor fusion, the goal is to create a fail-proof driving experience based on independent and uncorrelated datasets. However, the industry has only just started collecting datasets of the road’s subsurface, which can be more reliable than the information gathered above ground for localisation, a critical component of autonomous driving.
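To make the fusion idea concrete, below is a minimal sketch of one common approach: inverse-variance weighting of independent position estimates. The sensor names, variances and helper function are illustrative assumptions for this article, not any supplier’s production algorithm.

import numpy as np

def fuse_estimates(estimates):
    # Inverse-variance weighted fusion of independent 1-D position estimates.
    # estimates: list of (mean, variance) pairs, one per sensor.
    means = np.array([m for m, _ in estimates])
    variances = np.array([v for _, v in estimates])
    weights = 1.0 / variances               # more certain sensors carry more weight
    fused_var = 1.0 / weights.sum()         # variance shrinks as sensors are added
    fused_mean = fused_var * (weights * means).sum()
    return fused_mean, fused_var

# Hypothetical lateral-position estimates (metres) from three independent sensors.
camera = (0.42, 0.09)   # faded lane markings: high variance
lidar = (0.35, 0.04)    # sparse landmarks: moderate variance
gpr = (0.31, 0.01)      # stable subsurface fingerprint: low variance

print(fuse_estimates([camera, lidar, gpr]))

The catch is that the fused estimate is only as trustworthy as the independence assumption: if two sensors share a failure mode, their errors no longer average out.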

Imagine an AV driving on pothole-pocked roads where lane markings are starting to fade and there are few visual landmarks such as poles or trees. This is not an uncommon situation, with 40% of U.S. roadways in poor or mediocre condition. It is a problem for current sensor fusion systems, which are often trained on well-maintained paved roads and need clear lane markings or visual cues to determine where a vehicle is located and how to drive safely in an unknown environment. Camera, radar and LiDAR struggle in these situations because they are limited to what they see in front of, and around, the vehicle.

Technologies like GPR will allow a vehicle to reliably determine its position even in the midst of winter

Beyond road degradation, there are scenarios in which common weather conditions obstruct the view of sensor fusion systems. Rain, snow, and other inclement weather are commonplace; they can block a sensor’s field of view, causing it to miss important reference objects and degrading a vehicle’s performance. Other interference, such as crowded urban streets and solar glare, can cause one or more sensors to fail. This has implications for functional safety standards: is the industry going far enough to ensure the safety of automated systems amid common challenging environments? Can we call sensors redundant when they all report a similar set of surface landmarks? Redundancy only works well when constructed with uncorrelated sensors, like ground-positioning radar, that offer diverse pictures of driving environments and ensure maximum safety.
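A toy calculation shows why correlation is the enemy of redundancy. The numbers below are illustrative assumptions, not field data:

# Suppose each of three sensors misperceives a given scene with probability 0.01.
p_fail = 0.01

# If the sensors fail independently (they look at entirely different features),
# all three fail at once with probability p_fail cubed:
independent = p_fail ** 3    # 1e-06: one scene in a million

# If they all depend on the same above-ground landmarks, one cause (snow,
# glare, faded markings) can blind them together; in the fully correlated
# limit, the joint failure probability is no better than a single sensor:
correlated = p_fail          # 1e-02: one scene in a hundred

print(independent, correlated)

Three nominally redundant sensors that share a failure mode buy almost nothing; one genuinely uncorrelated sensor changes the arithmetic by orders of magnitude.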

Beating limitations with new sensors for the AV stack

There’s no question: the sensors deployed on today’s AVs have powered incredible advances in the last several years. Promising developments in LiDAR see prices declining and performance improving, camera-based perception is buoyed by artificial intelligence systems, and thermal cameras provide an alternative to typical above-ground systems. But there’s clearly room for improvement. Reports indicate that consumers are not happy with existing ADAS and AV technologies; some studies have shown that as many as 70% of drivers disable ADAS features due to unreliable and inconsistent performance. As it stands, autonomous perception systems remain limited to what they see in front of, and around, the vehicle.

Incorporating ground-positioning radar that can see into the subsurface of the road can yield ASIL D-rated vehicle localisation, reassuring drivers and manufacturers that their vehicle is operating with maximum safety. The environment beneath the road’s surface is stable over long time periods. Unaffected by changes in weather and light, every inch of road has a subsurface environment as unique as a human fingerprint, enabling vehicles to localise with precision and reliability. Packaged under AVs, this new technology introduces a field of view and data layer never before seen.
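As a rough illustration of how a subsurface fingerprint can be used for localisation, the sketch below slides a live scan along a previously recorded one-dimensional reflectivity profile and picks the offset with the highest normalised cross-correlation. The signal model, function names and synthetic data are hypothetical stand-ins, not GPR’s actual pipeline:

import numpy as np

def localise(prior_map, live_scan):
    # Slide the live subsurface scan along the recorded map and return the
    # offset with the highest normalised cross-correlation score.
    n = len(live_scan)
    live = (live_scan - live_scan.mean()) / live_scan.std()
    best_offset, best_score = 0, -np.inf
    for offset in range(len(prior_map) - n + 1):
        window = prior_map[offset:offset + n]
        window = (window - window.mean()) / window.std()
        score = float(np.dot(window, live)) / n
        if score > best_score:
            best_offset, best_score = offset, score
    return best_offset, best_score

# Synthetic demo: a random profile stands in for real radar reflectivity data.
rng = np.random.default_rng(0)
prior_map = rng.normal(size=2000)     # mapped subsurface profile along a road
true_pos = 742
live_scan = prior_map[true_pos:true_pos + 200] + rng.normal(scale=0.3, size=200)

print(localise(prior_map, live_scan))  # recovers an offset at or near 742

Because the underlying profile barely changes over time, the same matching works whether the surface above is dry, wet or snow-covered.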

The name of the game is independent redundancy.

Subsurface data helps close the sensor fusion gap

The industry must reassess where it stands on sensor fusion to advance the safety of ADAS and AV systems, and leverage new technologies to hasten ASIL D certification and make roads safer for all. Despite advances in AV sensor technologies, there are still significant performance-cost trade-offs, and some in the industry have not yet recognised the value of leveraging uncorrelated, accurate subsurface data as a means of redundancy and validation.

Different sensor technologies that offer more accurate, complete and dependable data sources are needed for safety, and technology companies must work together to lobby for regulations that require this of all AV systems. Introducing a high-performance sensor that fails and succeeds independently of the current stack makes the likelihood of a common point of failure vanishingly small. Subsurface data will help the industry get there, ensuring there is no single point of failure in safe AV systems.


About the author: Tarik Bolat is Chief Executive and Co-founder of GPR
