
LeddarTech proposes gradual ramp-up from ADAS to autonomous

Instead of focusing on Level 4 autonomy straight away, LeddarTech believes in scaling up AI functionality iteratively for a better result. By Will Girling

Competition in the autonomous vehicle (AV) space is heating up: advances in artificial intelligence (AI) could create a US$10tr opportunity, and early pioneers are eager to capitalise. CES 2025 demonstrated that progress in software-defined mobility is accelerating, and automated/autonomous driving could be its most valuable use case.

LeddarTech, which exhibited at the event, certainly thinks so. But there are still substantial challenges for the industry to resolve. Founded in 2007, this global auto software developer headquartered in Canada believes new approaches to AI, sensor fusion, and vehicle perception can help automakers and suppliers finally bring AVs to market.

However, Chief Executive and President Frantz Saintellemy tells Automotive World that the journey will be gradual. Rather than going all in on SAE Level 4+, LeddarTech is building autonomy iteratively, from Level 2 advanced driver assistance systems (ADAS) up. Using the LeddarVision Surround-View LVS-2+ stack, he states, yields a safer, better-performing AI foundation for progressing to Level 3 and above.

What big technical challenges is the automotive industry currently facing while implementing ADAS and autonomous driving systems?

Developing AVs requires substantial capital investment and long-term commitment. In this economically challenging environment, compounded by mixed and often negative public sentiment towards AVs, manufacturers are quickly redirecting their focus to short- and medium-term projects, which are easier to realise. ADAS that can scale to higher levels of autonomy, and eventually to fully autonomous vehicles, may ultimately be the winning approach.

However, from a technical perspective, there’s a performance delta: existing ADAS have constrained operational capabilities. Many struggle in adverse conditions such as rain, fog, or dirty sensors, and their effectiveness is often reduced at night. Some systems fail to detect pedestrians or cyclists with the required accuracy. For AVs, these performance deltas extend to control and decision-making technologies: reports periodically emerge of AVs getting stuck, honking at each other, or even driving in circles.

From where do these performance issues stem?

Many environmental perception solutions currently in use are rigid, with software designed to work exclusively with specific sensors. This creates challenges for automakers and Tier 1s, as it limits their ability to improve performance, add new features, or maintain systems in the field. They also face difficulties scaling their systems to higher levels of ADAS and autonomous driving. Transitioning from Level 2 to Level 3 requires a complete overhaul, leading to increased development time, higher system costs, and added complexity in maintaining production for multiple software versions.

Developing AVs requires substantial capital investment and long-term commitment

In the end, most of these technical challenges can be traced back to object-level fusion, which is widely used in today’s basic ADAS warning systems. Such systems struggle to meet regulatory safety requirements while also addressing consumer demand for convenience features at affordable cost.

How can the LeddarVision Surround-View LVS-2+ stack help automakers?

LeddarVision uses advanced AI and computer vision algorithms to generate precise 3D environmental models that enhance decision making and improve navigation safety. The stack offers centralised, multi-modality, sensor-agnostic fusion that scales from automated to highly automated driving, and it can handle an expanding variety of use cases, features, and vehicle sensor configurations. It also addresses many of the limitations of object-level fusion ADAS architectures through AI-based, low-level sensor fusion and perception technology, which extends the effective perception range: we can achieve up to twice the effective range using the same sensor set.
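To make “centralised, sensor-agnostic fusion” concrete, here is a minimal sketch of what such an interface could look like in code. All names here (RawMeasurement, Sensor, CentralFusionEngine) are hypothetical illustrations under the assumptions described in the interview, not LeddarTech’s actual API.

```python
# A minimal sketch of a centralised, sensor-agnostic fusion front end.
# Every name below is hypothetical; the point is only that the fusion
# core depends on one shared interface, not on any specific sensor model.
from dataclasses import dataclass
from typing import Protocol

@dataclass
class RawMeasurement:
    sensor_id: str
    timestamp: float
    modality: str   # e.g. "camera" | "radar" | "lidar" | "ultrasonic"
    data: bytes     # raw samples, interpreted downstream per modality

class Sensor(Protocol):
    def read(self) -> RawMeasurement: ...

class CentralFusionEngine:
    """Consumes raw measurements from any Sensor implementation."""
    def __init__(self, sensors: list[Sensor]):
        self.sensors = sensors

    def step(self) -> list[RawMeasurement]:
        # Collect one raw frame per sensor; downstream perception runs on
        # the combined set, so adding or swapping a sensor does not change
        # the fusion core's code.
        return [sensor.read() for sensor in self.sensors]
```

Because the engine only sees the shared interface, adding a new sensor type or vehicle configuration means implementing one adapter rather than reworking the stack.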

How does low-level sensor fusion improve perception?

Cars are increasingly equipped with complex sensor suites, including cameras, LiDAR, radar, and ultrasonic sensors, to gather data about their surroundings. How this data is processed is crucial, and there are two fusion techniques: object-level fusion and low-level fusion.

In the traditional object-level fusion technique, each sensor individually detects an object and runs perception algorithms to identify what it is and determine its other properties. This approach processes data from each sensor in isolation.

Meanwhile, the low-level fusion approach pioneered by LeddarTech fuses the raw data from multiple sensors before running perception algorithms on the combined data to identify the object and its properties. AI algorithms process the fused data to detect, identify, classify, and segment objects such as other vehicles, road signs, obstacles, and vulnerable road users like pedestrians. Additionally, AI is used to analyse the vehicle’s surroundings to support motion and path planning. Machine learning techniques, particularly deep learning, are employed to train models that can recognise and classify these objects with high accuracy.
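The difference between the two data flows can be sketched in a few lines of Python. The toy confidence model and threshold below are hypothetical, not LeddarTech’s implementation; the sketch only illustrates why pooling raw evidence before the detection decision lets weak cues reinforce each other, which is how low-level fusion can extend effective range.

```python
# Illustrative contrast between object-level and low-level sensor fusion.
# The confidence values, grid cells, and threshold are all hypothetical.

THRESHOLD = 0.5  # minimum evidence needed to report a detection

def object_level_fusion(camera_hits, radar_hits):
    """Each sensor decides alone; only the final object lists are merged."""
    camera_objects = [h for h in camera_hits if h["conf"] >= THRESHOLD]
    radar_objects = [h for h in radar_hits if h["conf"] >= THRESHOLD]
    return camera_objects + radar_objects

def low_level_fusion(camera_hits, radar_hits):
    """Raw evidence is pooled per location before the detection decision."""
    fused = {}
    for hit in camera_hits + radar_hits:
        cell = hit["cell"]  # shared spatial grid cell
        fused[cell] = fused.get(cell, 0.0) + hit["conf"]
    return [cell for cell, conf in fused.items() if conf >= THRESHOLD]

# A distant pedestrian gives each sensor only weak evidence (0.3 each).
camera_hits = [{"cell": (120, 4), "conf": 0.3}]
radar_hits = [{"cell": (120, 4), "conf": 0.3}]

print(object_level_fusion(camera_hits, radar_hits))  # [] -> missed
print(low_level_fusion(camera_hits, radar_hits))     # [(120, 4)] -> detected
```

In this toy example, the pedestrian is missed when each sensor decides alone, because neither sensor’s evidence clears the threshold, but detected once the raw evidence is combined.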

Sensor configuration for LeddarVision Surround-View (Source: LeddarTech)

AI algorithms, such as convolutional neural networks (CNNs) and vision transformers, are then utilised to process and interpret the data from the vehicle’s sensors. Sensor fusion techniques combine this data to provide a comprehensive understanding of the environment, ensuring redundancy and enhancing accuracy. Deep learning models, particularly those based on CNNs and recurrent neural networks, are trained on extensive datasets to detect and classify objects. This includes identifying other road users, pedestrians, unexpected obstacles, road signs, and lane markings. Techniques like transfer learning improve these models further by fine-tuning pre-trained networks on specific driving datasets.
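As an illustration of the transfer-learning step described above, the sketch below fine-tunes a generic pre-trained image backbone for driving-related classes. It assumes PyTorch and torchvision; the class count, dataset, and training loop are hypothetical, and this is a generic recipe rather than LeddarTech’s pipeline.

```python
# A minimal transfer-learning sketch: fine-tune a pre-trained network
# on a (hypothetical) driving dataset. Generic recipe, not LeddarTech's.
import torch
import torch.nn as nn
from torchvision import models

# Start from a network pre-trained on a large generic image dataset.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pre-trained feature extractor...
for param in backbone.parameters():
    param.requires_grad = False

# ...and replace the classifier head with one sized for driving classes
# (e.g. vehicle, pedestrian, cyclist, road sign, lane marking).
NUM_DRIVING_CLASSES = 5  # hypothetical
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_DRIVING_CLASSES)

# Only the new head is trained on the driving dataset.
optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def fine_tune_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimisation step on a batch from the driving dataset."""
    optimizer.zero_grad()
    loss = loss_fn(backbone(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```

The design choice is the one the interview names: rather than training from scratch, a network that already recognises generic visual features is adapted to driving-specific classes with far less data and compute.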

Can you share any use cases or partnerships that demonstrate your product’s efficacy?

We have conducted more than 80 in-vehicle, on-the-road demonstrations and engaged with more than 200 different industry professionals. The feedback has been overwhelmingly positive: OEMs and Tier 1 suppliers have expressed significant interest in our solution.

One of our current collaborations is with Arm. By optimising critical performance-defining algorithms within the ADAS perception and fusion stack for the company’s central processing units (CPUs), we have successfully minimised computational bottlenecks and enhanced overall system efficiency using the Arm Cortex-A720AE CPU. This partnership is key as the industry shifts towards a software-defined vehicle era with centralised and zonal E/E architectures.

How will you continue to iterate and develop LVS-2+ for higher levels of autonomy?

The transition from Level 2 to Level 3 autonomy marks a significant evolution, shifting system operation from a fail-safe model to a fail-operational one. This progression introduces numerous new requirements and challenges, including updates to the safety concept, enhanced sensor redundancy architecture, enriched environmental perception features, and increased computing capabilities.
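The fail-safe versus fail-operational distinction can be illustrated with a toy control-loop sketch. The sensor and planner objects below are hypothetical stand-ins; a real Level 3 safety concept involves redundant compute, power, and actuation, not just a second sensor read.

```python
# Toy sketch of fail-safe vs fail-operational behaviour on a sensor fault.
# All objects are hypothetical stand-ins for a real safety architecture.

def fail_safe_step(primary_sensor, planner):
    """Level 2 style: on a fault, disengage and hand control to the driver."""
    reading = primary_sensor.read()
    if reading is None:                         # fault detected
        return "DISENGAGE_AND_ALERT_DRIVER"
    return planner.plan(reading)

def fail_operational_step(primary_sensor, backup_sensor, planner):
    """Level 3 style: redundancy keeps the system operating after a fault."""
    reading = primary_sensor.read()
    if reading is None:
        reading = backup_sensor.read()          # redundant channel takes over
    if reading is None:
        return "EXECUTE_MINIMAL_RISK_MANEUVER"  # e.g. a controlled stop
    return planner.plan(reading)
```

The key difference is that the fail-operational loop must keep producing safe outputs after a fault, which is what drives the sensor redundancy and added compute the answer above describes.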

OEMs and Tier 1 suppliers have expressed significant interest in [LeddarTech’s] solution.

LeddarTech has already begun to define these critical concepts and develop the foundational building blocks needed to support Level 3. In collaboration with our industry partners, we have successfully developed an initial safety concept to address the unique challenges of fail-operational systems. This concept serves as a cornerstone for our ongoing advancements, ensuring that LVS-2+ continues to meet the rigorous demands of higher autonomy levels.

So, what role could LeddarTech play in taking automated/autonomous driving fully into the mainstream?

We are delivering scalable, cost-effective solutions that provide Level 3 performance at Level 2 costs, making ADAS more accessible. By processing raw data from multiple sensors, LeddarVision enhances safety and reliability in complex scenarios, strengthening consumer trust in automated driving. Collaborating with OEMs, Tier 1s and other major industry players, LeddarTech fosters industry-wide innovation and accelerates the deployment of autonomous features. This approach reduces development complexity and time-to-market, enabling automakers to bring cutting-edge technologies to a broader audience with less risk.

