SDV underpinnings advance from connectivity to on-device AI

Qualcomm Europe’s President shares how the company became the go-to supplier for SDV building blocks. By Megan Lampinen

Qualcomm has emerged from its smartphone roots to become one of the leading players within the software-defined vehicle (SDV) ecosystem. Its expertise in connectivity and intelligent computing is paving the way for AI-defined mobility, raising the bar on immersive and personalised in-vehicle experiences and accelerating the push towards increasingly automated driving.

Enrico Salvatori, Senior Vice President and President of Qualcomm Europe, has overseen the company’s automotive journey first-hand and helped position its technology as the go-to solution for software-defined mobility. Given current consumer trends and recent technology advances, he believes the sort of on-device AI enabled by Qualcomm will underpin the automotive sector’s next stage of evolution.

Qualcomm is an engineering company with a history in smartphones. How did it come to play a foundational role in the SDV space?

Our original core competency within automotive was connectivity in the car through Wi-Fi and Bluetooth. We were simultaneously evolving the telematics roadmap. But as an engineering company, we love to fix problems. The automotive environment has plenty; it poses more constraints and challenges than we were used to within the smartphone business. We developed a computing platform to enhance the cockpit experience and in-vehicle infotainment (IVI) engagement. That put us in the right place at the right time.

The E/E architecture was moving from decentralised compute with multiple microcontrollers to a central compute architecture. That meant more processing power was required to cover multiple functions. Qualcomm was ready with our CPU architecture. We drew on previous experience with smartphones and came up with a dedicated CPU for automotive.

Where has your computing journey positioned you today?

Edge computing, or computing in the car, evolved into the current AI in the car—adding a neural processing unit (NPU) alongside the CPU or GPU to support AI. Today, we have a unique position delivering connectivity, cockpit, and AI for autonomous driving and ADAS with the Digital Chassis Platform. This allows OEMs and Tier 1s to focus on the application layer, where there is more opportunity for differentiation.

This path from connectivity to edge computing to edge AI in the car has also helped Qualcomm in other markets. It’s no longer the smartphone platform serving cars—it’s the other way around. In some scenarios, the car platform is serving PC and smartphone compute.

[Image: Qualcomm Snapdragon Digital Chassis. Caption: The E/E architecture has evolved significantly over the years]

There is a lot of noise about AI in general, but Qualcomm talks specifically about edge AI and AI on device. What’s the significance of that approach?

It means the hardware can run AI algorithms on the device. AI model training stays in the cloud, and we offload some functions, such as inference, to the device. We don't compete with the cloud; we add value and are complementary. This architecture trend is opening the path for new models and AI applications focused more granularly on what the user experience could provide. We are in the middle of that trend.
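The cloud-train/device-infer split described here can be sketched in a few lines. This is a toy illustration, not Qualcomm's stack: all class names are invented, and a trivial least-squares fit stands in for real model training.

```python
# Illustrative sketch of the hybrid pattern: train in the cloud,
# ship the resulting model to the vehicle, infer on the device.
# Names are hypothetical; no Qualcomm API is implied.

class CloudTrainer:
    """Stands in for cloud-side training: fits a tiny linear model."""
    def train(self, samples):
        # samples: list of (x, y); least-squares slope through the origin
        num = sum(x * y for x, y in samples)
        den = sum(x * x for x, _ in samples)
        return {"weight": num / den}  # the "model" shipped to the device

class OnDeviceInference:
    """Stands in for NPU-side inference: runs locally on vehicle data."""
    def __init__(self, model):
        self.weight = model["weight"]

    def infer(self, x):
        # Input data never leaves the device; only this result is used.
        return self.weight * x

model = CloudTrainer().train([(1, 2), (2, 4), (3, 6)])
edge = OnDeviceInference(model)
print(edge.infer(5))  # -> 10.0
```

The point of the pattern is the boundary: training consumes aggregated data in the cloud, while inference consumes live, potentially sensitive in-vehicle data that stays local.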

Can you offer some examples of the impact on user experience?

Vehicle occupants can talk to the car and receive all the information they need for pretty much every application. There is an evolution towards an adaptive user interface, potentially moving away from keypads or touchscreens. AI can recognise a gesture by drawing on camera data: it can see you are pointing at an object outside, cross-check with a digital map to understand that it is a monument, and then provide relevant information. With AI in the car, we can recognise drivers and passengers and adapt the temperature, music settings, and so on according to their profiles. It makes for a much richer user experience overall.
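The cross-check step in the pointing example reduces to a lookup against map data. A minimal sketch, assuming the gesture pipeline has already resolved the pointed-at location to coordinates (the map contents and tolerance value here are purely illustrative):

```python
# Hypothetical digital-map fragment: coordinates -> point of interest.
POI_MAP = {
    (48.8584, 2.2945): "Eiffel Tower",
}

def nearest_poi(lat, lon, tolerance=0.01):
    """Cross-check a pointed-at location against the map of monuments.

    Returns the matching POI name, or None if nothing is close enough.
    """
    for (plat, plon), name in POI_MAP.items():
        if abs(plat - lat) < tolerance and abs(plon - lon) < tolerance:
            return name
    return None
```

In a real system the gesture vector, camera calibration, and map query would each be substantial subsystems; the sketch only shows where the map cross-check sits in the flow.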

What about the safety implications of bringing inference into the device?

AI can read the driver's status, tell if they are tired or distracted, and take action if necessary. It also comes into play with security: the data needed to run AI inference can be kept local in the car, so there is no need to send it to the cloud.
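The keep-data-local idea can be illustrated with a driver-monitoring toy example: raw per-frame measurements stay inside the function, and only the decision leaves it. The function name, input format, and threshold are assumptions for illustration only.

```python
# Hypothetical sketch: raw frame data never crosses this boundary;
# only a boolean alert does, mirroring the local-inference idea above.

def drowsiness_alert(eye_closure_ratios, threshold=0.6):
    """Return True if the average eye-closure ratio crosses the threshold.

    eye_closure_ratios: per-frame values in [0, 1] from an in-cabin camera.
    The raw values are discarded after the decision is made.
    """
    avg = sum(eye_closure_ratios) / len(eye_closure_ratios)
    return avg > threshold

print(drowsiness_alert([0.2, 0.9, 0.8, 0.7]))  # -> True
```

Only the alert flag would need to trigger an in-car action or be logged, which is what removes the need to move camera data off the vehicle.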

Why are these capabilities so important for automakers?

As well as the cyber security issue, it helps them compete in terms of user experience, which OEMs are increasingly focused on improving through their software ecosystem. The fact that data will be available in the car reduces the cost of offloading information to the cloud.

What major industry developments are shaping the direction of Qualcomm’s business and its offerings?

Hybrid AI, a combination of cloud with device, will prove the optimal scenario, and we anticipate a lot of activity here. We also see a lot of interest in software applications that leverage AI, and we introduced our AI Hub for this reason. Available on our portal, it offers a vast library of models to help application developers. They can select the model they want to use and the platform—what chipset, what NPU performance—and test the application they develop on virtual hardware in the cloud. You will see more and more app stores migrating to this AI Hub.

What’s the outlook for your automotive business in terms of revenue or order pipeline?

Automotive is definitely growing and accounting for a larger share of Qualcomm's overall business. We have a solid pipeline of global programmes worth US$45bn, much of which stems from European OEMs. 2023/2024 marked the start of cockpit IVI commercialisation, with the first cars arriving on the market. Before that, our automotive business was mainly around telematics and connectivity. Now we are introducing the ADAS platform, which should prove a significant revenue driver.

Competition in the industry is heating up. What do you think gives Qualcomm the edge over its rivals?

With the Digital Chassis, there is a real benefit in having a single hardware and software environment across the technology streams. We can re-use software and, in effect, implement the SDV because we can go across connectivity, CPU, GPU, and NPU performance horizontally. Sensor use can be optimised because a single sensor can serve more than one function, spanning both autonomous driving and the cockpit. That is where we will make a valuable contribution to the automotive sector. Of course, using GenAI and other AI models to shape the in-car user interface and customer experience is the next big step.

https://www.automotiveworld.com/articles/sdv-underpinnings-advance-from-connectivity-to-on-device-ai/