Deep learning, a form of machine-learning and a component of artificial intelligence (AI), could yet prove an essential enabler for fully autonomous vehicles. Inspired by the structure of the human brain, it could give cars the ability to safely negotiate the many complex scenarios that human drivers encounter on the roads every day.
The concept is nothing new. Indeed, the original idea of an artificial neural network modelled on the brain was conceived in the earliest days of AI, but for software to ‘learn’, it needs to be able to parse gargantuan amounts of data, or ‘examples’, and this requires massive amounts of computing power. Historically, this limited practical applications, but the rise of Graphics Processing Unit (GPU) accelerated computing has helped overcome the constraint. Whereas a central processing unit (CPU) has a handful of powerful cores optimised for working through tasks in sequence, a GPU has thousands of smaller cores that can work on many tasks simultaneously. GPU-accelerated computing works by offloading the compute-intensive portions of a program to the GPU, leaving the CPU’s cores free to execute the rest of the code.
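The contrast between sequential and data-parallel execution can be illustrated in miniature. The sketch below uses NumPy’s vectorised operations as a rough stand-in for the GPU’s model of applying one operation across thousands of elements at once, with a plain Python loop mimicking a CPU core working through the same task in sequence; the brightness-adjustment example is purely illustrative, not drawn from any system mentioned in this article.

```python
import numpy as np

def brighten_sequential(pixels, gain):
    # CPU-style: process one element at a time, in order.
    out = []
    for p in pixels:
        out.append(min(p * gain, 255))
    return out

def brighten_parallel(pixels, gain):
    # GPU-style (approximated with NumPy): the same operation is
    # applied to every element of the array at once.
    return np.minimum(np.asarray(pixels) * gain, 255)

frame = [10, 100, 200, 250]
# Both paths compute the same result; only the execution model differs.
assert brighten_sequential(frame, 2) == list(brighten_parallel(frame, 2))
```

In a real GPU-accelerated pipeline the offloaded kernel would run on the device while the CPU continues with other work; here both run on the CPU, so the sketch captures the programming model rather than the speed-up.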
Recent developments demonstrate just how far AI has come. Google-owned DeepMind’s AlphaGo computer defeated Lee Sedol, one of the world’s strongest players, at Go, a traditional Chinese board game – a feat all the more remarkable when you consider that there are vastly more legal positions in Go than there are atoms in the observable universe.
Speaking at the inaugural GPU Technology Conference (GTC) in Amsterdam, Jen-Hsun Huang, Chief Executive of Nvidia, declared that society was on the verge of a new AI revolution. The big bang for GPU computing, he suggested, came in 2012 when AlexNet, a deep neural network for image classification, outperformed hand-written algorithms designed by some of the world’s leading computer vision experts. The wider extrapolation of AlexNet’s achievements is “daunting,” said Huang, and is going to enable a whole host of technologies – not least, self-driving vehicles.
“If you have a fleet of self-driving cars, one of them is eventually going to find something it doesn’t recognise,” he said. “If that car stops, it will remain stopped. And so full autonomy is going to require even more ability to detect corner conditions.” But how closely is the automotive industry watching GPU developments, and how relevant will they be?
Up the learning curve
Erik Coelingh is Senior Technical Leader for Safety and Driver Support Technologies at Volvo Car Corporation, and as far as he is concerned, Volvo cannot hope to fulfil its vision without deep learning. Coelingh and his team are spearheading the Drive Me project, in which 100 customers will test-drive XC90s equipped with Volvo’s latest self-driving technology. Trials will take place in real-life traffic along commuter routes around the OEM’s hometown of Gothenburg, and are due to begin in 2017. A similar project has been confirmed for London, and other cities are being considered.
“We need to learn everything we can around how deep learning can be applied to make cars more intelligent,” said Coelingh. “Volvo has been talking about self-driving cars since the summer of 2007, and back then, people were unsure what self-driving programmes could mean. That has changed very quickly. Work on Drive Me started in 2013, when our top management realised just how important this was going to be. The promise of self-driving cars is now extremely attractive, both to consumers and to society.”
To help fulfil this promise, Volvo already works closely with Nvidia, and is using the company’s Drive PX 2 platform in the advanced prototypes that will soon hit the road. The computational power this affords allows the cars to process video images, make decisions and carry out other compute-intensive tasks.
“Just consider what a self-driving car needs to do,” said Coelingh. “It needs to be able to perceive its environment, and this creates a huge amount of data that needs to be processed in real time in order to make good decisions. The Drive PX 2 platform allows us to connect the things that collect this data. Our self-driving cars are using nine cameras, seven radars, a LiDAR and high-definition map data. Processing all this requires huge amounts of power.”
“There is no self-driving car on sale today, and there won’t be until someone can prove their robustness” – Erik Coelingh, Volvo Car Corporation
GPU-accelerated computing will lend vehicles the power to perform such complex sensor fusion, which in turn will help to democratise safety. Coelingh is clear on the need for this – LiDAR, for example, will be an essential component. Being able to detect anything that might end up on the road in any conditions, said Coelingh, is extremely challenging. “The only way to solve that is to use all sensing principles that you have available. Radar, camera and LiDAR all have their weaknesses, but in combination, you can really make a robust system. It’s true that LiDAR is expensive, but developments continue behind the scenes to solve this problem.”
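The intuition behind combining sensors with different weaknesses can be sketched with a textbook fusion technique. The example below is not Volvo’s actual fusion stack; it is a minimal inverse-variance weighting of independent distance estimates, with the sensor names and uncertainty figures chosen purely for illustration. A confident sensor (small variance) pulls the fused estimate towards its reading, while a degraded one (large variance) is largely discounted.

```python
def fuse_estimates(estimates):
    """Fuse (measurement, variance) pairs from independent sensors using
    inverse-variance weighting. Returns the fused value and its variance."""
    weights = [1.0 / var for _, var in estimates]
    fused = sum(w * m for (m, _), w in zip(estimates, weights)) / sum(weights)
    fused_var = 1.0 / sum(weights)  # always below the best single sensor
    return fused, fused_var

# Illustrative distance-to-obstacle readings in metres: radar is confident,
# the camera is blinded by low sun (huge variance), LiDAR sits in between.
readings = [(25.0, 0.5),    # radar
            (40.0, 100.0),  # camera, degraded
            (24.0, 1.0)]    # LiDAR
distance, variance = fuse_estimates(readings)
# The fused estimate stays near the trustworthy sensors, and its variance
# is lower than that of any single sensor on its own.
assert variance < min(v for _, v in readings)
```

This is only the simplest static case; production systems track objects over time (e.g. with Kalman-style filters), but the principle – weaknesses in one modality compensated by strengths in another – is the same one Coelingh describes.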
Even the laser sensor Volvo uses now, he suggested, is “really quite affordable” for a car like a Volvo. “We will continue to lead in democratising this kind of technology,” Coelingh said. “We were the first to fit collision avoidance features based on LiDAR technology, back in 2008. There are millions of Volvos using automatic emergency braking (AEB) and other features which are saving lives every day.”
Let’s not be hasty
Demos and promotional films, suggested Coelingh, are a good way to show off what’s possible in autonomous drive technology, but may give the impression that we’re closer to a self-driving future than we really are. The challenges ahead, he stressed, remain enormous. Building a vehicle that can deal with exceptional circumstances such as extreme weather, road traffic incidents or technical faults in the infrastructure will require the leaps forward in deep learning and computing power for which Nvidia is pushing.
“The full power of deep learning has not yet been realised within automotive,” said Coelingh. “We’ve only taken baby steps so far. But this will change very quickly. Right now, every new Volvo vehicle has a camera, and this camera is gathering data – recognising pedestrians, lane markers, animals, cyclists and more. That technology is there. But when we make the step towards fully autonomous vehicles, which is to say when we tell our customers they no longer have to drive and they can perhaps read a book instead, the amount of processing power needed, as well as the quality and robustness of the technology, is of a completely different magnitude.”
For now, several situations remain that current technology cannot handle. Several features that will be essential for autonomous driving, such as automatic emergency braking (AEB), have already reached a level of maturity, but still have vulnerabilities. “For example, you may have a single camera looking out the front of your vehicle,” he said. “If the sun is low on the horizon and shining directly into the camera, it’ll blind it. Today this would be an exceptional situation, but if there were many cars with this feature on the road and it began happening every day, we would see a high number of road traffic incidents.”
“Xavier is one of our greatest endeavours as a company. There are so many ideas now, so many applications that we weren’t able to conceive before” – Jen-Hsun Huang, Nvidia
Other exceptional situations, Coelingh added, include pedestrians entering areas they should not, such as highways, or technical failures of components such as the brake pump. In the case of the latter, the car would still need to be able to brake itself, he said, as a self-driving car cannot rely on a potentially distracted driver. A second system would be needed, adding further complexity to the vehicle. “There is no self-driving car on sale today,” he concluded, “and there won’t be until someone can prove their robustness.”
Nvidia is branching out further into the mobility sector, announcing at GTC a partnership with TomTom. The mapping technology company has chosen the Drive PX 2 platform for use in its mapping vehicles, and together the companies plan to create a cloud-to-car platform capable of converting HD video into HD maps – “a computational challenge of extraordinary proportions,” said Huang.
Also unveiled at GTC 2016 was Xavier, an AI supercomputer designed specifically for use in self-driving cars. Built with seven billion transistors, the technology will eventually replace the Drive PX 2 platform. “Xavier is one of our greatest endeavours as a company,” Huang said, suggesting the computer was among a number of developments that proved how important AI was “for the future of the world. There are so many ideas now, so many applications that we weren’t able to conceive before.”
This article appeared in the Q4 2016 issue of Automotive Megatrends Magazine.