
LiDARs for self-driving vehicles: a technological arms race

Christoph Domke and Quentin Potts explore the debate around LiDAR: to proponents, an unparalleled 3D environment mapper vital to Level 5 autonomy; to opponents, a shortcut to a “dead end”

A fiery debate has been raging in the autonomous driving world over the role of LiDAR in the future of self-driving cars: to proponents, it is an unparalleled 3D environment mapper, vital to Level 5 autonomy; to opponents such as Tesla’s Elon Musk, it is a shortcut to a “dead end”. But what are the facts in this ongoing debate?

Special report: Will LiDAR guide us to a driverless future?

LiDAR, an acronym for “light detection and ranging”, works on the same principle as radar or sonar but uses pulsed laser light to measure the distance to surrounding objects. It is used by a large number of autonomous vehicles to navigate their environment in real time. Its advantages include impressively accurate depth perception: a LiDAR can pin down the distance to an object to within a few centimetres, up to 60 metres away. It is also highly suited to 3D mapping, which means returning vehicles can navigate the mapped environment predictably, a significant benefit for most self-driving technologies. Another key strength of LiDAR is how much room the technology still has to improve. Promising areas include solid-state sensors, which could cut its cost tenfold; sensor ranges of up to 200 metres; and 4D LiDAR, which senses the velocity of an object as well as its position in 3D space. Despite these exciting advances, however, LiDAR is still hindered by one key factor: its significant cost.
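The ranging principle itself can be sketched in a few lines of Python; the timing value below is purely illustrative rather than taken from any particular sensor:

```python
# A minimal sketch of the time-of-flight principle behind LiDAR ranging.
# The sensor emits a laser pulse and times the echo; the distance is half
# the round-trip time multiplied by the speed of light.

SPEED_OF_LIGHT_M_PER_S = 299_792_458

def range_from_round_trip(round_trip_seconds: float) -> float:
    """Distance to the reflecting object, in metres."""
    return SPEED_OF_LIGHT_M_PER_S * round_trip_seconds / 2

# An echo arriving after roughly 400 nanoseconds corresponds to an object
# about 60 m away, the range quoted above for current sensors.
print(f"{range_from_round_trip(400e-9):.1f} m")  # -> ~60.0 m
```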

A busy street scene captured by Velodyne’s Alpha Puck (a 360-degree horizontal field of view LiDAR), 2019. Image credit: Velodyne/Handout via REUTERS

LiDAR is not the only detection technology for self-driving: cameras are its major rival, championed by Tesla as the best way forward. Elon Musk has described LiDAR as “a fool’s errand” and “unnecessary”. The argument runs that humans drive using only ambient visible light, so robots should be able to do the same. A camera is significantly smaller and cheaper than a LiDAR (although more of them are needed), and has the advantage of seeing in higher resolution and in colour, meaning it can read traffic lights and signs. However, cameras have a host of characteristics that make them tricky to use in common driving conditions. Whereas LiDAR uses near infra-red light, cameras use visible light, and are therefore more prone to problems in rain, fog, or with certain textures. In addition, LiDARs do not depend on ambient light, since they generate their own light pulses, whereas cameras are sensitive to sudden light changes, direct sunlight and even raindrops.

LiDAR’s advantages do not end there. By creating a 3D cloud of points, LiDAR is far better at judging distances than cameras, and it is unaffected by the reflective, textured or textureless surfaces that confuse camera-based depth estimation. Cameras require significant computing effort, such as complex neural networks, to gauge distance to objects, by aggregating the feeds of several cameras or a single feed over time. 2D images can also trick cameras, making them more open to malicious attacks. On colour detection, LiDAR proponents argue that in a connected, driverless world, traffic information could be dispensed by machine-to-machine signals from traffic lights and other markers, addressing a key flaw of LiDAR. In addition, costs are plummeting. Google’s first driverless car prototype in 2012 used a US$70,000 LiDAR. In 2017, Waymo engineers declared they had brought the cost down by 90%. Today, a number of top LiDAR manufacturers such as Luminar offer autonomous-driving-grade LiDARs for under US$1,000.
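To illustrate the contrast in a deliberately simplified, hypothetical form: a LiDAR return already carries a range, so turning it into a 3D point is basic trigonometry, whereas a stereo camera pair must recover depth from pixel disparity, and the matching step is where the heavy computation, and the trouble with textureless or reflective surfaces, comes in.

```python
import math

# Illustrative only: contrasting how LiDAR and stereo cameras obtain depth.

def lidar_return_to_point(range_m: float, azimuth_rad: float, elevation_rad: float):
    """Convert one LiDAR return (range plus beam angles) into an x, y, z point."""
    x = range_m * math.cos(elevation_rad) * math.cos(azimuth_rad)
    y = range_m * math.cos(elevation_rad) * math.sin(azimuth_rad)
    z = range_m * math.sin(elevation_rad)
    return x, y, z

def stereo_depth(focal_length_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth from a matched pixel pair; finding that match across images is the
    compute-heavy part, and it fails on textureless or reflective surfaces."""
    return focal_length_px * baseline_m / disparity_px

print(lidar_return_to_point(25.0, math.radians(30), math.radians(2)))
print(f"{stereo_depth(focal_length_px=1200, baseline_m=0.54, disparity_px=25):.1f} m")
```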

The advantages of cameras, apart from price and colour and text recognition, are more subtle. They revolve around the argument that LiDAR is a shortcut: it can only really process spatial information, not the complexities of road environments. How would a LiDAR recognise that a pedestrian looking down at their phone may wander into the street? Can LiDAR differentiate between a plastic bag and a genuine obstacle? Could LiDAR recognise a cyclist looking over their shoulder before moving into a new lane? The answer to these questions is no. Once camera-based AVs have been perfected, proponents argue, LiDAR will be rendered obsolete, because combining cameras with a simple radar (cheaper and better-performing in adverse weather, albeit with far coarser image granularity than LiDAR) goes a long way towards addressing the cameras’ weaknesses in adverse conditions.

Still snapshot from a Tesla showing the view from its medium-range cameras on the front of the car, 2020. Image courtesy of Tesla.

The key obstacle to the dominance of cameras is the AI that must read and interpret the data-heavy feeds and recognise all manner of situations in milliseconds. This helps explain the currently prevalent view that the best option is a hybrid, using both LiDAR’s superior vision and cameras’ colour, object and text recognition to obtain a clear picture of the surrounding environment. Notably, this solution is also computing-heavy, and of course dependent on human-written algorithms.
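A rough sketch of what that hybrid fusion can look like, with placeholder calibration values rather than real data: LiDAR points are projected into the camera image, so that objects recognised in the image can be tagged with accurate distances.

```python
import numpy as np

# Hypothetical sketch of LiDAR-camera fusion: project LiDAR points into the
# camera image using calibrated extrinsics and intrinsics. The matrices below
# are placeholders, not real calibration data.

K = np.array([[1200.0,    0.0, 640.0],   # camera intrinsics (focal lengths, principal point)
              [   0.0, 1200.0, 360.0],
              [   0.0,    0.0,   1.0]])

R = np.eye(3)                      # rotation LiDAR -> camera (placeholder)
t = np.array([0.0, -0.08, -0.27])  # translation LiDAR -> camera, metres (placeholder)

def project_lidar_to_image(points_lidar: np.ndarray):
    """Return pixel coordinates and depths for LiDAR points in front of the camera."""
    points_cam = points_lidar @ R.T + t          # transform into the camera frame
    in_front = points_cam[:, 2] > 0.1            # keep only points ahead of the lens
    points_cam = points_cam[in_front]
    pixels_h = points_cam @ K.T                  # pinhole camera model
    pixels = pixels_h[:, :2] / pixels_h[:, 2:3]  # perspective divide
    return pixels, points_cam[:, 2]              # (u, v) pixels and depth in metres

# Example points, given in a camera-aligned frame (x right, y down, z forward)
# so the placeholder identity rotation is enough for the illustration.
points = np.array([[1.5, 0.2, 10.0], [-3.0, 0.5, 25.0]])
print(project_lidar_to_image(points))
```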

This results in what is essentially a race between LiDAR’s cost curve and AI programmers. Should LiDAR reach an accessible price faster than cameras become truly usable and reliable, LiDAR would likely become universal in AVs as a cheap, reliable and very accurate distance sensor, at the very least in conjunction with cameras. Although LiDAR may not ultimately prove strictly necessary, its reliability, simplicity and universality could make it a very attractive springboard to Level 5 autonomy. However, if Tesla or others succeed in the near future in creating a complex neural network capable of quickly and reliably processing camera imagery (Musk has teased that this might be possible within the year), then LiDAR is likely to become an expensive redundancy for a number of major manufacturers. The race is on.


About the authors: Christoph Domke is Senior Director, Clean and Smart Mobility Lead at FTI Consulting. Quentin Potts is Clean Energy Consultant at FTI Consulting

The opinions expressed here do not represent the views of FTI Consulting