We have developed a new simulation technology that significantly reduces the industry’s dependence on real-world testing for the development of autonomous vehicles (AVs) and advanced driver assistance systems (ADAS). Our new ray tracing rendering technology is the first to accurately simulate how a vehicle’s sensor system perceives the world.
“The industry has widely accepted that simulation is the only way to safely and thoroughly subject AVs and autonomy systems to a substantial number of edge cases to train AI and prove they are safe,” said Matt Daley, Operations Director at rFpro. “However, up until now, the fidelity of simulation hasn’t been high enough to replace real-world data. Our ray tracing technology is a physically modelled simulation solution that has been specifically developed for sensor systems to accurately replicate the way they ‘see’ the world.”
The ray tracing graphics engine is a superior-fidelity image rendering system that sits alongside rFpro’s existing rasterization-based rendering engine. Rasterization simulates a single bounce of light through the simulated scene. This is quick enough to enable real-time simulation and powers rFpro’s industry-leading driver-in-the-loop (DIL) solution, used across the automotive and professional motorsport industries.
Ray tracing is rFpro’s software-in-the-loop (SIL) solution aimed at generating synthetic training data. It traces multiple light rays through the scene to accurately capture all the nuances of the real world. As a multi-path technique, it can reliably simulate the huge number of reflections that occur around a sensor. This is critical for accurately portraying reflections and shadows in low-light scenarios or environments with multiple light sources. Examples include multi-storey car parks, illuminated tunnels with bright ambient daylight at their exits, and urban night driving under multiple street lights.
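The difference between the two rendering approaches can be illustrated with a toy sketch (this is not rFpro's engine or any real renderer; the function names and the scalar light model are invented for illustration). Rasterization shades a surface from direct light only, while a multi-path trace keeps following the ray, so light picked up at later bounces still contributes to the pixel:

```python
def shade_single_bounce(albedo, light_intensity):
    # Rasterization-style shading: one bounce, direct light only.
    return albedo * light_intensity

def trace_path(albedos, light_intensity, max_bounces=4):
    # Ray-tracing-style shading: follow the ray through several
    # bounces, attenuating by each surface's albedo. Indirect light
    # that single-bounce shading misses is accumulated at every hop.
    radiance, throughput = 0.0, 1.0
    for albedo in albedos[:max_bounces]:
        throughput *= albedo
        radiance += throughput * light_intensity
    return radiance

# A ray hitting a bright surface (albedo 0.8) and then a darker one
# (albedo 0.5): the multi-bounce result exceeds the single-bounce one
# because the second reflection still carries light to the sensor.
direct = shade_single_bounce(0.8, 1.0)        # 0.8
multi = trace_path([0.8, 0.5], 1.0)           # 0.8 + 0.4 = 1.2
```

In a scene such as a lit tunnel mouth or a multi-storey car park, it is exactly these later bounces that produce the reflections and shadow detail the sensor actually sees.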
Modern HDR (High Dynamic Range) cameras used in the automotive industry capture multiple exposures of varying duration per frame, for example a short, medium and long exposure. To simulate this accurately, rFpro has introduced its multi-exposure camera API. This ensures that the simulated images contain accurate blurring, caused by fast vehicle motions or road vibrations, alongside physically modelled rolling shutter effects.
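The principle behind multi-exposure capture can be sketched in a few lines (a simplified illustration only; `fuse_exposures` and the saturation threshold are hypothetical and are not part of rFpro's API). Each exposure clips at a different brightness, so a merged estimate discards saturated samples and normalises the rest by exposure time:

```python
def fuse_exposures(exposures, saturation=0.95):
    """Merge bracketed exposures of one pixel into an HDR estimate.

    `exposures` is a list of (pixel_value, exposure_time) pairs, e.g.
    the short, medium and long captures of the same frame. Saturated
    samples are discarded; the rest are averaged after normalising by
    exposure time to recover scene radiance.
    """
    samples = [value / t for value, t in exposures if value < saturation]
    if not samples:
        # Every exposure clipped: fall back to the shortest one,
        # which is the least saturated estimate available.
        value, t = min(exposures, key=lambda e: e[1])
        return value / t
    return sum(samples) / len(samples)

# Short and medium exposures agree on a radiance of 0.1; the long
# exposure has clipped (0.96 >= 0.95) and is ignored.
radiance = fuse_exposures([(0.1, 1.0), (0.2, 2.0), (0.96, 8.0)])
```

A simulator that renders each exposure separately, with its own duration, can reproduce the motion blur and rolling-shutter artefacts that a single averaged image would hide.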
“Simulating these phenomena is critical to accurately replicating what the camera ‘sees’, otherwise the data used to train ADAS and autonomous systems can be misleading,” said Daley. “This is why traditionally only real-world data has been used to develop sensor systems. Now, for the first time, ray tracing and our multi-exposure camera API are creating engineering-grade, physically modelled images, enabling manufacturers to fully develop sensor systems in simulation.”
rFpro’s ray tracing is applied to every element in a simulated scene, each physically modelled with accurate material properties to create the highest-fidelity images. As this is computationally demanding, it can be decoupled from real-time: the frame rendering rate is adjusted to suit the level of detail required. This enables high-fidelity rendering to be carried out overnight and then played back in subsequent real-time runs, overcoming the usual trade-off between rendering quality and running speed.
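The decoupling described above is essentially a render-then-replay pattern, sketched below (an illustrative structure with invented names, not rFpro's implementation): the expensive rendering runs offline at whatever speed the fidelity demands, and the real-time run only performs cheap lookups of the cached frames.

```python
class FrameCache:
    """Decouple rendering cost from playback rate: render offline at
    whatever speed fidelity demands, then replay at sensor frame rate."""

    def __init__(self):
        self.frames = {}

    def render_offline(self, frame_ids, render_fn):
        # Expensive, not real-time: e.g. an overnight batch job in
        # which each frame may take seconds or minutes to render.
        for fid in frame_ids:
            self.frames[fid] = render_fn(fid)

    def play_back(self, frame_ids):
        # Cheap dictionary lookups: fast enough to feed a real-time
        # simulation loop regardless of original rendering cost.
        return [self.frames[fid] for fid in frame_ids]

cache = FrameCache()
cache.render_offline(range(3), lambda fid: f"frame-{fid}")  # overnight
replayed = cache.play_back([0, 1, 2])                        # real time
```

The trade-off being avoided is the usual one: a renderer fast enough for real time must cut corners on fidelity, whereas cached playback lets both constraints be satisfied in separate passes.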
“Ray tracing provides such high-quality simulation data that it enables sensors to be trained and developed before they physically exist,” explains Daley. “As a result, it removes the need to wait for a real sensor before collecting data and starting development. This will significantly accelerate the advancement of AVs and sophisticated ADAS technologies and reduce the requirement to drive so many developmental vehicles on public roads.”
rFpro’s new ray tracing capability is available now to complement existing desktop options and will also be available soon within High Performance Computing (HPC) solutions.