AV deep learning models require accurate long-range detection data
And not only autonomous cars: most advanced perception systems applied to autonomous motion (self-driving trucks, drones, shuttles …), whether lidar-based, camera-based, or time-of-flight camera-based, require highly accurate, long-range detection data.
The reason? These autonomous machines must strictly meet the highest safety standards, and long-range detection data is one of the critical inputs needed for accurate training and subsequent validation of their underlying deep learning models.
The challenge of generating long-range detection data for AVs
Autonomous vehicle sensors need to detect any other vehicle, pedestrian, animal, or object that may affect the vehicle's trajectory, now or in the future, especially those that are in motion.
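To get a feel for why long range is hard, consider a simple pinhole projection: an object's size in pixels is roughly its physical size times the focal length (expressed in pixels) divided by its distance. Here is a minimal Python sketch; the camera parameters are hypothetical but plausible for an automotive front camera:

```python
# Pixel footprint of a distant object under a pinhole camera model.
# All camera parameters below are hypothetical; substitute your own
# sensor's values.

def pixel_height(object_height_m: float, distance_m: float,
                 focal_length_mm: float, pixel_pitch_um: float) -> float:
    """Approximate on-sensor height of an object, in pixels."""
    focal_length_px = focal_length_mm * 1000.0 / pixel_pitch_um
    return focal_length_px * object_height_m / distance_m

# A 1.7 m pedestrian seen through a 6 mm lens on a 3 um pixel-pitch sensor:
for d in (25, 50, 100, 200):
    print(f"{d:4d} m -> {pixel_height(1.7, d, 6.0, 3.0):6.1f} px")
# At 200 m the pedestrian covers only ~17 px, so a one-pixel labeling
# error is already a ~6% error on the object.
```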

Let’s look at several methods for generating long-range detection data:
Generating long-range detection data from real-world datasets
Traditionally, data has been annotated manually by humans or semi-automatically with available software tools, but for an obvious reason (annotations cannot be made by hand with pixel-level precision, and distant objects span only a handful of pixels), we will never achieve the required accuracy using real-world data alone.
Generating long-range detection data from real-time graphic engines
Real-time graphics engines can provide labeled data, but because the data they generate hasn’t been processed through an accurate optical and sensor simulation, they still can’t guarantee the pixel accuracy or physically correct metadata that a perception system may need for training and validation.
Is sensor simulation that important for generating long-range detection data?
Some artifacts inherent to camera and sensor technology make it especially difficult to generate accurate synthetic images of distant objects.
One of these artifacts is motion blur, which smears objects whose apparent motion relative to the camera is high during the exposure. It matters especially for long-range objects: since they usually occupy a relatively small area in the image, even a blur of a few pixels can destroy their shape.
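A common way to approximate motion blur in simulation is to integrate the scene over the exposure time, for example by averaging sub-frame samples. A minimal Python sketch of that idea; the toy render_frame "renderer" and its numbers are hypothetical, standing in for a real rendering pipeline:

```python
import numpy as np

def render_frame(t: float, size: int = 64) -> np.ndarray:
    """Toy stand-in for a renderer: a bright 4x4 patch crossing a dark frame."""
    frame = np.zeros((size, size))
    x = int(t * 800) % (size - 4)        # patch moves 800 px/s horizontally
    frame[30:34, x:x + 4] = 1.0
    return frame

def expose(t0: float, exposure_s: float, samples: int = 32) -> np.ndarray:
    """Approximate motion blur by averaging sub-frame renders over the
    exposure window, mimicking how the sensor integrates incoming light."""
    times = np.linspace(t0, t0 + exposure_s, samples)
    return np.mean([render_frame(t) for t in times], axis=0)

blurred = expose(t0=0.0, exposure_s=0.02)  # 20 ms exposure, ~16 px of travel
print("peak intensity after blur:", blurred.max())
# Peak is well below 1.0: the patch's energy is smeared along its path,
# which for a small, distant object can erase its shape entirely.
```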

If we add the row-by-row scanning of the camera sensor known as rolling shutter, the effect of motion blur becomes even more evident: because each row is read out at a slightly different time, moving objects are visibly sheared, especially those in the distance.
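Rolling shutter can be approximated on top of the same idea by giving each sensor row its own sample time. A minimal Python sketch, again with a hypothetical toy renderer and a made-up line readout time, showing how horizontal motion shears a vertical bar:

```python
import numpy as np

def render_scene(t: float, size: int = 64) -> np.ndarray:
    """Toy stand-in for a renderer: a bright vertical bar moving left to right."""
    frame = np.zeros((size, size))
    x = int(t * 500) % (size - 2)        # bar moves 500 px/s horizontally
    frame[:, x:x + 2] = 1.0
    return frame

def rolling_shutter(t0: float, line_time_s: float = 1e-4,
                    size: int = 64) -> np.ndarray:
    """Sample each sensor row at its own time (row r at t0 + r * line_time_s),
    as a rolling-shutter readout does; fast motion shears the bar diagonally."""
    image = np.zeros((size, size))
    for r in range(size):
        image[r] = render_scene(t0 + r * line_time_s, size)[r]
    return image

sheared = rolling_shutter(t0=0.0)
print("bar position, top row:   ", np.argmax(sheared[0]))   # ~0
print("bar position, bottom row:", np.argmax(sheared[-1]))  # a few px later
```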
Generating long-range detection data from pixel-accurate synthetic data
About Anyverse™
With Anyverse™, you can accurately simulate any camera sensor and decide which one will perform best with your perception system. No more complex and expensive experiments with real devices, thanks to our state-of-the-art photometric pipeline.
Need to know more?
Visit our website, anyverse.ai, anytime, or our LinkedIn, Instagram, and Twitter profiles.