How to generate accurate long-range detection data for AV – Facing the challenge

AV deep learning models require accurate long-range detection data

Generating accurate long-range detection data to train and validate autonomous vehicles has challenged developers since the very beginning of autonomous transportation.

And not only autonomous cars: most advanced perception systems applied to autonomous motion (self-driving trucks, drones, shuttles…), whether lidar-based, camera-based, or time-of-flight camera-based, require highly accurate long-range detection data.

The reason? These autonomous machines must strictly meet the highest safety standards, and long-range detection data is one of the critical inputs needed for accurate training and subsequent validation of their underlying deep learning models.

The challenge of generating long-range detection data for AV

Autonomous vehicle sensors need to be able to detect any other vehicle, pedestrian, animal, or object that may affect the vehicle’s trajectory at any present or future moment, especially those that are in motion.

The motion of objects and the speed at which they move (we are talking about objects moving and interacting with each other at speeds of several meters per second) mean that long-range detection data has to be extremely accurate to avoid crashes or potentially unsafe situations. It also demands an extremely precise data generation pipeline with automatic data labeling for training and validation that can guarantee pixel-level accuracy.
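
To put numbers on why pixel-level accuracy matters at long range, here is a back-of-the-envelope sketch using a simple pinhole camera model. The focal length, pixel pitch, and distances are illustrative assumptions, not parameters of any particular sensor:

```python
# Back-of-the-envelope check of why pixel-level accuracy matters at long range.
# All constants below are illustrative assumptions.

FOCAL_LENGTH_MM = 6.0    # assumed lens focal length
PIXEL_PITCH_UM = 3.45    # assumed sensor pixel pitch
DISTANCE_M = 150.0       # distance to the object
OBJECT_HEIGHT_M = 1.7    # e.g. a pedestrian

# Pinhole projection: image size = f * object size / distance
image_height_mm = FOCAL_LENGTH_MM * OBJECT_HEIGHT_M / DISTANCE_M
image_height_px = image_height_mm * 1000.0 / PIXEL_PITCH_UM

# Metric footprint of one pixel at that distance
meters_per_pixel = DISTANCE_M * PIXEL_PITCH_UM * 1e-6 / (FOCAL_LENGTH_MM * 1e-3)

print(f"A {OBJECT_HEIGHT_M} m object at {DISTANCE_M} m spans ~{image_height_px:.1f} px")
print(f"One pixel of labeling error is ~{meters_per_pixel:.2f} m at {DISTANCE_M} m")
```

With these assumed values, a pedestrian 150 m away spans only about 20 pixels, and each pixel of labeling error corresponds to roughly 9 cm of real-world position error.
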
How to generate accurate long-range detection data for AV

Let’s review several methods for generating long-range detection data:

Generating long-range detection data from real-world datasets

Traditionally, data has been annotated manually by humans or semi-automatically with available software tools. But for an obvious reason (individual pixels cannot be annotated by hand), real-world data alone will never reach the required accuracy.

Annotating distant objects at that level of precision would be impossible without accepting a large margin of error, one we cannot afford if we are to develop a trustworthy autonomous vehicle. On top of that, the optical effects that can occur in an image make it impossible to extract this information reliably.
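
As a rough illustration of why that margin of error hurts most at long range, the sketch below compares how a single pixel of bounding-box annotation error affects IoU for a large nearby object versus a small distant one. The box sizes are made up for the example:

```python
# Minimal sketch: a 1-pixel annotation error barely affects a large, nearby
# object but badly degrades IoU for a small, distant one. Sizes are illustrative.

def iou(a, b):
    """Intersection-over-union of two boxes given as (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def shifted(box, dx, dy):
    return (box[0] + dx, box[1] + dy, box[2] + dx, box[3] + dy)

near_box = (0, 0, 200, 400)  # large object filling much of the frame
far_box = (0, 0, 8, 20)      # pedestrian far away, only a few pixels wide

print(f"near object, 1 px shift: IoU = {iou(near_box, shifted(near_box, 1, 1)):.3f}")
print(f"far object,  1 px shift: IoU = {iou(far_box, shifted(far_box, 1, 1)):.3f}")
```

The same one-pixel slip leaves the nearby box at an IoU of about 0.99 but drops the distant one to roughly 0.71, which is the difference between a usable and a misleading training label.
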

Generating long-range detection data from real-time graphic engines

Real-time graphic engines are able to provide labeled data, but since their output hasn’t been processed through an accurate optical and sensor simulation, they still can’t guarantee pixel accuracy or the physically correct metadata that the perception system may need for training and validation.
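
For intuition, here is a minimal sketch of the kind of processing a raw render is missing: taking linear scene radiance (roughly what a graphics engine outputs) through exposure, photon shot noise, read noise, and ADC quantization. The sensor constants are illustrative assumptions, not a model of any real device:

```python
# Hedged sketch of a basic camera sensor model applied to a linear render.
# Full-well capacity, read noise, and bit depth are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

def simulate_sensor(radiance, exposure=1.0, full_well=10_000,
                    read_noise_e=5.0, bits=12):
    """Turn a linear radiance map (~[0, 1]) into a raw sensor image."""
    electrons = radiance * exposure * full_well              # expected photoelectrons
    electrons = rng.poisson(electrons).astype(np.float64)    # photon shot noise
    electrons += rng.normal(0.0, read_noise_e, electrons.shape)  # read noise
    electrons = np.clip(electrons, 0, full_well)             # saturation
    dn = np.round(electrons / full_well * (2 ** bits - 1))   # ADC quantization
    return dn.astype(np.uint16)

# A synthetic gradient standing in for a rendered radiance map
radiance = np.tile(np.linspace(0.0, 1.0, 256), (64, 1))
raw = simulate_sensor(radiance)
print(raw.min(), raw.max())
```
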

Is sensor simulation that important for generating long-range detection data?

Some artifacts inherent to camera and sensor technology make it especially difficult to generate accurate synthetic images of distant objects.

One of these artifacts is “motion blur”, which smears objects whose relative speed is too high for the camera’s exposure time. It matters especially for long-range objects, since they usually occupy a relatively small area of the image.
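
A minimal way to picture motion blur is temporal supersampling: averaging the scene over sub-exposure samples as the object moves. The toy renderer, object speed, and exposure time below are all assumptions for illustration:

```python
# Minimal motion-blur sketch via temporal supersampling over the exposure
# window. The renderer, speed, and exposure time are illustrative assumptions.

import numpy as np

def render_moving_dot(x_px, width=64, height=16):
    """Hypothetical stand-in for a renderer: a bright 2-px dot at column x_px."""
    img = np.zeros((height, width))
    col = int(round(x_px))
    if 0 <= col < width - 1:
        img[height // 2, col:col + 2] = 1.0
    return img

def motion_blur(x0_px, speed_px_per_s, exposure_s, samples=32):
    """Integrate the scene over the exposure window (temporal supersampling)."""
    times = np.linspace(0.0, exposure_s, samples)
    return np.mean([render_moving_dot(x0_px + speed_px_per_s * t)
                    for t in times], axis=0)

# A distant object crossing at 400 px/s imaged with a 20 ms exposure
frame = motion_blur(x0_px=10, speed_px_per_s=400.0, exposure_s=0.020)
print("blur streak length (non-zero columns):", int((frame.sum(axis=0) > 0).sum()))
```

A 2-pixel object ends up smeared across roughly 10 columns, and for a distant target that streak can be wider than the object itself.
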

If we add the scanning mechanics of the camera sensor, known as “rolling shutter”, the “motion blur” effect becomes even more evident, considerably deforming the appearance of objects, especially those in the distance.
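
The shearing effect of a rolling shutter can be sketched by giving each sensor row a slightly later capture time. Again, the row readout time and object speed below are illustrative assumptions:

```python
# Hedged rolling-shutter sketch: rows are read out sequentially, so a
# horizontally moving object appears sheared. Constants are illustrative.

import numpy as np

def rolling_shutter_frame(width=64, height=32, x0_px=10.0,
                          speed_px_per_s=500.0, row_readout_s=0.0005):
    """Capture a vertical bar moving right; rows are sampled one after another."""
    frame = np.zeros((height, width))
    for row in range(height):
        t = row * row_readout_s                      # this row's capture time
        col = int(round(x0_px + speed_px_per_s * t))
        if 0 <= col < width:
            frame[row, col] = 1.0                    # draw the bar row by row
    return frame

frame = rolling_shutter_frame()
top, bottom = frame[0].argmax(), frame[-1].argmax()
print(f"bar position: top row at col {top}, bottom row at col {bottom} "
      f"(shear of {bottom - top} px)")
```

A perfectly vertical bar comes out slanted by several pixels from top to bottom, and for a small distant object that shear is a significant fraction of its apparent size.
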

Simulating all these phenomena correctly is absolutely key if we want to generate realistic data for training and validation.

Generating long-range detection data from pixel-accurate synthetic data

Training and validating autonomous vehicle deep learning models in the face of this technical challenge requires a degree of data generation and automatic labeling accuracy that, today, only pixel-accurate synthetic data with ground truth, combined with an accurate sensor simulation pipeline, can offer.
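
One reason synthetic data can guarantee this accuracy: when the renderer emits a per-pixel instance ID map alongside each frame, labels such as 2D boxes are derived exactly rather than annotated. A minimal sketch, with a made-up ID map standing in for real renderer output:

```python
# Sketch: pixel-exact labels fall out of a rendered per-pixel instance ID map
# with zero annotation error. The toy ID map below is a made-up example.

import numpy as np

def boxes_from_instance_map(instance_map):
    """Derive pixel-exact 2D boxes (x0, y0, x1, y1) from a per-pixel ID map."""
    boxes = {}
    for obj_id in np.unique(instance_map):
        if obj_id == 0:                      # 0 = background
            continue
        ys, xs = np.nonzero(instance_map == obj_id)
        boxes[int(obj_id)] = (xs.min(), ys.min(), xs.max(), ys.max())
    return boxes

# Toy 8x12 instance map: object 1 near the "camera", object 2 far away
ids = np.zeros((8, 12), dtype=np.int32)
ids[2:7, 1:5] = 1    # large nearby object
ids[3, 9:11] = 2     # tiny distant object
print(boxes_from_instance_map(ids))
```
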

About Anyverse™

Anyverse™ helps you continuously improve your deep learning perception models to reduce your system’s time to market by applying new software 2.0 processes. Our synthetic data production platform allows us to provide high-fidelity, accurate, and balanced datasets. Along with a data-driven iterative process, we can help you reach the required model performance.

With Anyverse™, you can accurately simulate any camera sensor and decide which one will perform better with your perception system. No more complex and expensive experiments with real devices, thanks to our state-of-the-art photometric pipeline.

Need to know more?

Visit our website, anyverse.ai, anytime, or our LinkedIn, Instagram, and Twitter profiles.
