The need for pixel-accurate synthetic data for autonomous driving perception, development & validation

Pixel-accurate synthetic data for autonomous driving perception, development, and validation is not about pretty, computer-generated images that merely look realistic to human eyes.
It’s about a self-driving car that always stops, regardless of whether the traffic light is damaged, dirty, or slightly opaque. It’s about an autonomous bus that recognizes a zebra crossing whether the asphalt is cracked or flooded because a fire hydrant has burst. Or about a self-driving truck safely merging onto a crowded highway at rush hour with the sun glaring ahead.
Pixel-accurate synthetic data is about enhancing safety and about trustworthy data for developing accurate autonomous driving systems.

Machines need photorealistic… machine data

What does this mean?

Traditionally, the world of computer graphics has equated photorealism for machine vision applications with what humans consider photorealistic. And to be fair, the images our eyes perceive and our brains process must closely match reality, or else we would be dead…

Human perception has evolved to be remarkably accurate, but machines perceive the world differently, and in some respects more precisely than humans do.

And how is this linked to the autonomous driving use case?

The autonomous driving market demands data with the highest level of accuracy

The advanced perception and AV/ADAS industry is implementing a new generation of sensors and optical systems (such as new photodetectors operating in different parts of the spectrum, and other advanced optical and sensory capabilities) that perceive the world in very different ways than humans do, and it demands data capable of supporting these new functionalities.

Synthetic data will unequivocally be needed for designing, training, calibrating, validating, and ultimately upgrading both sensors and perception systems. But not just any synthetic data: pixel-accurate synthetic data for autonomous driving, capable of faithfully and accurately simulating this new generation of sensors.
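
To make the idea of sensor-faithful data concrete, here is a minimal sketch of what “simulating a sensor” can mean in practice: integrating per-pixel spectral radiance against a channel’s quantum-efficiency (QE) curve to estimate a photoelectron count. The QE curve, constants, and function names below are hypothetical illustrations, not Anyverse’s actual pipeline.

```python
import numpy as np

wavelengths_nm = np.arange(400, 701, 10)  # sampled visible spectrum, 400-700 nm

def channel_response(spectral_radiance, qe_curve, exposure_s, pixel_area_m2):
    """Integrate spectral radiance against one channel's QE curve.

    spectral_radiance: W / (m^2 * sr * nm), one value per wavelength sample
    qe_curve:          unitless quantum efficiency per wavelength sample
    Returns a rough photoelectron count for one pixel (solid angle omitted).
    """
    h, c = 6.626e-34, 3.0e8                            # Planck constant, speed of light
    photon_energy_j = h * c / (wavelengths_nm * 1e-9)  # energy per photon (J)
    photon_flux = spectral_radiance / photon_energy_j  # photons / (m^2 sr nm s)
    electrons_per_s = np.trapz(photon_flux * qe_curve, wavelengths_nm)
    return electrons_per_s * exposure_s * pixel_area_m2

# Hypothetical QE curve for a "red" channel peaking near 600 nm:
qe_red = 0.6 * np.exp(-0.5 * ((wavelengths_nm - 600) / 40.0) ** 2)
radiance = np.full(wavelengths_nm.shape, 0.05)         # flat test spectrum
print(channel_response(radiance, qe_red, exposure_s=1e-2, pixel_area_m2=9e-12))
```

The point of working at this level is that swapping in a different sensor only means swapping in different QE curves and pixel parameters, while an RGB render bakes one fixed response into the image.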

Safety first: no room for uncertainty

Now that we understand why data accuracy is key to a successful combination of perception systems and new-generation sensors, it’s time to emphasize another important point: data accuracy has nothing to do with the concept of photorealism we commonly attach to the images generated by off-the-shelf, real-time computer graphics engines.

Don’t get me wrong, these engines can produce beautiful, flashy images that perfectly fit the requirements of other, less demanding applications. But they don’t provide the precision needed to reach the data accuracy required for human-safe, trustworthy, fully autonomous transport based on artificial perception.
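
As a toy illustration of why display-oriented output falls short (my own example, not from the article): a real-time engine typically tone-maps and quantizes its result for viewing, which irreversibly discards radiometric information that a physically faithful sensor model would need.

```python
import numpy as np

def to_display(radiance):
    """Reinhard tone mapping + display gamma + 8-bit quantization."""
    tone_mapped = radiance / (1.0 + radiance)       # Reinhard operator
    gamma_encoded = tone_mapped ** (1.0 / 2.2)      # approximate display gamma
    return np.round(gamma_encoded * 255).astype(np.uint8)

hdr = np.array([12.0, 15.0, 100.0, 130.0])          # linear radiance (arbitrary units)
print(to_display(hdr))  # -> [246 248 254 254]: 100 and 130 collapse to the same code
```

Two physically different radiance values end up as the same display pixel, so no amount of post-processing can recover the original signal from the “pretty” image.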

Let’s take the aerodynamics sector as an example to better visualize the problem.

Data accuracy has always been a sensitive matter for aerodynamics engineers. For decades, they trusted only the wind tunnel as the closest-to-real data source, and it was only a few years ago that simulation (i.e., synthetic data) was introduced into their data generation pipelines.

Today there are hundreds of simulators on the aerodynamics market, yet companies such as Airbus or Boeing remain very skeptical about using them. They choose carefully, because generating images and data that merely look real is not enough: the data must be adjusted and validated to perform as well as real data within a strictly bounded margin of error. That is why only a very few simulators pass the test and get validated for this type of precision project.

Perhaps a certain margin of error can be tolerated in the aerodynamics of a building, but in a case as critical as designing an airplane wing, high data accuracy is vital: an error of one part in a thousand could mean tons of wasted fuel.

Not all data is valid for training human-safe autonomous vehicles

Directly connected to human safety, autonomous driving is another critical use case. Safety is a sine qua non condition that AV/ADAS developers must commit to unless they want to end up working in a quicksand paradigm… and this means a model shift from photorealistic data (as judged by human perception) to pixel-accurate data defined by the autonomous vehicle’s sensors, optical system, and underlying AI.

About Anyverse™

Anyverse™ helps you continuously improve your deep learning perception models and reduce your system’s time to market by applying new software 2.0 processes. Our synthetic data production platform provides high-fidelity, accurate, and balanced datasets. Combined with a data-driven iterative process, it can help you reach the required model performance.

With Anyverse™, you can accurately simulate any camera sensor and decide which one will perform best with your perception system. No more complex and expensive experiments with real devices, thanks to our state-of-the-art photometric pipeline.
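
For intuition, a photometric sensor-simulation pipeline typically has to model effects like photon shot noise, read noise, saturation, and ADC quantization. The sketch below is a generic, hypothetical pixel model under those assumptions, not Anyverse’s actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_pixel(mean_electrons, full_well=10_000, read_noise_e=2.0, bits=12):
    """Turn an ideal photoelectron count into a noisy digital number (DN)."""
    electrons = rng.poisson(mean_electrons)                 # photon shot noise
    electrons = electrons + rng.normal(0.0, read_noise_e)   # Gaussian read noise
    electrons = min(max(electrons, 0.0), full_well)         # clip: no negatives, saturation
    gain = (2 ** bits - 1) / full_well                      # electrons-to-DN conversion
    return int(round(electrons * gain))                     # ADC quantization

print([simulate_pixel(500) for _ in range(5)])  # slightly different DN on each call
```

Getting parameters like these right per sensor is exactly the kind of experiment that is slow and expensive with real devices and cheap to iterate on in simulation.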

Need to know more?

Visit our website, anyverse.ai, anytime, or check our LinkedIn, Instagram, and Twitter profiles.
