Anyverse sensor simulation: accelerate autonomous systems deployment with sensor-specific synthetic data


Developing perception systems is not an easy task. We are talking about systems that need to understand what they see in the real world and react accordingly. But how do they see the world? How do you teach a machine what the real world is and how to interpret it?

In this insights series, we will focus on Anyverse sensor simulation and how to accelerate autonomous systems deployment with sensor-specific synthetic data.

New-generation systems hungry for accurate sensor-specific data

Modern perception systems use all kinds of new-generation sensors to see the world: radar, lidar, and of course optical cameras, which most faithfully mimic the human eye, the sensor that, coupled with the brain, forms the most advanced perception system in existence. To implement the "understanding" part, these systems rely on deep neural networks trained for different tasks, such as object detection, object segmentation, or depth estimation.

No matter what the task is, neural networks need data, lots and lots of data. And not just any data: it has to be as varied as possible to avoid bias in the system and other domain-shift problems.

Getting real-world data for your hungry system is not easy. You have to take thousands of pictures and curate them, which requires infrastructure and organization; it can be a project in itself. And raw images alone are not enough for neural networks to learn.
During training, you need to tell the neural network what it is seeing, and for that you need to tag and annotate every single image with ground truth: one more time-consuming and often inaccurate task. Just when you thought you were done, it turns out your system is not performing well, so you need more training and, yes, more data.
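To make the annotation requirement concrete, here is a minimal, purely illustrative example of what per-image ground truth might look like for an object-detection task (the field names are hypothetical, not any specific annotation standard):

```python
# Purely illustrative ground-truth record for one image (hypothetical schema,
# not a specific annotation format): every object needs a class label and a
# pixel-accurate bounding box, and often a segmentation mask and depth as well.
ground_truth = {
    "image": "frame_000123.png",
    "objects": [
        {"class": "car",        "bbox_xywh": [412, 230, 96, 54], "mask_id": 1},
        {"class": "pedestrian", "bbox_xywh": [610, 241, 22, 61], "mask_id": 2},
    ],
}

# Multiply this by every object in every one of thousands of images to see
# why manual annotation quickly becomes a project of its own.
print(len(ground_truth["objects"]), "annotated objects in this frame")
```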

Sensor-specific synthetic data to accelerate autonomous systems deployment

An alternative to this endless loop is to use hyperspectral synthetic data. You create a synthetic 3D scenario and render thousands of pixel-accurate images from it, adding automatic variability and generating all the ground-truth information you need at the same time. Fair enough, problem solved. Not quite.
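As a rough sketch of that idea, the loop below samples scene variations and writes each frame's ground truth alongside it. The function and parameter names are placeholders for illustration, not Anyverse's actual API:

```python
# Hypothetical sketch of a synthetic data generation loop: sample scene
# variations, render each one, and emit the image together with its ground
# truth in a single pass. Names and values are illustrative only.
import json
import random

def sample_variation(rng):
    """Randomize the scene parameters that drive dataset variability."""
    return {
        "sun_elevation_deg": rng.uniform(5, 85),
        "vehicle_count": rng.randint(0, 30),
        "camera_height_m": rng.uniform(1.2, 2.0),
    }

rng = random.Random(42)
for i in range(3):  # thousands of frames in a real run
    variation = sample_variation(rng)
    # render_scene(...) stands in for the actual rendering step; it would
    # return the image plus per-object annotations (boxes, masks, depth, ...).
    # image, annotations = render_scene(base_scenario, variation)
    record = {"frame": i, "variation": variation, "objects": []}
    with open(f"frame_{i:06d}.json", "w") as f:
        json.dump(record, f, indent=2)
```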

When you train deep neural networks with synthetic data, you have to make sure that they will still perform when facing the real world and understand it as well as they understand the synthetic data. How well your network generalizes from synthetic images to real-world images is key to your system's success.

For that, you need to faithfully simulate the behavior of real cameras when generating synthetic images.


Anyverse™, the hyperspectral synthetic data platform that introduces deep camera sensor and ISP simulation

Anyverse™ is a high-fidelity synthetic data generation platform featuring a physics-based rendering engine that produces highly realistic images and sequences of unprecedented quality.

The engine uses an accurate light transport model and a physically based description of lights, cameras, and materials. This allows for a very detailed simulation of the amount of light that reaches the camera sensor.
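To give a feel for what simulating "the amount of light reaching the sensor" involves, here is a minimal back-of-the-envelope sketch, with made-up illustrative values rather than real scene or sensor data, of how spectral irradiance at a pixel converts into an expected photoelectron count:

```python
# Minimal sketch (not Anyverse's actual pipeline) of the physics a spectral
# renderer has to account for: converting the spectral irradiance arriving at
# one pixel into an expected photoelectron count. All values are illustrative.
import numpy as np

wavelengths_nm = np.arange(400, 701, 10)                 # sampled spectrum, 400-700 nm
spectral_step_nm = 10.0                                  # spectral sampling step
irradiance = np.full(wavelengths_nm.size, 1e-3)          # irradiance at the pixel, W / (m^2 * nm)
quantum_efficiency = np.full(wavelengths_nm.size, 0.6)   # sensor QE per wavelength

pixel_area_m2 = (3.0e-6) ** 2      # 3 um square pixel
exposure_s = 1.0e-3                # 1 ms exposure

h = 6.626e-34                      # Planck constant, J*s
c = 3.0e8                          # speed of light, m/s

# Energy of one photon at each wavelength (convert nm -> m).
photon_energy = h * c / (wavelengths_nm * 1e-9)           # J per photon

# Photon flux per nm hitting the pixel, then integrate over the spectrum
# with a simple rectangle rule and scale by QE and exposure time.
photon_flux = irradiance * pixel_area_m2 / photon_energy  # photons / (s * nm)
electrons = (photon_flux * quantum_efficiency).sum() * spectral_step_nm * exposure_s

print(f"expected photoelectrons: {electrons:.0f}")
```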


Equally important is the simulation of the camera sensor itself, i.e., how the light coming from the scene is converted into the final color image. Several academic papers demonstrate that a deep-learning model trained on a synthetic dataset that takes camera sensor effects into account generally performs better than one trained without those effects [1][2].
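As a simplified illustration of such sensor effects (not Anyverse's pipeline, and with purely illustrative parameter values), the sketch below adds photon shot noise, read noise, saturation, and ADC quantization to a clean image of expected photoelectron counts:

```python
# Simplified, hypothetical sketch of the kind of camera-sensor effects a
# synthetic data pipeline can layer on top of a clean rendered image:
# photon shot noise, read noise, full-well saturation, and ADC quantization.
import numpy as np

rng = np.random.default_rng(0)

def apply_sensor_effects(electrons, full_well=8000, read_noise_e=3.0, bit_depth=12):
    """Turn an ideal photoelectron image into a noisy, quantized raw image."""
    # Photon shot noise: photon arrival is a Poisson process.
    noisy = rng.poisson(electrons).astype(np.float64)
    # Read noise: additive Gaussian noise from the readout electronics.
    noisy += rng.normal(0.0, read_noise_e, size=noisy.shape)
    # Saturation at the pixel full-well capacity, then ADC quantization.
    noisy = np.clip(noisy, 0, full_well)
    digital = np.round(noisy / full_well * (2 ** bit_depth - 1))
    return digital.astype(np.uint16)

# Example: a clean 4x4 "image" of expected photoelectron counts.
clean = np.full((4, 4), 2000.0)
raw = apply_sensor_effects(clean)
print(raw)
```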

Don’t miss the next chapter

In this series, we will introduce Anyverse camera sensor simulation: what you need to implement it, which parameters and knobs you have under your control, what effects they have on the final images, and of course the benefits of having sensor simulation as part of your perception system development process.

We hope you enjoy this new series of original content, and don't forget to come back to our blog next week to discover the second chapter, about the camera sensor pipeline!

Read chapter 2 now >>>

References

[1] Carlson, A., Skinner, K. A., Vasudevan, R., Johnson-Roberson, M.: Modeling Camera Effects to Improve Visual Learning from Synthetic Data. arXiv preprint arXiv:1803.07721v6 (2018). https://arxiv.org/abs/1803.07721
[2] Liu, Z., Lian, T., Farrell, J. E., Wandell, B. A.: Neural Network Generalization: The Impact of Camera Parameters. arXiv preprint arXiv:1912.03604v1 (2019). https://arxiv.org/abs/1912.03604

About Anyverse™

Anyverse™ helps you continuously improve your deep learning perception models to reduce your system's time to market by applying new software 2.0 processes. Our synthetic data production platform allows us to provide high-fidelity, accurate, and balanced datasets. Along with a data-driven iterative process, we can help you reach the required model performance.

With Anyverse™, you can accurately simulate any camera sensor and decide which one will perform better with your perception system. No more complex and expensive experiments with real devices, thanks to our state-of-the-art photometric pipeline.

Need to know more?

Visit our website, anyverse.ai, anytime, or our LinkedIn, Instagram, and Twitter profiles.
