How simulating light and sensors helps build better perception systems
Developing computer vision systems is not an easy task. We are talking about systems that need to understand what they see in the real world and react accordingly. But how do they see the world? And how do you teach a machine what the real world is and how to interpret it?
When it comes to self-driving cars ...
In the case of autonomous vehicles, there is still a heated debate about whether optical cameras are enough for self-driving cars or whether other types of sensors are necessary.
Everybody wants to solve the same problem: engineer vehicles that understand the world around them and can react accordingly, in any situation, for safe autonomous driving. Simplifying a lot, at the end of the day, solving the problem boils down to:
- Select and implement all the necessary deep learning models (neural network architectures) to make the right decisions for every possible situation
- Have data to train the neural networks: huge amounts of it, as varied as possible to avoid bias and other domain shift problems, and broad enough to cover all possible situations

Synthetic data as a “real” alternative
If you have correctly characterized the light sources, including the sun and the sky, and every material in a 3D scene, you know exactly the amount of energy per wavelength reaching the camera sensor. With this spectral information you can then simulate the physics of the sensor itself: how it transforms that energy into electrons, then into a voltage that, after some digital processing, finally gives you an image as if it had been taken with the real camera.
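To make that chain concrete, here is a minimal sketch of such a pixel pipeline in Python: spectral energy to photons, photons to photoelectrons (with shot and read noise), and electrons to a quantized digital number. Every constant, parameter value, and function name below is an illustrative assumption, not Anyverse's actual implementation.

```python
import numpy as np

H = 6.626e-34   # Planck constant (J*s)
C = 2.998e8     # speed of light (m/s)

def pixel_response(spectral_irradiance, wavelengths, exposure_s=0.01,
                   pixel_area_m2=(3.0e-6) ** 2, quantum_efficiency=0.6,
                   full_well_e=10_000, read_noise_e=2.0, adc_bits=12,
                   rng=np.random.default_rng(0)):
    """Turn spectral irradiance at one pixel (W/m^2/nm, uniformly sampled
    at `wavelengths` in nm) into a digital number, step by step."""
    # 1. Energy per wavelength band collected by the pixel during exposure.
    energy_j = spectral_irradiance * pixel_area_m2 * exposure_s      # J/nm
    # 2. Photon count: divide by the energy of one photon at each
    #    wavelength, then integrate over the spectrum (rectangle rule).
    photon_energy_j = H * C / (wavelengths * 1e-9)
    photons = float(np.sum(energy_j / photon_energy_j)
                    * (wavelengths[1] - wavelengths[0]))
    # 3. Photoelectrons, with Poisson (shot) noise, Gaussian read noise,
    #    and full-well clipping.
    electrons = rng.poisson(quantum_efficiency * photons)
    electrons = min(electrons + rng.normal(0.0, read_noise_e), full_well_e)
    # 4. "Voltage" to ADC: quantize the full-well range into 2^bits levels.
    return int(np.clip(electrons / full_well_e, 0, 1) * (2 ** adc_bits - 1))

# Example: a flat 1e-4 W/m^2/nm irradiance across the visible band.
wl = np.linspace(400, 700, 61)
print(pixel_response(np.full_like(wl, 1e-4), wl))
```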

Add a procedural engine to generate thousands of variations of the 3D scene, changing camera position, lighting, and weather conditions. Leverage the processing power of the cloud to run everything in parallel, and you have the Anyverse™ synthetic data platform. It features a proprietary physics-based synthetic image render engine that uses an accurate light transport model and provides a physical description of lights, cameras, and materials, allowing for a very detailed simulation of the amount of light reaching the camera sensor and an equally detailed simulation of the sensor itself to produce the final color image.
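As a rough illustration of the procedural side, the sketch below samples scene variations and fans the renders out to parallel workers; in production the workers would be cloud instances. `render_scene` and the parameter ranges are hypothetical placeholders, not the platform's API.

```python
import random
from concurrent.futures import ProcessPoolExecutor

WEATHER = ["clear", "overcast", "rain", "fog"]

def sample_variation(seed):
    """Draw one reproducible set of scene parameters."""
    rng = random.Random(seed)
    return {
        "camera_height_m": rng.uniform(1.2, 2.0),
        "camera_yaw_deg": rng.uniform(-15.0, 15.0),
        "sun_elevation_deg": rng.uniform(5.0, 70.0),
        "weather": rng.choice(WEATHER),
    }

def render_scene(params):
    # Placeholder for the actual physics-based render call.
    return f"rendered variation: {params}"

if __name__ == "__main__":
    variations = [sample_variation(seed) for seed in range(1000)]
    with ProcessPoolExecutor() as pool:  # stands in for cloud workers
        for result in pool.map(render_scene, variations[:4]):
            print(result)
```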
No light, no perception. It's that simple...
Why is this important? No light, no perception; it's that simple. For us, no light, no simulation. And without simulation, your synthetic data may not be that useful to train and test deep learning-based perception systems: it may be harder for the neural networks to generalize to real-world images. At the end of the day, that is every perception system's goal: to understand and interpret the real world.
Different academic papers on the subject demonstrate that a machine learning model based on deep neural networks and trained on a synthetic dataset generated with camera sensor effects generally performs better than one trained without them.
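As a rough illustration of what "generated with camera sensor effects" can mean in practice, the sketch below degrades a clean rendered image with a simple shot-noise, read-noise, and quantization model before it goes into a training set. The gain and noise figures are assumptions chosen for illustration only.

```python
import numpy as np

def apply_sensor_effects(clean_rgb, gain_e_per_dn=4.0, read_noise_e=2.0,
                         rng=np.random.default_rng(0)):
    """clean_rgb: float array in [0, 1]. Returns an 8-bit image with shot
    noise, read noise, and quantization applied."""
    electrons = clean_rgb * 255.0 * gain_e_per_dn            # to electrons
    noisy = rng.poisson(electrons) + rng.normal(0.0, read_noise_e,
                                                electrons.shape)
    dn = np.clip(noisy / gain_e_per_dn, 0, 255)              # back to DN
    return dn.astype(np.uint8)

clean = np.random.default_rng(1).random((4, 4, 3))  # stand-in for a render
print(apply_sensor_effects(clean))
```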
Sensor simulation goes beyond data
Bear in mind that a faithful sensor simulation goes beyond data. If you are developing your own sensors, you can make design decisions without the complexity and cost of prototyping on silicon, and develop the best "eye-brain" combination for your perception problem without leaving the lab. It also lets you apply the efficient agile practices of classic software development to software 2.0 development, a term coined by Andrej Karpathy in 2017 to describe the paradigm shift in building deep learning-based systems.
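In that spirit, sensor design can be iterated like any other software artifact: sweep candidate configurations, regenerate the data, retrain, and compare. The sketch below is purely illustrative; `simulate_dataset` and `train_and_score` are hypothetical stubs standing in for your rendering pipeline and training loop.

```python
SENSOR_CANDIDATES = [
    {"name": "A", "pixel_um": 2.0, "full_well_e": 6_000, "adc_bits": 10},
    {"name": "B", "pixel_um": 3.0, "full_well_e": 10_000, "adc_bits": 12},
]

def simulate_dataset(sensor):
    # Placeholder: re-render the training set through this sensor model.
    return {"sensor": sensor, "images": []}

def train_and_score(dataset):
    # Placeholder: train the perception model on `dataset` and return a
    # validation metric such as mAP. Stubbed to 0.0 here.
    return 0.0

scores = {s["name"]: train_and_score(simulate_dataset(s))
          for s in SENSOR_CANDIDATES}
print(scores)  # pick the configuration whose simulated data trains best
```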
Save time & costs - Simulate sensors!
Physically-based sensor simulation to train, test, and validate your computer perception deep learning model
About Anyverse™
Anyverse™ helps you continuously improve your deep learning perception models and reduce your system's time to market by applying new software 2.0 processes. Our synthetic data production platform lets us provide high-fidelity, accurate, and balanced datasets. Along with a data-driven iterative process, we can help you reach the required model performance.
With Anyverse™ you can accurately simulate any camera sensor and decide which one will perform better with your perception system. No more complex and expensive experiments with real devices, thanks to our state-of-the-art photometric pipeline.
Need to know more?
Visit our website anyverse.ai anytime, or find us on our LinkedIn, Facebook, and Twitter social media profiles.