Why physics matters


Reproducing the physics of light accurately

Digital simulation is rapidly becoming mainstream in the process of training and improving machine learning models. Real-world data is, of course, 100% real, but it is also expensive, time-consuming to annotate, and statistically biased, usually lacking corner cases and low-frequency events. Human drivers, after all, are rarely trained to handle dangerous or unexpected events precisely because those events are statistically improbable. The more improbable the event, the higher the risk of not handling it properly.

However, self-driving vehicles and other autonomous systems cannot be granted the same leniency, as people show extremely low tolerance for mistakes made by artificial intelligence.

It’s becoming evident that high levels of AI robustness and safety cannot be achieved by training on real-world data alone.

There is, however, a question of how much accuracy a synthetic dataset needs in order to reproduce the real world with low bias and enough variability to boost the AI's confidence levels. Video games have proven to be a valid option for AI training, as their visual quality has reached high levels of realism in recent years. Other approaches use game development platforms such as Unreal and Unity.

They can produce visually appealing images and sequences though they typically sacrifice physical accuracy in favor of real-time performance — a must for a video game.

At Anyverse, we argue that physical correctness is essential to avoid biased learning. Think of situations where complex lighting becomes a key factor in driving. Physically-based light transport simulation is computationally expensive and not supported by current video game technologies.
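For context, the standard textbook statement of what "physically-based light transport" means (a general formulation, not a description of Anyverse's internal implementation) is the rendering equation, which describes how the radiance leaving a surface point combines emitted light with light arriving from every direction, weighted by the surface's reflectance:

$$L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (n \cdot \omega_i)\, d\omega_i$$

Unbiased renderers estimate this recursive integral with Monte Carlo sampling, which is exactly what makes the computation expensive; real-time engines replace it with cheaper approximations such as rasterization, screen-space effects, and precomputed lighting.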

Strong sun glare from wet surfaces, car headlights scattered through fog or heavy rain, traffic lights diffracted by tiny water drops on the camera lens, traffic lights reflected in puddles or glass facades and blurred by motion: these are just a few examples of complex visual conditions where an AI could easily be fooled.

Anyverse uses a bidirectional, unbiased, spectral ray-tracing approach to produce high-fidelity images. All this jargon means that Anyverse comes much closer to reproducing the physics of light accurately than current video game platforms do. How can this difference be measured in practice? While we are currently working on benchmarks, it is not difficult to imagine situations where complex reflections of traffic lights are missing because of render inaccuracy. This ultimately leads to the AI's inability to assess the situation correctly, dramatically increasing the risk of an accident. That's why we firmly believe that physics matters.
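To make the "spectral" part of that jargon concrete, here is a minimal, self-contained Python sketch of spectral shading. It is an illustration only, not Anyverse's actual pipeline: the light spectrum, the reflectance curve, and the Gaussian fits of the CIE color matching functions are all hypothetical placeholders. Wavelengths are sampled across the visible range, a wavelength-dependent light spectrum is multiplied by a wavelength-dependent reflectance, and the result is accumulated into CIE XYZ. An RGB-only renderer collapses all of this into three fixed channels before any light transport happens, which is where color errors under unusual lighting creep in.

```python
import numpy as np

def cie_xyz_bar(lam):
    """Rough Gaussian approximations of the CIE 1931 x̄, ȳ, z̄ curves (wavelength in nm).
    Illustrative fits only, not tabulated CIE data."""
    x = 1.056 * np.exp(-0.5 * ((lam - 599.8) / 37.9) ** 2) \
        + 0.362 * np.exp(-0.5 * ((lam - 442.0) / 16.0) ** 2)
    y = 0.821 * np.exp(-0.5 * ((lam - 568.8) / 46.9) ** 2)
    z = 1.217 * np.exp(-0.5 * ((lam - 437.0) / 11.8) ** 2)
    return np.array([x, y, z])

def light_spd(lam):
    """Hypothetical smooth light spectrum (arbitrary units)."""
    return np.exp(-0.5 * ((lam - 560.0) / 120.0) ** 2)

def surface_reflectance(lam):
    """Hypothetical reflectance: a reddish surface, higher at long wavelengths."""
    return 0.05 + 0.9 / (1.0 + np.exp(-(lam - 600.0) / 20.0))

def shade_spectral(n_samples=64, seed=0):
    """Monte Carlo estimate of the reflected color in CIE XYZ.

    Wavelengths are sampled uniformly over [380, 780] nm; each sample carries
    the full light * reflectance product, so wavelength-dependent effects
    (colored light sources, metamerism) are preserved instead of being
    collapsed into three fixed RGB channels up front.
    """
    rng = np.random.default_rng(seed)
    lam = rng.uniform(380.0, 780.0, n_samples)
    radiance = light_spd(lam) * surface_reflectance(lam)
    # Unbiased estimator of the integral: (b - a) * mean of the integrand samples.
    xyz = (cie_xyz_bar(lam) * radiance).mean(axis=1) * (780.0 - 380.0)
    return xyz

if __name__ == "__main__":
    print("Estimated XYZ:", shade_spectral())
```

Averaging more wavelength samples per pixel drives the variance of the estimate toward zero without ever introducing systematic color error, which is what "unbiased" means in this context.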

Save time & costs - Simulate sensors!

Physically-based sensor simulation to train, test, and validate your computer perception deep learning model

About Anyverse™

Anyverse™ helps you continuously improve your deep learning perception models and reduce your system’s time to market by applying new software 2.0 processes. Our synthetic data production platform provides high-fidelity, accurate, and balanced datasets. Combined with a data-driven iterative process, it can help you reach the required model performance.

With Anyverse™ you can accurately simulate any camera sensor and decide which one will perform best with your perception system. No more complex and expensive experiments with real devices, thanks to our state-of-the-art photometric pipeline.

Need to know more?

Come visit our booth during the event, check out our website anyverse.ai anytime, or follow us on LinkedIn, Facebook, and Twitter.

