New sensor? No problem! Physically-based sensor simulation to shorten development lifecycles

Still testing sensors in the real world? Save time and costs by simulating and testing different sensors and configurations instead.

Video Transcript

Slide 1

Good afternoon everyone.

 

Let me start with a little story that happened to me just last week. I had a call from one of our customers. I cannot tell you who the customer is, but I won’t be breaking any NDAs if I tell you what it was about. They basically said: “Javier, we finally realized and understood what Anyverse brings to the table. We have this new sensor we need to try on our perception system”.

 

What I’m going to tell you in the next 5 minutes is what we do for this customer and what we can do for customers just like them who develop perception systems for ADAS and other applications.

Slide 2

As you most probably know… there are two fundamental aspects to any perception system: the “eyes” of the system, which collect information from the environment…

Slide 3

…and the “brain” that processes the information and understands it to solve the perception problem. It could be for an autonomous car, a drone, a robot, an alarm system… You name it.

Slide 4

The “eyes” in a perception system are cameras that can be enhanced with other sensors such as LiDAR, RADAR, thermal sensors and others being developed.

Slide 5

The “brain” is a set of deep convolutional neural networks, implemented with different architectures, that need to be trained on extensive amounts of data so they can learn to interpret and understand the reality captured by the sensors.

Slide 6

These two fundamental components are tightly related and intertwined. You need to train the brain with the same set of eyes the system is going to use in real life. Otherwise, you can get undesired system behavior. In fact, the “eyes” you use have a direct impact on what the brain “sees” and “understands”.

 

What happens when you have to try a new set of “eyes” on your system, as happened to the customer who called me last week? Your “brain” needs to learn all over again. When developing perception systems, this happens more frequently than you think. Not only because there is a new sensor you need to try; you may also want to try a different configuration or rigging of your sensors. But learning all over again is easier said than done.

Slide 7

The learning process for ADAS is complex, and something you are probably familiar with:

 

  • For very specialized systems you may be developing the camera sensor and the deep learning models at the same time. You probably need to manufacture prototypes in silicon to run experiments. How efficient is that? It may take months before you can validate different deep learning models for your system.

  • You have to rig a car with cameras, drive thousands of miles, curate the images, label them, and then train and validate your deep learning models. Sometimes you won’t even have enough variability in the images to avoid bias in the system. Deep learning models are sensitive to changes in the sensor and camera ISP configurations. How many experiments can you run with this setup?

Slide 8

What if you could simulate the sensor instead of building it for experimentation? The good news is that we can.

 

We use synthetic data for that. But not just any synthetic data. Simulating sensors requires physically-based simulation of the spectral behavior of light through a synthetically generated scene: calculating the value of every pixel in the sensor while considering all the physical phenomena happening in the light-scene interaction and on the sensor itself. Every sensor has its own characteristics, but they can be simulated if you have the spectral information of the light reaching it.
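To ground that idea, here is a minimal sketch in Python (not Anyverse’s actual code) of how spectral information can feed a sensor model: it integrates the spectral irradiance reaching one pixel against an assumed quantum-efficiency (QE) curve to estimate the electrons collected during an exposure, clipped to an assumed full-well capacity. Every value and name in it is an illustrative assumption.

```python
import numpy as np

# Illustrative sketch only: spectral irradiance -> photo-electrons for one pixel.
# All sensor parameters below are made-up assumptions, not a real sensor spec.

H = 6.626e-34   # Planck constant (J*s)
C = 3.0e8       # speed of light (m/s)

def pixel_electrons(wavelengths_nm, spectral_irradiance, qe_curve,
                    pixel_area_m2=(3.0e-6) ** 2,   # 3 um pixel (assumption)
                    exposure_s=1.0 / 60.0,         # 1/60 s exposure (assumption)
                    well_capacity=10_000):         # full-well capacity (assumption)
    """Integrate spectral irradiance (W/m^2/nm) against a QE curve to estimate
    the electrons collected by one pixel, clipped to the full-well capacity."""
    photon_energy = H * C / (wavelengths_nm * 1e-9)   # J per photon
    # Photons per nm reaching the pixel during the exposure.
    photon_flux = spectral_irradiance * pixel_area_m2 * exposure_s / photon_energy
    # Electrons generated, weighted by the sensor's QE at each wavelength.
    electrons = np.trapz(photon_flux * qe_curve, wavelengths_nm)
    return min(electrons, well_capacity)              # full-well saturation

# Toy usage: flat low-light irradiance and a Gaussian QE curve peaking at 550 nm.
wl = np.linspace(400, 700, 301)
irradiance = np.full_like(wl, 1e-4)                   # W/m^2/nm (made-up value)
qe = 0.6 * np.exp(-((wl - 550.0) / 80.0) ** 2)
print(f"electrons collected: {pixel_electrons(wl, irradiance, qe):.0f}")
```

Change the QE curve, pixel size, or well capacity and the same incoming spectrum produces a different raw signal, which is exactly why the “brain” has to be trained against the sensor it will see in production.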

 

If you can simulate different sensor specs and configurations, you can run as many experiments as you need to develop your system’s deep learning brain without leaving the lab and without all the extra work required with real-world images and sensors.

Slide 9

So, how is this possible? How does it work?

 

Based on our own physically-based multispectral renderer, we have implemented a sensor simulation pipeline that allows you to try different sensor and ISP parameters as a post-process.

 

  1. First, the heavy lifting. The renderer traces rays from the light sources in the scene and from the camera (what is called Bidirectional Path Tracing), taking into consideration all the physical effects on the different objects in the scene, the simulation of the camera lens, and the effects of the shutter, to obtain all the spectral information of the scene that finally reaches the sensor.

  2. Then, the post-process. With one single render you can simulate sensor parameters such as pixel size, well capacity, and QE curves, apply frequency filters if you want, or try different Color Filter Arrays. This way, we generate raw data for every configuration (see the sketch after this list).

  3. Finally, this raw data can go through different ISP configurations to generate the final images corresponding to each sensor-ISP parameter combination.
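As an illustration of steps 2 and 3 above, here is a small, hedged sketch in Python: it takes one spectral render (a random cube stands in for real renderer output), applies two hypothetical sensor configurations (different CFA response curves and well capacities) to produce raw Bayer mosaics, and then runs a deliberately simple ISP (normalization plus gamma). None of the response curves, noise models, or function names reflect Anyverse’s actual implementation; they only show how one render can feed many sensor-ISP combinations.

```python
import numpy as np

# Illustrative only: one spectral render reused for several hypothetical
# sensor/ISP configurations. Curves, capacities, and the ISP are assumptions.

rng = np.random.default_rng(0)

def gaussian_response(wl, center, width, peak):
    """Toy spectral response curve for one color channel of a CFA."""
    return peak * np.exp(-((wl - center) / width) ** 2)

def render_to_raw(spectral_cube, wl, cfa, well_capacity):
    """Turn a spectral render (H x W x len(wl)) into a noisy RGGB Bayer mosaic."""
    h, w, _ = spectral_cube.shape
    # Per-channel electron images: integrate the spectrum against each response.
    channels = {c: np.trapz(spectral_cube * r, wl, axis=2) for c, r in cfa.items()}
    raw = np.zeros((h, w))
    raw[0::2, 0::2] = channels["R"][0::2, 0::2]   # R sites
    raw[0::2, 1::2] = channels["G"][0::2, 1::2]   # G sites
    raw[1::2, 0::2] = channels["G"][1::2, 0::2]   # G sites
    raw[1::2, 1::2] = channels["B"][1::2, 1::2]   # B sites
    raw = rng.poisson(np.clip(raw, 0, well_capacity)).astype(float)  # shot noise
    return np.clip(raw, 0, well_capacity)          # full-well saturation

def simple_isp(raw, well_capacity, gamma=2.2):
    """Minimal stand-in ISP: normalize to [0, 1] and apply gamma encoding."""
    return np.clip(raw / well_capacity, 0.0, 1.0) ** (1.0 / gamma)

# One "render": a random 8x8 spectral cube standing in for real renderer output.
wl = np.linspace(400, 700, 31)                     # wavelengths in nm
cube = rng.uniform(0.0, 100.0, size=(8, 8, wl.size))

configs = {
    "sensor_A": {"cfa": {"R": gaussian_response(wl, 610, 50, 0.55),
                         "G": gaussian_response(wl, 540, 50, 0.60),
                         "B": gaussian_response(wl, 460, 50, 0.50)},
                 "well": 12_000},
    "sensor_B": {"cfa": {"R": gaussian_response(wl, 600, 70, 0.45),
                         "G": gaussian_response(wl, 530, 70, 0.50),
                         "B": gaussian_response(wl, 470, 70, 0.40)},
                 "well": 8_000},
}

for name, cfg in configs.items():
    raw = render_to_raw(cube, wl, cfg["cfa"], cfg["well"])
    image = simple_isp(raw, cfg["well"])
    print(f"{name}: mean output level = {image.mean():.3f}")
```

In a real pipeline the mosaic would still go through demosaicing, white balance, denoising, and the rest of the ISP stages, but even this toy version shows how every sensor-ISP combination yields a different image from the same spectral render.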

Slide 10

As you can see, with one single render you can generate several images as captured by different sensor-ISP configurations to help you make the right decision about your sensor.

Slide 11

The result is that with sensor simulation you can shorten perception system development lifecycles. Iterate as many times as you need:

  • Simulate different sensors and configurations.
  • Generate datasets for your sensor configurations, with enough scene variability and accurate ground truth information.
  • Run your deep learning model experiments: train and validate with the datasets (which could even be a mix of synthetic and already existing real-world data).
  • Review model performance.
  • Adjust sensor configurations.
  • Start again… as many times as you need.
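To make the loop explicit, here is a purely hypothetical sketch in Python of the iteration just described. The helpers simulate_dataset, train_and_validate, and adjust are stand-ins for your own tooling; they are not part of any real Anyverse API, and the numbers are invented.

```python
# Hypothetical sketch of the simulate -> train -> review -> adjust loop.

def simulate_dataset(sensor_config):
    """Stand-in for dataset generation with a given sensor/ISP configuration."""
    return {"config": sensor_config, "num_images": 10_000}

def train_and_validate(dataset):
    """Stand-in for a training run; returns a made-up validation score in [0, 1]."""
    # Pretend larger pixels help performance in this toy example.
    return min(1.0, 0.70 + 0.05 * dataset["config"]["pixel_size_um"])

def adjust(sensor_config):
    """Stand-in for the 'adjust sensor configuration' step."""
    sensor_config = dict(sensor_config)
    sensor_config["pixel_size_um"] += 0.5
    return sensor_config

config = {"pixel_size_um": 2.0, "cfa": "RGGB"}
target_score = 0.90

for iteration in range(10):                       # iterate as many times as needed
    dataset = simulate_dataset(config)            # simulate + generate datasets
    score = train_and_validate(dataset)           # train + validate
    print(f"iteration {iteration}: config={config} score={score:.2f}")
    if score >= target_score:                     # review model performance
        break
    config = adjust(config)                       # adjust sensor configuration
```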

Slide 12

Finally, if you have any doubts about all this, I want to share with you the takeaway from a paper on neural network generalization and the impact of camera parameters.

This basically says that models trained with physically based multispectral images perform very well when inferring on real-world images.

 

And my takeaways:

  • Using real sensors for ADAS development can be impractical, time-consuming, inaccurate, and expensive.
  • Using physically based sensor simulation and synthetic data looks like the practical choice to cut costs and shorten development lifecycles.

 

I hope that, like the customer who called me last week, you now have a better understanding of the value that Anyverse can bring to the development of your perception system for ADAS.

Thank you.

About Anyverse™

Anyverse™ helps you continuously improve your deep learning perception models to reduce your system’s time to market by applying new software 2.0 processes. Our synthetic data production platform allows us to provide high-fidelity, accurate, and balanced datasets. Combined with a data-driven iterative process, we can help you reach the required model performance.

With Anyverse™ you can accurately simulate any camera sensor and decide which one will perform better with your perception system. No more complex and expensive experiments with real devices, thanks to our state-of-the-art photometric pipeline.

Need to know more?

Visit our booth during the event, our website anyverse.ai anytime, or our LinkedIn, Facebook, and Twitter profiles.

