ANYVERSE

Delving into Anyverse’s sensor simulation: light, optics, and sensors


In the previous article of this Anyverse sensor simulation insights series, we showed what our camera sensor simulation pipeline looks like and its three main blocks: the optical system, the imaging sensor, and the image processor.
Anyverse’s camera sensor simulation pipeline

Now it’s time to learn what happens inside the camera when light hits the lens and keep discovering why sensor-specific synthetic data is key to autonomous systems development.

Let’s get started!

The optical system

First step: light gets into the camera through the lens.

Anyverse does a geometric simulation of the optical system using our ray tracing technology. This technology allows us to simulate extreme optics distortion as shown in the picture below (fisheye lens).


Fisheye lens
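To illustrate the kind of distortion shown above, here is a minimal sketch (not Anyverse's actual optics model) comparing an ideal rectilinear projection with the equidistant mapping, one common fisheye model. The focal length is an illustrative value:

```python
import math

def rectilinear_radius(focal_mm: float, theta_rad: float) -> float:
    """Ideal pinhole/rectilinear projection: r = f * tan(theta)."""
    return focal_mm * math.tan(theta_rad)

def fisheye_radius(focal_mm: float, theta_rad: float) -> float:
    """Equidistant fisheye projection: r = f * theta."""
    return focal_mm * theta_rad

# Image-plane radius of a ray entering 60 degrees off-axis, f = 4 mm
theta = math.radians(60)
print(rectilinear_radius(4.0, theta))  # ~6.93 mm
print(fisheye_radius(4.0, theta))      # ~4.19 mm
```

The fisheye mapping compresses wide-angle rays toward the image center, which is why straight lines near the edge of the frame appear strongly curved.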

Optics cause several important effects, such as lens shading. Because of the view angle, rays reaching the corners of the sensor travel a longer distance than rays hitting the center, so less energy arrives at those areas. Since focal length determines the view angle, this parameter has an impact on lens shading too. Another source of lens shading is rays that are blocked by the lens assembly.

A camera lens system is typically a complex assembly of multiple lenses. However, the laws of geometrical optics allow us to replace this assembly with a single lens of appropriate shape to make things simpler.

The drawback of this approximation is that we cannot compute the rays blocked inside the lens assembly. This is a minor trade-off we had to make.


Lens shading
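The single-lens replacement mentioned above is usually described by the thin-lens equation, 1/f = 1/d_o + 1/d_i, which relates focal length, object distance, and image distance. A small sketch with illustrative values (not Anyverse's internal code):

```python
def image_distance(focal_mm: float, object_mm: float) -> float:
    """Thin-lens equation 1/f = 1/d_o + 1/d_i, solved for the image distance d_i."""
    return 1.0 / (1.0 / focal_mm - 1.0 / object_mm)

# A 50 mm lens focused on a subject 2 m away
print(image_distance(50.0, 2000.0))  # ~51.28 mm behind the lens
```

As the object moves toward infinity, the image distance converges to the focal length, which is why a lens focused at infinity sits exactly one focal length from the sensor.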

We can simulate lens blurring too, also known as depth of field. Again, the ray-tracing technique allows us to compute an accurate depth-of-field effect. Focal length and aperture are the parameters with the highest impact on this effect. The picture below shows the effect amplified for the sake of visualization.


Lens blurring
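How focal length and aperture drive depth of field can be sketched with the standard hyperfocal-distance formulas. This is the textbook geometric approximation, not Anyverse's ray-traced computation, and the circle-of-confusion value is illustrative:

```python
import math

def hyperfocal_mm(focal_mm: float, f_number: float, coc_mm: float) -> float:
    """Hyperfocal distance H = f^2 / (N * c) + f."""
    return focal_mm ** 2 / (f_number * coc_mm) + focal_mm

def dof_limits_mm(focal_mm: float, f_number: float,
                  coc_mm: float, subject_mm: float):
    """Near and far limits of acceptable sharpness around the focus distance."""
    h = hyperfocal_mm(focal_mm, f_number, coc_mm)
    near = subject_mm * (h - focal_mm) / (h + subject_mm - 2 * focal_mm)
    far = (subject_mm * (h - focal_mm) / (h - subject_mm)
           if subject_mm < h else math.inf)
    return near, far

# 50 mm lens at f/2.8, 0.03 mm circle of confusion, subject at 5 m
near, far = dof_limits_mm(50.0, 2.8, 0.03, 5000.0)
print(near, far)  # roughly 4.29 m to 6.0 m of acceptable sharpness
```

Stopping down to f/8 in the same setup roughly doubles the sharp zone, matching the intuition that a smaller aperture deepens the depth of field.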

The imaging sensor

The next stop on light's journey to becoming a picture is the imaging sensor.

The core of the system is the electronic device that replaces the film of classic analog cameras. It converts the energy of the photons reaching it into a voltage, which is then converted into the digital values that represent every pixel in the final image.
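A toy sketch of that photon-to-digital-value chain follows. The quantum efficiency, full-well capacity, and ADC depth are made-up illustrative values, not any real sensor's datasheet figures, and real pipelines add noise sources this sketch omits:

```python
def pixel_digital_number(photons: float, qe: float = 0.6,
                         full_well_e: int = 10000, adc_bits: int = 12) -> int:
    """Toy photon -> electron -> voltage -> digital-number chain."""
    # Quantum efficiency converts photons to photoelectrons, clipped at full well.
    electrons = min(photons * qe, full_well_e)
    # Charge-to-voltage conversion is linear, so the voltage is proportional
    # to the electron count; the ADC then quantizes the normalized voltage.
    fraction_of_full_scale = electrons / full_well_e
    return int(fraction_of_full_scale * (2 ** adc_bits - 1))

print(pixel_digital_number(5000))    # 5000 photons -> 3000 e- -> DN 1228
print(pixel_digital_number(100000))  # saturated pixel -> DN 4095
```

The saturation in the second call is the full-well clipping that produces blown-out highlights in overexposed images.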

To simulate all the processes that happen inside an imaging sensor, you don't need to model all the electrical circuitry and its components (transistors, capacitors, etc.). It is similar to simulating the dynamics of fluids: you don't need to simulate every single fluid molecule.

In the infographic below we describe the most important transformations that are modeled by Anyverse’s camera sensor simulator.

The spectral radiance coming from the scene we are simulating, after going through the optical system, is converted into spectral irradiance at the sensor surface, expressed in watts per square meter per nanometer. This is the amount of energy per wavelength reaching the sensor's surface, and it is exactly the information we have at the input of the imaging sensor block.
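To make those units concrete, spectral irradiance in W/m²/nm can be converted into a photon count per pixel using the photon energy E = hc/λ. A small sketch with illustrative pixel-size, exposure, and irradiance values:

```python
PLANCK_JS = 6.626e-34   # Planck constant, J*s
LIGHT_MS = 2.998e8      # speed of light, m/s

def photons_per_pixel(irradiance_w_m2_nm: float, wavelength_nm: float,
                      pixel_area_m2: float, exposure_s: float,
                      band_nm: float = 1.0) -> float:
    """Photons one pixel collects from a narrow spectral band of the irradiance."""
    # Energy delivered to the pixel: irradiance * bandwidth * area * time.
    energy_j = irradiance_w_m2_nm * band_nm * pixel_area_m2 * exposure_s
    # Energy of a single photon at this wavelength: E = h * c / lambda.
    photon_energy_j = PLANCK_JS * LIGHT_MS / (wavelength_nm * 1e-9)
    return energy_j / photon_energy_j

# 0.01 W/m^2/nm of green light on a 3 um pixel for a 10 ms exposure
print(photons_per_pixel(0.01, 550.0, (3e-6) ** 2, 0.01))  # ~2500 photons
```

Counts of this magnitude are small enough that photon shot noise, which scales with the square root of the count, becomes a visible part of the image.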

Would you like to see what’s happening at every imaging sensor stage?

Download our free eBook and get the full story. Learn about all the filters the spectral irradiance needs to go through before reaching the photodetectors on the sensor surface as well as the transformations before reaching the final stage: the image processor.

Don’t miss the next chapter

Don’t miss the third chapter of this insights series to learn more about how we process the RAW data coming from the sensor to adapt it to what the human eye sees and what different devices can display.

Read other chapters >>>

About Anyverse™

Anyverse™ helps you continuously improve your deep learning perception models to reduce your system’s time to market by applying new software 2.0 processes. Our synthetic data production platform allows us to provide high-fidelity, accurate, and balanced datasets. Along with a data-driven iterative process, we can help you reach the required model performance.

With Anyverse™, you can accurately simulate any camera sensor and decide which one will perform best with your perception system. No more complex and expensive experiments with real devices, thanks to our state-of-the-art photometric pipeline.

Need to know more?

Visit our website, anyverse.ai, anytime, or our LinkedIn, Instagram, and Twitter profiles.

Looking to start your Synthetic Data journey or need help with your current project? We'd love to know more.

Looking for the right synthetic data to speed up your system? Enter the Anyverse platform now.

Let's talk about synthetic data!
