No spectral information, no faithful sensor simulation

Faithful sensor simulation and spectral information… what’s the connection between them? If you have had the chance to read our eBook on Physically-based camera sensor simulation, you will already have a pretty good idea of the answer. But why is this important? No light, no perception; hence, no light simulation, no sensor simulation. It is that simple.

Taking as a starting point that an accurate camera sensor simulation needs the light coming from the scene characterized as spectral radiance, that is, as an electromagnetic wave, let’s follow its journey from the scene to the sensor. This is the only way to simulate the physical phenomena happening in the optical system and the sensor.¹

Road to faithful sensor simulation

Let’s start at the beginning: the light coming from the scene. It travels through the optical system until it reaches the sensor surface, where it is converted into a voltage and, finally, into a digital value (all this happens in the imaging sensor).

An important fact to keep in mind: light (energy) is simulated as an electromagnetic wave all the way through the sensor simulation pipeline, up to the final digital values. This is key and absolutely necessary for an accurate camera sensor simulation.
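To make this concrete, here is a minimal sketch (in Python with NumPy, not Anyverse’s actual code) of what “spectral” means in practice: instead of a single RGB triplet, every light sample carries a value per wavelength, and every stage of the pipeline operates on that per-wavelength array. The 400–700 nm range and 10 nm step are assumptions for this sketch, not Anyverse’s actual sampling grid.

```python
import numpy as np

# Illustrative spectral representation: radiance sampled per wavelength.
wavelengths_nm = np.arange(400.0, 701.0, 10.0)   # sampling grid [nm]
radiance = np.full_like(wavelengths_nm, 0.02)    # spectral radiance
                                                 # [W / (m^2 * sr * nm)]

# Every stage that follows (optics, filters, photodetector) transforms
# this per-wavelength array instead of a single RGB triplet.
```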

[Figure: Camera sensor pipeline]

Optical system – “The front door”

In its first stage, the spectral radiance coming from the scene goes through the lens system (as a rule, a complex assembly of multiple lens elements) and is converted into spectral irradiance at the imaging sensor surface.
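As a rough illustration of this stage, a thin-lens approximation relates scene radiance to sensor irradiance through the classic camera equation E = L·π/(4N²)·cos⁴θ, where N is the f-number and θ the off-axis angle. The sketch below uses this simplification; a full simulation of a compound lens (coatings, vignetting, aberrations) is far more involved, and the function name and parameters here are ours, not Anyverse’s.

```python
import numpy as np

def radiance_to_irradiance(radiance, f_number, theta_rad=0.0, transmission=1.0):
    """Spectral irradiance at the sensor from scene spectral radiance.

    Thin-lens camera equation: E = L * pi / (4 * N^2) * cos^4(theta).
    Applied element-wise, so it works on a whole spectrum at once.
    """
    return (radiance * np.pi / (4.0 * f_number ** 2)
            * np.cos(theta_rad) ** 4 * transmission)

wavelengths_nm = np.arange(400.0, 701.0, 10.0)
radiance = np.full_like(wavelengths_nm, 0.02)           # [W / (m^2 * sr * nm)]
irradiance = radiance_to_irradiance(radiance, f_number=2.8)  # on-axis pixel
```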

The filters – “Preparing the wave”

Not all the information carried by the incoming light is needed to create the final image: some of it is discarded and some is transformed. That is why the spectral irradiance goes through several filters (the low-pass filter, the infrared cut-off filter, and the color filter array) before reaching the photodetectors on the sensor surface.

At this point, a very appropriate question would be: why is the spectral information of the energy key to this process? Because these filters work based on wavelength and only let certain bands of the spectrum pass through. For example, the infrared cut-off filter blocks the energy coming from the infrared band.
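In simulation terms, each filter is just a wavelength-dependent transmission curve, and filtering is a per-wavelength multiplication. The curves below are toy shapes (an ideal step for the IR cut-off, a Gaussian bump for a green CFA pixel) chosen to keep the sketch short; real filters use measured transmission data.

```python
import numpy as np

wavelengths_nm = np.arange(400.0, 1001.0, 10.0)    # visible + near-IR [nm]
irradiance = np.full_like(wavelengths_nm, 0.05)    # [W / (m^2 * nm)], toy value

# Idealized IR cut-off filter: passes up to ~700 nm, blocks the rest.
ir_cutoff = np.where(wavelengths_nm <= 700.0, 1.0, 0.0)

# Toy transmission of a "green" color-filter-array pixel: a bump around 550 nm.
green_cfa = np.exp(-0.5 * ((wavelengths_nm - 550.0) / 40.0) ** 2)

# Filtering is simply a per-wavelength product of irradiance and transmissions.
filtered_irradiance = irradiance * ir_cutoff * green_cfa
```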

The imaging sensor – “The core of the system”

The energy (and its spectral information) now reaches the sensor surface and its photodetectors. If the imaging sensor is the core of the pipeline, the photodetector is the heart of the imaging sensor. In a nutshell, it collects photons and converts them into an electrical current. To simulate it, the first thing we need to do is convert the energy coming from the scene into photons.

Since we know the amount of energy per wavelength coming from the scene, we can compute the number of photons per wavelength by dividing that energy by the energy of a single photon, given by Planck’s equation (E = hc/λ).
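Here is a minimal sketch of that conversion, assuming the spectral irradiance at a pixel, the pixel area, and the exposure time are known (all names and values are illustrative, not Anyverse’s API):

```python
import numpy as np

H = 6.62607015e-34   # Planck constant [J*s]
C = 2.99792458e8     # speed of light [m/s]

def photons_per_bin(irradiance, wavelengths_nm, pixel_area_m2, exposure_s):
    """Photon count per wavelength bin from spectral irradiance.

    Energy collected per bin = irradiance * pixel area * exposure * bin width.
    Dividing by the photon energy E = h * c / lambda gives the photon count.
    A fuller simulation would also draw from a Poisson distribution with
    this count as the mean to model photon shot noise.
    """
    bin_width_nm = np.gradient(wavelengths_nm)          # width of each bin
    energy_j = irradiance * pixel_area_m2 * exposure_s * bin_width_nm
    photon_energy_j = H * C / (wavelengths_nm * 1e-9)   # E = hc/lambda [J]
    return energy_j / photon_energy_j

wavelengths_nm = np.arange(400.0, 701.0, 10.0)
irradiance = np.full_like(wavelengths_nm, 0.05)         # [W / (m^2 * nm)]
photons = photons_per_bin(irradiance, wavelengths_nm,
                          pixel_area_m2=(3e-6) ** 2,    # 3 um pixel
                          exposure_s=1 / 60)
```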

Now it’s time to transform the photons entering the photodetector into electrical current.

And… as you may be wondering right now, how does this happen? It happens because the photons absorbed by the sensor’s silicon excite electrons into the conduction band, producing the electrical current.

The sensor’s Quantum Efficiency (QE) curves are used to calculate this, as they provide the ratio between the photons entering the photodetector and the number of electrons that come out. This ratio depends on the wavelength (energy, in the quantum world), and the curves are specific to each sensor, depending on its materials, design, and manufacturing process.
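Putting the last two steps together, applying a QE curve is again a per-wavelength operation: multiply the photon count in each bin by the QE at that wavelength, then sum to get the photo-generated electrons. The Gaussian-shaped QE curve below is a stand-in; real curves come from the sensor’s datasheet or from measurement.

```python
import numpy as np

wavelengths_nm = np.arange(400.0, 701.0, 10.0)

# Stand-in QE curve for a silicon sensor: peaks near 560 nm at 80% and
# falls off toward the band edges. Real curves are measured per sensor.
qe = 0.8 * np.exp(-0.5 * ((wavelengths_nm - 560.0) / 120.0) ** 2)

photons = np.full_like(wavelengths_nm, 1000.0)   # photons per bin (toy values)

# QE is the per-wavelength ratio electrons-out / photons-in, so the total
# photo-generated charge is a QE-weighted sum over the whole spectrum.
electrons = float(np.sum(photons * qe))
```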

No spectral information, no faithful sensor simulation

If we can draw one conclusion from all this (appealing to quantum efficiency), it would only be fair to say… no spectral information, no faithful sensor simulation.

Now, to generate synthetic data (images) with a faithful sensor simulation, one that provides a higher level of accuracy when you use the data to train deep learning-based perception systems, you need a renderer that can generate all the spectral information from a 3D scene. That is exactly what Anyverse’s hyperspectral renderer does.

Anyverse’s renderer simulates light sources and materials with physical accuracy at the wavelength level and generates the spectral information that is later used to produce the final image through Anyverse’s sensor simulation pipeline.

Both the renderer and the sensor simulation pipeline are the core of Anyverse’s hyperspectral synthetic data platform, which helps our customers generate the data they need to develop advanced sensing and perception systems.

¹ All the information and processes described here come from Anyverse’s sensor simulation solution. Download our eBook, Anyverse camera sensor simulation, for further information.


About Anyverse™

Anyverse™ helps you continuously improve your deep learning perception models to reduce your system’s time to market by applying new software 2.0 processes. Our synthetic data production platform allows us to provide high-fidelity, accurate, and balanced datasets. Along with a data-driven iterative process, we can help you reach the required model performance.

With Anyverse™ you can accurately simulate any camera sensor and decide which one will perform better with your perception system. No more complex and expensive experiments with real devices, thanks to our state-of-the-art photometric pipeline.

Need to know more?

Visit our website anyverse.ai anytime, or find us on our LinkedIn, Instagram, and Twitter social media profiles.