Meet Anyverse’s camera sensor simulation pipeline


In the first chapter of this insights series, we introduced Anyverse sensor simulation and why sensor-specific synthetic data is key to autonomous systems development.

Now we will focus on the Anyverse sensor simulation pipeline as a whole, and in the following chapters we will dive deep into each stage.


Let’s start with the basics: what does a generic camera sensor look like? It basically consists of three main blocks: the optical system, the imaging sensor, and the image processor.

[Figure: Anyverse’s camera sensor simulation pipeline]

The light coming from the scene passes through the optical system and reaches the sensor surface, where it is converted first into a voltage and then into a digital value; all of this happens in the imaging sensor. The image processor block then converts the digital values coming from the imaging sensor into the final RGB image.
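To make the three-block structure concrete, here is a minimal Python sketch that chains the blocks together. The function names, gain, and bit depth are illustrative assumptions for this post, not Anyverse’s actual API; a real imaging sensor and image processor model are far richer.

```python
import numpy as np

# A minimal sketch of the three-block camera model described above.
# All names and parameter values are illustrative assumptions.

def optical_system(scene_radiance: np.ndarray) -> np.ndarray:
    """Focus scene radiance onto the sensor plane (identity placeholder)."""
    return scene_radiance

def imaging_sensor(irradiance: np.ndarray, gain: float = 1.0,
                   bit_depth: int = 12) -> np.ndarray:
    """Convert light into a voltage (irradiance * gain) and then
    quantize it into digital numbers (DNs)."""
    voltage = irradiance * gain
    max_dn = 2 ** bit_depth - 1
    return np.clip(np.round(voltage * max_dn), 0, max_dn).astype(np.uint16)

def image_processor(raw_dn: np.ndarray) -> np.ndarray:
    """Turn raw digital values into a displayable 8-bit RGB image.
    (Here a simple normalization; real ISPs demosaic, white-balance, etc.)"""
    rgb = raw_dn.astype(np.float64) / raw_dn.max()
    return (rgb * 255).astype(np.uint8)

# Chain the three blocks, in the same order light flows through a camera.
scene = np.random.rand(4, 4, 3)  # toy scene radiance in [0, 1]
image = image_processor(imaging_sensor(optical_system(scene)))
```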

It is very important to highlight that light (energy) is simulated spectrally, as an electromagnetic wave, all the way through the rendering pipeline to the final digital values. For instance, light sources in the Anyverse platform emit using a characteristic spectrum profile that depends on the type of light source (LED, incandescent bulb, sky, etc.).
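As an illustration of what a characteristic spectrum profile can look like, the sketch below samples Planck’s blackbody law across the visible range, a standard physical model for an incandescent bulb’s emission (2856 K matches CIE standard illuminant A). This is a generic physics example, not Anyverse’s internal representation.

```python
import numpy as np

# Physical constants for Planck's law
h = 6.62607015e-34   # Planck constant [J*s]
c = 2.99792458e8     # speed of light [m/s]
k = 1.380649e-23     # Boltzmann constant [J/K]

def blackbody_spectrum(wavelengths_nm: np.ndarray,
                       temperature_k: float) -> np.ndarray:
    """Spectral radiance of a blackbody (Planck's law), a good model
    for an incandescent bulb's characteristic emission spectrum."""
    lam = wavelengths_nm * 1e-9  # nm -> m
    return (2 * h * c**2 / lam**5) / \
           (np.exp(h * c / (lam * k * temperature_k)) - 1)

# Sample the visible range in 10 nm steps, as a spectral renderer might.
wavelengths = np.arange(380.0, 781.0, 10.0)
incandescent = blackbody_spectrum(wavelengths, 2856.0)  # CIE illuminant A
incandescent /= incandescent.max()  # normalize for comparison
```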

Material behavior is also wavelength-dependent, which allows for typical effects like dispersion. This feature is key to an accurate camera sensor simulation.
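A classic way to model wavelength-dependent material behavior is a dispersive refractive index. The sketch below uses Cauchy’s empirical equation with coefficients roughly matching BK7 crown glass; the post does not specify the exact model Anyverse uses, so treat this as a generic illustration.

```python
import numpy as np

def refractive_index_cauchy(wavelength_nm: np.ndarray,
                            a: float = 1.5046,
                            b: float = 4.2e3) -> np.ndarray:
    """Cauchy's empirical equation n(lambda) = A + B / lambda^2,
    with coefficients roughly matching BK7 crown glass (B in nm^2)."""
    return a + b / wavelength_nm**2

wavelengths = np.array([400.0, 550.0, 700.0])  # blue, green, red [nm]
n = refractive_index_cauchy(wavelengths)
# n is larger at 400 nm than at 700 nm: blue light bends more than red,
# which is exactly the dispersion that produces a prism's rainbow.
```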


The camera sensor pipeline in detail

Now the fun starts: let’s play with the light!

As we said, once light is physically characterized, we can see (and simulate) how it behaves as it interacts with each component of the pipeline to create an image.

Typical camera modules include a set of filters between the optics and the sensor surface. These filters operate in the spectral domain, performing different transformations on the incoming light.

The picture below shows a common set of filters.

The infrared cut-off filter removes the near-infrared part of the spectrum, to which our eyes are not sensitive, while the optical low-pass filter slightly blurs the image to suppress aliasing and moiré artifacts. The color filter array (CFA) is another key component of the camera module. A photodetector measures light intensity with no wavelength specificity, so by itself it cannot separate color information; the CFA places a mosaic of color filters over the photosites so that each one samples a single color channel before the light reaches the sensor’s surface.
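To illustrate how such filters can be simulated, the sketch below applies an idealized IR cut-off transmission curve per wavelength and builds an RGGB Bayer CFA mask. The cutoff wavelength, roll-off shape, and pattern are illustrative assumptions, not measured filter data.

```python
import numpy as np

wavelengths = np.arange(380.0, 1001.0, 10.0)  # visible + near-IR [nm]

def ir_cutoff_transmission(wl: np.ndarray,
                           cutoff_nm: float = 650.0) -> np.ndarray:
    """Idealized IR cut-off filter: a smooth roll-off above cutoff_nm."""
    return 1.0 / (1.0 + np.exp((wl - cutoff_nm) / 10.0))

def bayer_cfa_mask(height: int, width: int) -> np.ndarray:
    """RGGB Bayer pattern: each photosite sees only one color channel.
    Returns an (H, W) array of channel indices 0=R, 1=G, 2=B."""
    mask = np.empty((height, width), dtype=np.uint8)
    mask[0::2, 0::2] = 0  # R
    mask[0::2, 1::2] = 1  # G
    mask[1::2, 0::2] = 1  # G
    mask[1::2, 1::2] = 2  # B
    return mask

# Spectral filtering is per wavelength: incoming radiance is simply
# multiplied by the filter's transmission curve before hitting the CFA.
scene_spectrum = np.ones_like(wavelengths)  # flat toy spectrum
filtered = scene_spectrum * ir_cutoff_transmission(wavelengths)
cfa = bayer_cfa_mask(4, 4)
```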

In the Anyverse hyperspectral synthetic data platform, scene energy is collected across the full visible electromagnetic spectrum, which is what makes the filtering process described above possible.
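Because radiance is kept per wavelength, turning it into per-channel sensor responses is just a weighted integral of the spectrum against each channel’s spectral sensitivity. The sketch below approximates that integral with a Riemann sum, using made-up Gaussian sensitivities in place of real CFA response curves.

```python
import numpy as np

wavelengths = np.arange(380.0, 781.0, 10.0)  # [nm]
d_lambda = 10.0                               # sampling step [nm]

def gaussian(wl: np.ndarray, center: float, width: float) -> np.ndarray:
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Assumed spectral sensitivities for R, G, B photosites (illustrative).
sensitivities = np.stack([
    gaussian(wavelengths, 600.0, 40.0),  # R
    gaussian(wavelengths, 540.0, 40.0),  # G
    gaussian(wavelengths, 460.0, 40.0),  # B
])

scene_radiance = np.ones_like(wavelengths)  # flat toy spectrum

# Per-channel response: sum over lambda of L(lambda) * S_c(lambda) * d_lambda,
# a discrete approximation of the spectral integral.
rgb = (sensitivities * scene_radiance).sum(axis=1) * d_lambda
```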

Don’t miss the next chapter

Now it’s time to go through and discover each stage of the Anyverse camera sensor simulation pipeline. Don’t miss the third chapter of this insights series to learn more about the optical system, or, put another way, what happens inside the camera when light hits the lens.


About Anyverse™

Anyverse™ helps you continuously improve your deep learning perception models to reduce your system’s time to market by applying new software 2.0 processes. Our synthetic data production platform provides high-fidelity, accurate, and balanced datasets. Combined with a data-driven iterative process, we can help you reach the required model performance.

With Anyverse™, you can accurately simulate any camera sensor and decide which one will perform best with your perception system. No more complex and expensive experiments with real devices, thanks to our state-of-the-art photometric pipeline.

Need to know more?

Visit our website, anyverse.ai, anytime, or our LinkedIn, Instagram, and Twitter profiles.
