ANYVERSE

Reaching the last stage of Anyverse’s sensor simulation pipeline: the image processor


In the previous article of this Anyverse sensor simulation insights series, we followed light on its journey through the optical system and the imaging sensor on its way to becoming a picture.

Now it’s time to learn more about the image processor and the processing steps needed to turn the RAW digital data coming from the sensor into a beautiful color image.

Join us on this journey!

The image processor

To consume the RAW digital data coming from the sensor as a beautiful color image, we need to process it and adapt it to what the human eye sees and what different devices can display. This last part of the sensor simulation pipeline is known as the ISP (image signal processor), and it is responsible for converting the digital values coming out of the sensor into the final color.

Anyverse™ provides the RAW data, before any ISP process, in case you want to apply your own ISP.

ISPs - The camera manufacturer’s “secret sauce”

Every camera manufacturer tunes its ISP to give the final pictures the “special look” it wants. Below are the different processes and transformations that the ISP applies to the digital values:

[Diagram: the ISP processing stages]

Demosaicing

Demosaicing is the process of reconstructing the full color per pixel, given that the capture process used color filters. As explained earlier, every photodetector captures only one specific color. The image coming from the sensor, also known as RAW data, is a grayscale image that has to be processed to compute a color image.

In the picture below you can see a RAW image with a 12-bit depth resulting from the imaging sensor pipeline. This is the input to the ISP.


Raw image 12-bit depth

The reconstructed image after the demosaicing process is already a color image. This color image is the representation of the image as seen by the imaging sensor using the spectral sensitivities given by the color filter array.
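As a rough sketch of the idea (not Anyverse’s actual ISP, which we don’t have access to), a minimal bilinear demosaicing of a Bayer mosaic can be written in a few lines of NumPy. The function name and the RGGB layout here are illustrative assumptions:

```python
import numpy as np

def demosaic_bilinear(raw):
    """Bilinear demosaicing of a single-channel RGGB Bayer mosaic.

    Illustrative sketch only: real ISPs use far more sophisticated,
    edge-aware demosaicing algorithms.
    """
    h, w = raw.shape
    # Boolean masks marking where each color was actually sampled (RGGB layout).
    r_mask = np.zeros((h, w), bool); r_mask[0::2, 0::2] = True
    b_mask = np.zeros((h, w), bool); b_mask[1::2, 1::2] = True
    g_mask = ~(r_mask | b_mask)

    def box_sum(a):
        # Sum over each pixel's 3x3 neighborhood (zero-padded at the borders).
        p = np.pad(a, 1)
        return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3))

    rgb = np.zeros((h, w, 3))
    for c, mask in enumerate((r_mask, g_mask, b_mask)):
        sampled = np.where(mask, raw, 0.0)
        # Average only over the neighbors where this color was sampled.
        rgb[..., c] = box_sum(sampled) / box_sum(mask.astype(float))
    return rgb
```

Each missing color value is filled in by averaging the nearest pixels where that color was actually measured, which is why fine detail and edges are where simple demosaicing breaks down.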

RAW RGB to CIE XYZ

However, the spectral sensitivities of the human eye are different, so we need to convert the color from the raw RGB color space to the CIE XYZ color space.

The CIE color model is a color space model created by the International Commission on Illumination known as the Commission Internationale de l’Eclairage (CIE). It is also known as the CIE XYZ color space or the CIE 1931 XYZ color space.

The CIE color model is a mapping system that uses tristimulus values (a combination of 3 color values that are close to red/green/blue), plotted in a 3D space. When these values are combined, they can reproduce any color that a human eye can perceive.
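In practice, this color space conversion is a per-pixel 3×3 matrix multiply. The matrix below is the standard linear-sRGB-to-XYZ (D65) matrix, used here only as a stand-in to show the mechanics; the true raw-RGB-to-XYZ matrix must be derived from the sensor’s own spectral sensitivities:

```python
import numpy as np

# Stand-in matrix: the real raw-RGB -> XYZ matrix depends on the sensor's
# spectral sensitivities. This is the standard linear sRGB -> XYZ matrix
# (D65 white point), shown only to illustrate the mechanics.
RGB_TO_XYZ = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])

def rgb_to_xyz(rgb):
    # rgb: array of shape (..., 3) with linear values; einsum applies the
    # 3x3 matrix to every pixel at once.
    return np.einsum("ij,...j->...i", RGB_TO_XYZ, rgb)
```

Note that the middle row sums to 1.0: the Y component of XYZ is luminance, so a pure white input maps to Y = 1.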

The CIE specification is designed to accurately represent every single color the human eye can perceive. Since two images are worth 2,000 words, see below for a comparison between a raw RGB image and a CIE XYZ image.


Raw RGB

(How the camera sees the world)


CIE XYZ

(How the human eye sees the world)

In the comparison above, notice the reddish tone of the image as seen by the camera sensor. This is because even though the IR cut-off filter is removing the range above 700 nm, the red filter still has a wider range of wavelengths than the green and blue filters.

Below is the Spectral Power Distribution (SPD) of the RGB filters used to produce all the images throughout this sensor simulation insights series.


White balance

Something great about the human eye is that under different lighting conditions it still perceives white (and other colors) consistently. Cameras can’t do that; they compute colors strictly from the spectral profile of the light they capture.

Somehow we have to process those colors so they look correct to the human eye. This process is known as white balance, and there are different approaches to it.

Below, for example, is the CIE XYZ image white balanced using the gray world approach, where we force the average color in the scene to be gray.


White balance (gray world)

Whereas in this other one, we used the white world approach, where we force the brightest color in the scene to be white.


White balance (white world)
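Both approaches boil down to computing per-channel gains. A minimal sketch of each (the function names are ours for illustration, not Anyverse’s API):

```python
import numpy as np

def gray_world(img):
    # Gray world: scale each channel so the scene's average color becomes
    # neutral gray (all channel means equal).
    means = img.reshape(-1, 3).mean(axis=0)
    return img * (means.mean() / means)

def white_world(img):
    # White world: scale each channel so its brightest value maps to 1.0,
    # i.e. the brightest color in the scene becomes white.
    maxes = img.reshape(-1, 3).max(axis=0)
    return img / maxes
```

Gray world works well on scenes with varied colors but fails when one color dominates (a forest, a sunset); white world is sensitive to clipped highlights. Real ISPs use more robust illuminant-estimation heuristics.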

To summarize, we now have a white balanced image in the human eye reference color space: CIE XYZ. We may think we are finished, but the problem is that we don’t consume the digital image directly; we need to print it or display it on a device. Like the human eye, every device has a specific set of colors it can display: its own color space.

CIE XYZ to sRGB

So we need to convert the image from the CIE XYZ color into the display device color space. The question is: what color should we send to the display device so the represented colors match the CIE XYZ colors?

There are many target device color spaces: sRGB, Adobe RGB 98, ECI RGB, PAL/SECAM, NTSC 1953, etc. With Anyverse™ you can simulate any of them. The one most widely used for display devices such as computer, phone, or tablet screens is the sRGB color space.
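For sRGB, the conversion back from XYZ is again a per-pixel 3×3 matrix multiply, this time using the standard XYZ-to-linear-sRGB matrix for a D65 white point (gamma encoding comes afterwards, as the next section shows):

```python
import numpy as np

# Standard XYZ -> linear sRGB matrix (D65 white point).
XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def xyz_to_linear_srgb(xyz):
    # Per-pixel matrix multiply; out-of-gamut values are clipped to [0, 1],
    # the simplest (though not the best-looking) gamut-mapping strategy.
    rgb = np.einsum("ij,...j->...i", XYZ_TO_SRGB, xyz)
    return np.clip(rgb, 0.0, 1.0)
```

A sanity check: feeding in the D65 white point should give pure white, since that matrix is the inverse of the sRGB-to-XYZ transform.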

Below is the final image using the sRGB color space.


sRGB color space (before gamma correction)

A bit dark isn’t it? That’s because we haven’t applied gamma correction yet.

Gamma correction

The way the human eye perceives brightness is not linear: it is more sensitive to dark tones. That’s why images without gamma correction look very dark. Gamma correction cancels out the display response curve, effectively increasing the brightness of the final image.
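For sRGB, this correction is standardized: a short linear segment near black, and a power of 1/2.4 everywhere else. A sketch of the encoding step:

```python
import numpy as np

def srgb_gamma(linear):
    # Standard sRGB transfer function: a linear segment near black
    # (slope 12.92), and a 1/2.4 power curve above the 0.0031308 threshold.
    linear = np.clip(linear, 0.0, 1.0)
    return np.where(linear <= 0.0031308,
                    12.92 * linear,
                    1.055 * np.power(linear, 1.0 / 2.4) - 0.055)
```

Applying this curve lifts the mid-tones considerably (linear 0.5 maps to roughly 0.74), which is exactly the brightening seen between the previous image and the gamma-corrected one below.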

Below is the image with gamma correction applied.


Gamma correction

Can't wait until next week?

Download now our Camera Sensor Simulation eBook and learn everything you need to know about our camera sensor simulation pipeline

Don’t miss the next chapter - Validation

Don’t miss the next chapter of this insights series to find out how accurate, and how close to real sensors, the Anyverse sensor simulation pipeline described throughout this series really is.

Read other chapters >>>

About Anyverse™

Anyverse™ helps you continuously improve your deep learning perception models to reduce your system’s time to market by applying new software 2.0 processes. Our synthetic data production platform allows us to provide high-fidelity, accurate, and balanced datasets. Along with a data-driven iterative process, we can help you reach the required model performance.

With Anyverse™, you can accurately simulate any camera sensor and decide which one will perform better with your perception system. No more complex and expensive experiments with real devices, thanks to our state-of-the-art photometric pipeline.

Need to know more?

Visit our website, anyverse.ai, anytime, or our LinkedIn, Instagram, and Twitter profiles.

Looking for the right synthetic data to speed up your system? Please, enter Anyverse now.

Client Story

Would you like to know how Cron AI has improved LiDAR simulation accuracy with physically correct synthetic data?

Let's talk about synthetic data!
