Rolling shutter simulation to strengthen your computer vision ML model

Why simulate the rolling shutter artifact when gathering data for deep learning-based advanced perception systems?

Let’s start from the premise that the vast majority of cameras used in autonomous systems (if not all) are equipped with CMOS sensors, and a good number of these use a rolling shutter. These cameras and sensors act as the eyes of a perception system, with an underlying AI brain that needs to be fed, optimized, and validated with data. To a large extent, the robustness of this AI relies on the accuracy of the data used for its development – understanding accuracy as its capacity to generalize to the real world.

Now, at high speeds, the rolling shutter introduces artifacts in the images that distort the real world. Should we simulate the rolling shutter to account for those artifacts when generating synthetic data to train a perception system?

“It is about providing an accurate simulation of what sensors perceive, and not what a human eye would see, to achieve the highest levels of trustworthiness from autonomous systems for the good of all.”

What are the rolling shutter artifacts?

Rolling shutter artifacts take the form of distortion appearing in images captured by cameras that record the frame line by line on the image sensor instead of capturing the entire frame all at once.

The rolling shutter sensor scans from the top of the image to the bottom, so the top of the frame is recorded slightly earlier than the bottom. This slight lag can create unintended distortions when filming fast-moving objects across a scene, as is the case with autonomous vehicles, where both the speed of other vehicles and the vehicle’s own speed come into play.

Rolling shutter artifacts appear on cameras with a standard CMOS sensor. Cameras with a CCD sensor use a global shutter: they record the entire image all at once, so you only get motion blur artifacts. However, these cameras are significantly more expensive and difficult to manufacture.

Rolling shutter artifacts simulation - Is it possible?

The simulation of the rolling shutter artifacts is just one stage of the comprehensive and accurate simulation of the imaging sensor that Anyverse implemented in its sensor simulation pipeline. Let’s start with the exposure and shutters.

The exposure is the time the photodetectors are active: the longer they are active, the more photons they collect, and hence the more intensity in the final image. To control the exposure, CMOS sensors use electronic shutters, meaning there is circuitry that controls when and for how long the photodetectors collect photons.

CCD sensors use electronic shutters as well but there is a difference. CCD sensors expose all photodetectors at the same time and for the same amount of time. Every photodetector sees the same point in time as the others. This is the global shutter.

In contrast, CMOS sensors expose rows of photodetectors sequentially; this is called a rolling shutter (as we learned at the beginning of this article).
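The difference between the two shutters boils down to when each row of photodetectors is exposed. The following minimal sketch (not Anyverse’s actual pipeline; the `ShutterModel` class and its parameters are illustrative assumptions) models a global shutter as a rolling shutter with zero per-row delay:

```python
from dataclasses import dataclass

@dataclass
class ShutterModel:
    exposure_s: float      # how long each photodetector collects photons
    row_readout_s: float   # delay between consecutive row starts (0 = global shutter)

    def row_exposure_window(self, row: int) -> tuple:
        """Return the (start, end) exposure time of a given sensor row."""
        start = row * self.row_readout_s
        return (start, start + self.exposure_s)

# Illustrative numbers: 1 ms exposure, 30 µs per-row readout delay
global_shutter = ShutterModel(exposure_s=1e-3, row_readout_s=0.0)
rolling_shutter = ShutterModel(exposure_s=1e-3, row_readout_s=30e-6)
```

With a global shutter, every row shares the same exposure window; with a rolling shutter, row 500 starts 500 × 30 µs = 15 ms after row 0, which is exactly why fast-moving objects land in different positions in different rows.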

Anyverse™ simulates both global and rolling shutters. Below is how the image of a moving car might look when captured with a rolling shutter camera:

[Image: a moving car captured with a rolling shutter camera]

If the same rendered image of the same moving car is captured with a global shutter camera, this is how it would look:

[Image: the same moving car captured with a global shutter camera]

The only object moving in that scene is the vehicle. You can clearly see how the vehicle is distorted by the sequential row exposure in the rolling shutter case.

The CMOS circuitry is less complex than the CCD circuitry and this is a key factor to consider when going to very high sensor resolutions.

The cost is the main reason why most of the sensors used in the AV industry today use CMOS technology. The disadvantage of rolling shutter versus global shutter is the distortion introduced in the images caused by the different exposure times for different rows.

Anyverse™ - The synthetic data platform for advanced perception

Accelerate the development of your perception system with hyperspectral data that mimics exactly what your sensors see

Why you should simulate the rolling shutter artifact

Dataset usage is already common and is becoming key for the development of computer vision ML models, and for sensor design, calibration, and validation, both in early and advanced stages. But increasingly complex models, as well as increasingly sophisticated sensors, demand datasets with a higher level of precision: data that mimics exactly what sensors see, to train the models that process those inputs.

When the cameras in the system use a rolling shutter, they see high-speed objects with the artifacts the rolling shutter introduces. If your synthetic data doesn’t include those artifacts, your model most likely won’t generalize well to the real world when it faces high-speed objects. You absolutely need to simulate the rolling shutter artifacts.

Developing your model with data that simulates the rolling shutter – or, aiming higher, that accurately simulates the whole sensor – brings several advantages.

[Bonus] Latest release in rolling shutter simulation: Vibration curves

Recently, Anyverse introduced a new feature to its sensor simulation pipeline, and more specifically to the rolling shutter artifact: vibration curves, a development motivated by the needs of several customers.

The vehicle body transmits high-frequency vibrations that CMOS sensors with this type of image acquisition are able to “see”. We introduce this vibration synthetically to reproduce the cases observed in the real world and thus obtain more realistic images.

Now the user can apply vibration curves to the camera devices to simulate the high-frequency vibration produced by the vehicle. Combined with the rolling shutter simulation, this allows creating images and sequences with realistic rolling shutter artifacts.
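The interaction between vibration and rolling shutter can be sketched as follows: because each row is exposed at a different instant, every row samples the vibration curve at a different point, turning a high-frequency camera shake into a wavy distortion within a single frame. This is a minimal NumPy illustration with an assumed sinusoidal vibration (the function names and parameters are hypothetical, not Anyverse’s API):

```python
import numpy as np

def vibration_offsets(n_rows: int, row_readout_s: float,
                      vib_freq_hz: float, vib_amp_px: float) -> np.ndarray:
    """Horizontal pixel offset per row from a sinusoidal vibration curve.

    Each row samples the vibration at its own exposure time, so a vibration
    faster than the frame rate shows up as an in-frame wobble.
    """
    t = np.arange(n_rows) * row_readout_s  # exposure start time of each row
    return vib_amp_px * np.sin(2 * np.pi * vib_freq_hz * t)

def apply_vibration(frame: np.ndarray, offsets: np.ndarray) -> np.ndarray:
    """Shift each row by its vibration offset (np.roll wraps at the edges)."""
    out = np.empty_like(frame)
    for r in range(frame.shape[0]):
        out[r] = np.roll(frame[r], int(round(offsets[r])))
    return out

# 1080 rows at 30 µs per row = ~32 ms readout; a 200 Hz vibration completes
# several cycles within one frame, producing the characteristic wobble.
offsets = vibration_offsets(1080, 30e-6, vib_freq_hz=200.0, vib_amp_px=4.0)
```

A global shutter camera would instead sample the vibration once per frame, so the same shake would cause only whole-frame jitter or blur, never this row-by-row waviness.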

About Anyverse™

Anyverse™ helps you continuously improve your deep learning perception models to reduce your system’s time to market by applying new software 2.0 processes. Our synthetic data production platform provides high-fidelity, accurate, and balanced datasets. Along with a data-driven iterative process, we can help you reach the required model performance.

With Anyverse™, you can accurately simulate any camera sensor and decide which one will perform best with your perception system. No more complex and expensive experiments with real devices, thanks to our state-of-the-art photometric pipeline.

Need to know more?

Visit our website anytime, or check our LinkedIn, Instagram, and Twitter profiles.

Let's talk about synthetic data!