Most common camera positions
Most OEMs position the cameras between the A-pillar and the center stack. Let’s get to know them in a little more detail (a configuration sketch follows the list):
- Dashboard camera (in front of the steering wheel): It is focused on the driver and mainly used for driver state monitoring. It has a fairly narrow field of view (FoV) of 90° or less. In this location, we usually find sensor technologies such as RGB-IR, ToF, or NIR.
- Rearview mirror camera: This camera is used to monitor both the driver and the occupants. It provides a wider FoV, normally around 130°, and can reach up to approximately 160°. It also uses sensor technologies such as RGB-IR, ToF, or NIR.
- Center stack camera: This placement is very similar to the previous one: it is used to monitor the driver and the occupants, it has an FoV of approximately 130°, and it usually relies on the same sensor technologies.
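To make these placements easier to compare side by side, here is a minimal configuration sketch in Python. The class and field names are illustrative assumptions, not part of any real product API:

```python
from dataclasses import dataclass

@dataclass
class CameraConfig:
    """Hypothetical schema for one in-cabin camera placement (illustrative only)."""
    location: str              # mounting point in the cabin
    fov_deg: float             # horizontal field of view, in degrees
    targets: tuple[str, ...]   # who or what the camera monitors
    sensors: tuple[str, ...]   # typical sensor technologies at this spot

# The three common placements described above
COMMON_PLACEMENTS = [
    CameraConfig("dashboard", 90.0, ("driver",), ("RGB-IR", "ToF", "NIR")),
    CameraConfig("rearview_mirror", 130.0, ("driver", "occupants"), ("RGB-IR", "ToF", "NIR")),
    CameraConfig("center_stack", 130.0, ("driver", "occupants"), ("RGB-IR", "ToF", "NIR")),
]
```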


Parameters that influence camera positioning
Camera placement is not trivial: we are looking for the exact point that provides an optimal FoV and camera angle, which in turn achieves good performance in combination with the chosen sensor technology.
- Field of View: It is the maximum angle that the camera can capture. In a nutshell, the FoV answers the question: “How much can the camera see?”
- Camera angles (pitch, yaw, and roll): They refer to the specific rotation at which the camera is mounted. These angles have to be chosen carefully so that the camera covers the areas critical for an in-cabin monitoring system (face, hands, upper body, objects on the seats, etc.); the sketch after this list shows how FoV and camera angles interact geometrically.
- Most commonly used sensor technologies:
- RGB-IR: This technology captures both RGB and infrared images with a single sensor. It is used for biometric authentication, face and gesture detection, and more, and may be paired with active infrared illumination.
- ToF: Time-of-flight systems use active infrared illumination and can detect people and objects, as well as their absolute position, movement, and shape.
- NIR: Near-infrared cameras are the most effective in night driving and low-visibility conditions. They also typically use active infrared illumination to light the cabin without bothering the driver and passengers, since IR light is not visible to the human eye.
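To see how these parameters fit together, here is a minimal geometric sketch. It derives a horizontal FoV from the standard pinhole relation FoV = 2·atan(w / 2f) and checks whether a target point (say, the driver's face) falls inside a camera's angular FoV after applying pitch, yaw, and roll. The coordinate convention (y up, z forward) and all the numbers are illustrative assumptions; real tools differ in their axis conventions:

```python
import numpy as np

def horizontal_fov_deg(sensor_width_mm: float, focal_length_mm: float) -> float:
    """Horizontal FoV from the pinhole relation: FoV = 2 * atan(w / 2f)."""
    return np.degrees(2.0 * np.arctan(sensor_width_mm / (2.0 * focal_length_mm)))

def rotation_matrix(pitch: float, yaw: float, roll: float) -> np.ndarray:
    """Rotation from pitch (about x), yaw (about y), and roll (about z), in radians.
    With y up and z forward, positive pitch tilts the view down in this convention."""
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    cr, sr = np.cos(roll), np.sin(roll)
    rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    rz = np.array([[cr, -sr, 0], [sr, cr, 0], [0, 0, 1]])
    return rz @ ry @ rx

def sees(camera_pos, camera_forward, target_pos, fov_deg) -> bool:
    """True if target_pos lies within the camera's (symmetric) angular FoV."""
    to_target = np.asarray(target_pos, dtype=float) - np.asarray(camera_pos, dtype=float)
    to_target /= np.linalg.norm(to_target)
    cos_angle = np.clip(np.dot(camera_forward, to_target), -1.0, 1.0)
    return np.degrees(np.arccos(cos_angle)) <= fov_deg / 2.0

# Example: a mirror-height camera pitched 15 degrees down toward the driver's face
forward = rotation_matrix(pitch=np.radians(15), yaw=0.0, roll=0.0) @ np.array([0.0, 0.0, 1.0])
print(horizontal_fov_deg(sensor_width_mm=6.4, focal_length_mm=2.1))   # ~113 degrees
print(sees(camera_pos=(0.0, 1.2, 0.0), camera_forward=forward,
           target_pos=(0.0, 0.8, 0.7), fov_deg=130.0))                # True
```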
Now that we have reviewed the main camera locations, we must ask ourselves a few things. The fact that these are the most frequently implemented locations does not mean that they are the only ones or the ones that best optimize our in-cabin monitoring systems. These positions have proven effective for some in-cabin monitoring use cases, but new use cases, mostly those involving the rear passengers, require other camera positions to avoid occlusions that can confuse the perception systems.
Testing new camera locations in the real world is expensive, time-consuming, and resource-intensive, not to mention the privacy issues involved when filming people. These obstacles often hinder innovation and slow down technology development.
But it’s not all bad news: have you heard of in-cabin monitoring simulation and, more specifically, camera positioning simulation?
Advantages of camera positioning simulation
Simulation allows you to:
- Experiment with new camera locations and discover the best camera position for each target you need to monitor.
- Find the optimal camera position for your system and sensors, and optimize the in-cabin monitoring system as a whole.
- Simulate different sensors in different locations and check how they behave in different environments and lighting situations. Is a NIR camera with active IR illumination enough to “see” the rear seats in low illumination conditions? (A toy version of such a sweep is sketched after this list.)
- And last but not least, it allows you to prepare your system to face the Euro NCAP evaluation of Driver State Monitoring systems with confidence.
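As a flavor of what such a study could look like, here is a toy sweep over camera positions, sensor technologies, and cabin illumination levels that keeps the best-scoring setup for rear-seat coverage. Everything here is a hypothetical stand-in: simulate_capture is a stub with a made-up heuristic, and in practice it would call your simulation platform and score the rendered frames with your perception model:

```python
import itertools

def simulate_capture(position: str, sensor: str, cabin_lux: float) -> float:
    """Placeholder for a real simulation call. Returns a rear-seat visibility
    score in [0, 1]; the heuristic below is made up so the sweep runs end to end."""
    sensor_base = {"RGB-IR": 0.6, "ToF": 0.8, "NIR": 0.9}[sensor]
    position_reach = {"rearview_mirror": 0.9, "center_stack": 0.7, "roof_rear": 1.0}[position]
    # Active-IR sensors are assumed insensitive to visible-light levels here
    light_penalty = 0.0 if sensor in ("NIR", "ToF") else max(0.0, 1.0 - cabin_lux / 50.0)
    return sensor_base * position_reach * (1.0 - light_penalty)

positions = ["rearview_mirror", "center_stack", "roof_rear"]
sensors = ["RGB-IR", "NIR", "ToF"]
lighting_lux = [1.0, 10.0, 500.0]  # night, dusk, daylight (illustrative values)

# Exhaustively evaluate every combination and keep the best setup
results = {
    (pos, sen, lux): simulate_capture(pos, sen, lux)
    for pos, sen, lux in itertools.product(positions, sensors, lighting_lux)
}
best = max(results, key=results.get)
print("best setup:", best, "score:", round(results[best], 2))
```

The point is not the toy numbers but the workflow: in simulation, adding a candidate position or sensor is one more entry in a list rather than a new physical prototype.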
About Anyverse™
Anyverse™ helps you continuously improve your deep learning perception models to reduce your system’s time to market by applying new Software 2.0 processes. Our synthetic data production platform provides high-fidelity, accurate, and balanced datasets. Combined with a data-driven iterative process, we can help you reach the required model performance.
With Anyverse™, you can accurately simulate any camera sensor and decide which one will perform best with your perception system. No more complex and expensive experiments with real devices, thanks to our state-of-the-art photometric pipeline.
Need to know more?