ANYVERSE

In-cabin monitoring use cases and data needs


In-cabin monitoring systems*. They no longer need an introduction… I’m pretty sure you know there is a whole new regulation built around them, and soon they will be mandatory in both autonomous and human-driven vehicles.

*It should be noted that we are referring to in-cabin camera systems.

Broadly speaking, there are three types of interior camera/sensing systems:

  • Driver Monitoring Systems (DMS): A camera system that accurately monitors the driver’s condition (from various perspectives) to inform the vehicle’s ADAS/AD driving decisions.

  • Occupant Monitoring Systems (OMS): A camera system that monitors passengers other than the driver, such as those in the front passenger and rear seats, paying special attention to children. An OMS also monitors the entire vehicle interior during autonomous driving, as well as the riding environment and ride comfort.

  • DMS + OMS: A camera system that covers both applications with one camera.

But what are the in-cabin monitoring use cases? What are the main applications or functions of these systems?

In-cabin monitoring use cases

There are several in-cabin monitoring use cases, and each has different data needs, hence different data challenges that you need to overcome if you want to successfully train the deep learning models behind these systems.

Driver monitoring:

  • Driver operating status (on a call, eating, drinking, etc.)
  • Driver driving status (fatigue/tension, drowsiness/arousal, distraction, drinking, etc.)
  • Driver authentication
  • Sideways/looking away detection (driver posture, etc.)
  • Seatbelt detection
  • Detection of other objects
  • Gesture control

If we look into the details, it’s more complicated than it might seem… To develop a DMS capable of accurately detecting whether the driver is drowsy or distracted, for example, the system also needs to accurately perform:

  • Gaze detection: Is the driver looking at the road? Is the driver looking in the rearview mirror? … 
  • Body pose detection: Is the driver losing attention? How is the driver using their hands? …
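To make the eye-state part of this concrete: gaze and drowsiness pipelines typically start from facial landmarks. A minimal sketch (not Anyverse’s implementation) of the widely used eye aspect ratio (EAR), which reduces six eye landmarks to a single eye-openness score — the landmark coordinates and threshold below are made-up illustrative values:

```python
import math

# Eye aspect ratio (EAR): a common proxy for eye closure in drowsiness
# detection. The six (x, y) landmarks per eye, ordered p1..p6, are assumed
# to come from an upstream facial-landmark model (hypothetical here).
def eye_aspect_ratio(eye):
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    # Two vertical distances (p2-p6, p3-p5) over the horizontal one (p1-p4).
    return (dist(eye[1], eye[5]) + dist(eye[2], eye[4])) / (2.0 * dist(eye[0], eye[3]))

EAR_THRESHOLD = 0.2  # illustrative value; would be tuned on real data

open_eye = [(0, 0), (1, 2), (3, 2), (4, 0), (3, -2), (1, -2)]
closed_eye = [(0, 0), (1, 0.2), (3, 0.2), (4, 0), (3, -0.2), (1, -0.2)]

print(eye_aspect_ratio(open_eye))    # 1.0 -> eyes open
print(eye_aspect_ratio(closed_eye))  # 0.1 -> below threshold, eyes shut
```

A real DMS would smooth this score over consecutive frames before flagging drowsiness, since a single low-EAR frame may just be a blink.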

These base behavior detection tasks add an extra challenge for the system, and hence for the data required to train it.

Occupants monitoring:

  • Occupant status (safety confirmation, riding condition/posture, etc.), drive recording
  • Occupant movement detection (food and drink, smartphone operation, smoking, etc.)
  • Occupant condition (ride state, posture/safety, etc.)
  • Seatbelt detection
  • Pet detection

A huge challenge when monitoring occupants is privacy. Images of people are, rightfully, protected by law. Collecting images and getting the rights to use them is not simple and can limit the variability of our training data, consequently risking a biased system. Even more so when the occupants are children.
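One way to catch that kind of bias early is to audit the metadata distribution of the training set. A minimal sketch, assuming hypothetical per-image metadata records with an `age_group` attribute:

```python
from collections import Counter

# Compute the share of each value of a metadata attribute across a dataset.
# The record schema and attribute names here are illustrative assumptions.
def attribute_balance(records, attribute):
    counts = Counter(r[attribute] for r in records)
    total = sum(counts.values())
    return {value: n / total for value, n in counts.items()}

dataset = [
    {"age_group": "adult"}, {"age_group": "adult"},
    {"age_group": "adult"}, {"age_group": "child"},
]

shares = attribute_balance(dataset, "age_group")
print(shares)  # {'adult': 0.75, 'child': 0.25} -> children underrepresented
```

The same check applies to any attribute you need variability in: lighting conditions, seat position, clothing, occlusion, and so on.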

We should pay special attention to the children monitoring use case:

Children monitoring:

  • Child detection
  • Child seat detection

You want to make sure the system can distinguish between children of different ages, detect whether a child is alone in the car, whether children are properly placed in their child seats (when applicable), and whether they are fully secured…

In-cabin monitoring use cases and data needs

Developing in-cabin monitoring systems requires data, a lot of data…

Something interesting is that, for all these use cases, the perception system behind the interior monitoring system will most likely be based on deep learning models, and these models need large amounts of data to be trained (and to perform well on the problem for which they were designed).

Keep in mind that different use cases may need completely different data setups and require you to overcome several significant challenges, as we saw. This means generating thousands of images, dealing with worldwide privacy regulations and children’s rights, having enough variability to avoid bias, and having sufficiently accurate ground truth. You are going to need precise ground truth about gaze direction and body pose, for example. These challenges are not easy to overcome with real-world data alone.
Maybe the appropriate question now is… How do you plan to train your in-cabin monitoring system, and how can you overcome these data challenges?
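To make the ground-truth requirement concrete, a per-frame annotation record might look like the following sketch. The field names and coordinate conventions are illustrative assumptions, not a standard annotation format:

```python
from dataclasses import dataclass, field

# Hypothetical per-frame ground truth for DMS/OMS training data.
@dataclass
class FrameGroundTruth:
    frame_id: int
    gaze_direction: tuple      # unit vector (x, y, z) in camera coordinates
    head_pose: tuple           # (yaw, pitch, roll) in degrees
    body_keypoints: dict = field(default_factory=dict)  # joint -> (x, y) pixels
    seatbelt_fastened: bool = True

gt = FrameGroundTruth(
    frame_id=42,
    gaze_direction=(0.0, -0.1, 0.99),   # roughly straight ahead
    head_pose=(5.0, -2.0, 0.0),
    body_keypoints={"left_wrist": (412, 388), "right_wrist": (655, 402)},
)
print(gt.frame_id, gt.seatbelt_fastened)
```

Labels like exact gaze vectors are expensive and error-prone to produce for real footage, which is one reason synthetic data — where such values are known by construction — is attractive for these systems.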

About Anyverse™

Anyverse™ helps you continuously improve your deep learning perception models to reduce your system’s time to market by applying new Software 2.0 processes. Our synthetic data production platform allows us to provide high-fidelity, accurate, and balanced datasets. Along with a data-driven iterative process, we can help you reach the required model performance.

With Anyverse™, you can accurately simulate any camera sensor and decide which one will perform better with your perception system. No more complex and expensive experiments with real devices, thanks to our state-of-the-art photometric pipeline.

Need to know more?

Visit our website, anyverse.ai, anytime, or our LinkedIn, Instagram, and Twitter profiles.

Looking to start your Synthetic Data journey or need help with your current project? We'd love to know more.

Looking for the right synthetic data to speed up your system? Please, enter the Anyverse now.

Client Story

Would you like to know how Cron AI has improved LiDAR simulation accuracy with physically correct synthetic data?

Let's talk about synthetic data!
