Just like on a spooky Halloween night, anything sudden and unexpected can happen during a car trip… so you'd better make sure your in-cabin monitoring system is well trained, right?
Driver monitoring, occupant monitoring, autonomous driving, and other deep-learning-based visual systems are critical use cases in which the safety of the occupants is at stake. If the system fails or malfunctions because it is not precise enough, the consequence will not be a pumpkin misclassified as a decorative element… it could be a serious risk to the occupants' safety.
There are many types of anomalies (extreme lighting conditions, interactions between objects inside and outside the cabin…) that can lead the autonomous system's deep learning model to misinterpret what is happening inside the car, such as the famous "corner cases" or the false positives we will talk about below.
Corner cases: when things are not the way they seem
In the field of in-cabin monitoring technology, a corner case is simply a situation that is unlikely to occur but, if it does, poses a serious risk to people's safety.
Synthetic data can be the solution to prevent your system from being "tricked" by these cases. It allows you to simulate any situation, no matter how unlikely: any object, car interior, occupant pose or behavior, ethnicity… As we have explained in other articles, the advantages of generating synthetic data over gathering real-world data are many: it avoids privacy issues, achieves sufficient data variability, and saves development teams a great deal of time and resources.
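To make "data variability" concrete: a synthetic-data pipeline can enumerate every combination of scene parameters and render a labeled image for each. The sketch below uses entirely hypothetical parameter names (it is not Anyverse's actual API) just to show the idea:

```python
from itertools import product

# Hypothetical scene parameters; a real pipeline would expose many more
# (camera pose, occupant ethnicity and age, weather, accessories, etc.).
lighting = ["daylight", "night", "blinding_headlights", "blackout"]
occupant = ["adult_driver", "child_passenger", "masked_driver"]
accessory = ["none", "magazine_with_face", "halloween_mask"]

# Every combination becomes one scene to render, with labels for free.
scenes = [
    {"lighting": l, "occupant": o, "accessory": a}
    for l, o, a in product(lighting, occupant, accessory)
]

print(len(scenes))  # 4 * 3 * 3 = 36 distinct scene configurations
```

Even this toy sweep yields 36 scenes; with realistic parameter lists the count explodes combinatorially, which is exactly the coverage that is impractical to capture with real-world recordings.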
But let's up the ante. Your interior monitoring system is going to face many situations that will truly challenge its sensors and the way they perceive and interpret the in-cabin scene.
These range from external agents that affect the state of the cabin, such as extreme lighting (blinding headlights, total darkness during a blackout…) and low or zero visibility (torrential rain, a sandstorm…), to internal ones: a magazine with a face on the cover, a driver wearing a Frankenstein mask, the reflection of a billboard in the car window…
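One cheap way to probe a model against the external factors above is to perturb test images with crude lighting effects before inference. A minimal sketch in plain NumPy (a stand-in for proper sensor simulation, not a substitute for it):

```python
import numpy as np

def simulate_lighting(image, mode):
    """Apply a crude lighting perturbation to an RGB image with values in [0, 1]."""
    if mode == "blinding":   # global overexposure, as from oncoming headlights
        return np.clip(image * 2.5, 0.0, 1.0)
    if mode == "blackout":   # near-total darkness
        return np.clip(image * 0.05, 0.0, 1.0)
    return image             # unchanged baseline

frame = np.full((4, 4, 3), 0.5)              # dummy mid-gray frame
bright = simulate_lighting(frame, "blinding")
dark = simulate_lighting(frame, "blackout")
print(bright.max(), dark.max())              # 1.0 0.025
```

A physically based simulation of the actual sensor goes much further than scaling pixel values, of course, but even perturbations this simple often expose models that were trained only on well-lit data.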
Is your in-cabin monitoring system trustworthy enough?
Let's do a brief exercise for a second. Considering the data with which you are currently developing your in-cabin DL model: would it be able to correctly detect and interpret any of the "false positives" we talked about in the previous section of this article? What about predicting the behavior of a driver wearing a Frankenstein mask? Could it spot a boy dressed as a ghost who is not wearing a seat belt?
Now you're probably thinking… ok, but I'm not developing a system to detect Halloween creatures one night a year… and you're absolutely right!
The point of this article is to highlight that in a technology that involves people's safety, the accuracy of the perception system is a critical issue. You want to be sure that your system ensures occupant safety and performs reliably no matter what situation it faces.
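A concrete way to check "performs reliably no matter what" is to break evaluation accuracy down by condition, so that failures on rare slices (masks, costumes, extreme lighting) are not hidden behind a high overall average. A minimal sketch with made-up toy results:

```python
from collections import defaultdict

# Each record: (condition, ground_truth, model_prediction) — toy data for illustration.
results = [
    ("normal", "belted", "belted"),
    ("normal", "unbelted", "unbelted"),
    ("ghost_costume", "unbelted", "belted"),   # costume hides the belt state
    ("frankenstein_mask", "drowsy", "alert"),  # mask defeats face analysis
]

correct = defaultdict(int)
total = defaultdict(int)
for condition, truth, pred in results:
    total[condition] += 1
    correct[condition] += int(truth == pred)

for condition in sorted(total):
    print(condition, correct[condition] / total[condition])
```

Here the overall accuracy is 50%, but the per-condition breakdown shows the model is perfect on normal frames and fails on every corner-case slice, which is precisely the kind of gap synthetic corner-case data is meant to close.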
Don’t let your in-cabin monitoring system AI be tricked
As we have seen, many factors can interfere with the way the sensor perceives the scene around it and with how the system's artificial intelligence subsequently interprets it.
If your in-cabin monitoring system has been developed with data generated to show the world exactly the way your specific sensor sees it, and you add near-infinite scene variability… your system will be robust and reliable in any situation (even on Halloween night!).
With Anyverse™, you can accurately simulate any camera sensor and decide which one will perform better with your perception system. Thanks to our state-of-the-art photometric pipeline, there is no need for complex and expensive experiments with real devices.