Anyverse™ is a new hyperspectral synthetic data platform designed to gather all the data developers need to build Euro NCAP-compliant in-cabin monitoring systems cost- and resource-efficiently, while also accelerating AV and ADAS development.
Real-world detail is infinite. When generating synthetic images that simulate cameras, we need to reproduce and capture as many details from a computer-generated 3D world as we would capture with real cameras in the real world. Don't forget that, at the end of the day, the perception systems will use real cameras (and other sensors). The details we need to generate more faithful images to feed our perception brain are what we call hyperspectral data.
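To make the idea concrete, here is a minimal sketch (not Anyverse's actual pipeline; all curves and numbers are illustrative assumptions) of what "hyperspectral" means for a simulated camera: the renderer carries radiance per wavelength, and each sensor channel, including the near-infrared band that in-cabin cameras rely on, gets its pixel value by integrating that spectrum against the channel's sensitivity curve.

```python
import numpy as np

# Wavelength grid in nanometers, visible light through near-infrared,
# sampled every 10 nm.
wavelengths = np.arange(400, 1001, 10)

# Toy spectral radiance arriving at one pixel (flat spectrum for the demo;
# a real renderer would compute this per pixel from lights and materials).
radiance = np.ones_like(wavelengths, dtype=float)

def gaussian_sensitivity(center, width):
    """Toy channel sensitivity modeled as a Gaussian over wavelength."""
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

# Hypothetical sensor channels; center/width values are made up.
channels = {
    "R": gaussian_sensitivity(600, 30),
    "G": gaussian_sensitivity(540, 30),
    "B": gaussian_sensitivity(460, 30),
    "NIR": gaussian_sensitivity(850, 50),  # band used by DMS cameras in low light
}

# Per-channel pixel response: integrate radiance * sensitivity over
# wavelength (Riemann sum with the 10 nm bin width).
pixel = {name: float(np.sum(radiance * sens) * 10.0)
         for name, sens in channels.items()}
print(pixel)
```

The point of the sketch is that an RGB-only renderer throws this per-wavelength information away up front, whereas a hyperspectral one keeps it until the simulated sensor integrates it, which is what lets the same scene be rendered for an RGB camera, a NIR camera, or any other spectral response.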
Traditionally (if we can use that term for technology as recent as in-cabin and driver monitoring systems), camera placement has been confined to above the dashboard and the center stack. But are these camera placements optimal? Can they faithfully monitor the other occupants as well, and not just the driver? Why stick to only these positions? Opening the door to simulation can help optimize the system without wasting budget, but let's start from the beginning.
Just like on a spooky Halloween night, anything sudden and unexpected can happen during a car trip… so better make sure your in-cabin monitoring system is well trained, right? Driver monitoring, occupant monitoring, autonomous driving, and other deep learning-based visual systems are critical use cases in which the safety of the occupants is at stake. (From "Trick or treat, don't let your in-cabin monitoring system AI be tricked!")
In this article we will try to answer several questions: why is the near infrared band key for (camera-based) in-cabin monitoring systems to perform well in low light? Why is simulating the NIR a challenge? What solutions have been used so far to simulate it? How does Anyverse simulate it?
Why should you seriously consider synthetic data to train and validate in-cabin monitoring systems? What are the advantages of synthetic data versus real-world data to train these systems? And why are many DMS/OMS developers already implementing synthetic data in their data generation pipelines?
There are several in-cabin monitoring use cases, each with different data needs, and therefore different data challenges you need to overcome to successfully train the deep learning models behind these systems.
Whether it's DMS, OMS, or any other interior camera system, acquiring data to develop in-cabin monitoring systems is challenging. But why is that? Why is acquiring real-world data particularly hard for the in-cabin monitoring use case?