The safety and robustness of autonomous vehicles remain major concerns for developers today: systems sometimes fail to detect obstacles and pedestrians, or malfunction because of false positives, corner cases, and other challenges. LiDAR technology has emerged as a potential solution to help fill this gap and make self-driving safer.
Compared to driver state monitoring systems (DMS), or even occupant monitoring systems (OMS), which have seen huge advances in safety regulation and technology in recent years, in-cabin monitoring for public transport is a relatively recent development, but it is closer than many might think.
Real-world detail is infinite. When generating synthetic images that simulate cameras, we need to reproduce as much detail from a computer-generated 3D world as a real camera would capture in the real world. Don’t forget that, at the end of the day, the perception systems will use real cameras (and other sensors). The detail we need to generate more faithful images to feed our perception brain is what we call hyperspectral data.
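To make the idea concrete, here is a minimal sketch of the core step: integrating per-pixel spectral radiance against a camera's spectral sensitivity curves to get the RGB values a specific real sensor would record. All names, shapes, and the Gaussian sensitivity curves are illustrative assumptions, not Anyverse's actual pipeline; a real simulation would use the measured curves of the target sensor.

```python
# Sketch: converting hyperspectral radiance into sensor-specific RGB.
# Everything here is illustrative, not a real camera model.
import numpy as np

# Wavelength samples (nm) covering the visible range, 10 nm apart.
wavelengths = np.arange(400, 701, 10)            # shape: (31,)

# Hypothetical per-pixel spectral radiance from the renderer:
# height x width x number-of-wavelength-samples.
radiance = np.random.rand(4, 4, wavelengths.size)

# Placeholder spectral sensitivities: one Gaussian per color channel.
def gaussian(center, width):
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

sensitivity = np.stack([
    gaussian(600, 40),   # red channel
    gaussian(540, 40),   # green channel
    gaussian(460, 40),   # blue channel
])                       # shape: (3, 31)

# Integrate radiance against each channel's sensitivity over wavelength.
# This is where spectral detail collapses into sensor-specific RGB;
# the factor 10 is the wavelength step of the Riemann sum (nm).
rgb = np.einsum('hwl,cl->hwc', radiance, sensitivity) * 10

print(rgb.shape)  # (4, 4, 3)
```

The point of keeping the data spectral until this last step is that the same rendered scene can be "re-photographed" through any sensor model simply by swapping in its sensitivity curves.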
Traditionally, if we can use that term for technology as recent as in-cabin and driver monitoring systems, cameras have been placed above the dashboard or on the center stack. But are these placements optimal? Can they faithfully monitor the other occupants as well, and not just the driver? Why stick to these positions alone? Opening the door to simulation can help optimize the system without wasting budget, but let’s start from the beginning.
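As a rough illustration of what such a simulation-driven placement study could look like, the toy sketch below scores candidate camera positions by the fraction of occupant keypoints that fall inside each camera's field of view. All positions, keypoints, and the field-of-view value are invented for illustration; a production study would run inside a full 3D cabin simulation with occlusion testing and rendered imagery.

```python
# Toy sketch: rank in-cabin camera placements by occupant keypoint coverage.
# Geometry and numbers are hypothetical, for illustration only.
import numpy as np

FOV_DEG = 100.0  # assumed full cone angle of the camera's field of view

# Hypothetical 3D keypoints (meters, cabin frame): heads and hands
# of a driver and a front passenger.
keypoints = np.array([
    [0.4, -0.3, 1.0], [0.4, 0.3, 1.0],   # heads
    [0.6, -0.4, 0.7], [0.6, 0.4, 0.7],   # hands
])

# Candidate camera positions, each with an optical axis pointing
# back into the cabin.
candidates = {
    "dashboard":    (np.array([1.2, 0.0, 0.9]), np.array([-1.0, 0.0, 0.0])),
    "center_stack": (np.array([1.1, 0.0, 0.6]), np.array([-1.0, 0.0, 0.2])),
    "rearview":     (np.array([1.0, 0.0, 1.3]), np.array([-1.0, 0.0, -0.3])),
}

def visible_fraction(position, forward):
    """Fraction of keypoints within FOV_DEG/2 of the optical axis."""
    forward = forward / np.linalg.norm(forward)
    rays = keypoints - position
    rays /= np.linalg.norm(rays, axis=1, keepdims=True)
    angles = np.degrees(np.arccos(np.clip(rays @ forward, -1.0, 1.0)))
    return np.mean(angles <= FOV_DEG / 2)

for name, (pos, fwd) in candidates.items():
    print(f"{name}: {visible_fraction(pos, fwd):.0%} of keypoints in view")
```

Even this crude geometric score makes the trade-off measurable: each candidate placement gets a number before any hardware is bought or any prototype cabin is instrumented.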