How to train an accurate and reliable in-cabin monitoring system


Interior sensing developers are working against the clock to train, test, and validate their in-cabin monitoring systems in order to provide automakers with reliable, accurate systems that comply with the new EU and US regulations just around the corner.

But, as you can guess, crafting data to train in-cabin monitoring systems is not a simple A-to-B trip. Developers face several obstacles: inaccurate or low-quality data, not enough real-world data, a lack of variability that leads to biased systems, privacy issues, children's rights, and more that you can't even imagine.

The automotive interior sensing market has been demanding technology capable of covering this wide range of potential issues in a cost- and time-efficient manner. Anyverse™ has responded with a dedicated interior simulation solution built on its top-performing synthetic data generation platform.

But… let’s start from the beginning.

The need for interior monitoring systems: new regulations

At this stage of the game, it's no longer news to anyone that automotive interior sensing is expected to experience substantial growth in the next five years, with driver monitoring systems (DMS) and occupant detection systems (ODS) soon becoming mandatory across the EU.

“It’s now well accepted that camera-based DMS is the most appropriate way to directly track driver drowsiness and distraction and perform safe, vehicle-initiated handover in semi-autonomous cars,” concluded ABI Research analysts in a recent study of driver and in-cabin monitoring systems.

The EU mandates driver monitoring systems from 2022

Europe has taken the lead: the Euro NCAP 2025 roadmap for introducing driver monitoring systems is clear. It started in 2020, and its purpose is to mitigate the very serious problem of driver distraction and impairment through alcohol, fatigue, and similar causes. From 2022, the priority shifts to child presence detection, which can detect a child left alone in a car and alert the owner and/or emergency services to prevent heatstroke fatalities.


Euro NCAP has also announced that it will require driver monitoring systems for five-star safety ratings, and the European General Safety Regulation mandates the technology for all new cars, vans, trucks, and buses from 2024.

The US is in the same boat

The trend extends globally as the US focuses on a road safety agenda to address the risks posed by emerging semi- and fully automated vehicle technologies. The National Transportation Safety Board (NTSB) has also recommended the use of DMS as an effective means of keeping drivers engaged in Level 2 vehicles.

“As vehicles become more automated and until they are capable of handling the driving task 100% of the time, there will always be a requirement for the vehicle to initiate handover back to the driver,” explained Seeing Machines CEO, Paul McGlone. “In order for that handover to be conducted effectively, the vehicle must be able to register the attention state of the driver and react accordingly.”

In addition, in the US, members of the Alliance of Automobile Manufacturers and the Association of Global Automakers, which together account for nearly 100% of US light-vehicle sales, have voluntarily agreed to make child detection a standard feature by 2025.


What are the applications for in-cabin monitoring?

- Driver sensing for autonomous drive handover
- Body, head, and face monitoring
- Active safety systems for NCAP and legal requirements
- Object detection

The rise of interior monitoring for ADAS and autonomous driving

Driver monitoring experts have no doubt about the near future: they expect advanced in-cabin monitoring technologies to follow the lead of ADAS and support the fusion of various technologies to provide an even richer understanding of the environment within the vehicle cabin.

The advent of robotaxis is a good example of how these systems could become essential. ABI Research describes occupant monitoring systems as “critical for fully autonomous ride-sharing vehicles” thanks to their ability to identify inappropriate conduct by users, personal objects left in the vehicle, or situations that require intervention, such as health emergencies or spillages.

We are moving towards self-driving vehicles in which drivers become occupants, which means driver monitoring systems and occupant monitoring systems will merge, giving rise to full in-cabin monitoring systems.

Camera-based driver monitoring systems

We have just seen some of the many applications for interior monitoring and how closely it is tied to the autonomous driving future. But what should the sensor stack configuration look like to optimize the performance and output of these systems?

At this phase of the in-cabin monitoring technology development journey, a camera-based driver monitoring system combined with deep learning (to ‘learn’ as many traits of driver distraction and occupant behavior as possible) is the only in-cabin, high-performance technology focused directly on the driver and occupants that can provide the system with the critical information it needs.
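As a rough illustration of what such a camera-plus-deep-learning setup involves, here is a minimal training sketch (generic PyTorch, not Anyverse™ code); the dataset layout, class names, and hyperparameters are assumptions made for the example:

```python
# Minimal sketch (not Anyverse code): fine-tuning an off-the-shelf CNN backbone
# to classify driver state from cabin camera frames. Dataset paths and class
# names are hypothetical placeholders.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

CLASSES = ["attentive", "distracted", "drowsy"]  # assumed label set

# Standard preprocessing for an ImageNet-pretrained backbone
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Assumes a folder-per-class layout, e.g. data/train/distracted/*.png
train_set = datasets.ImageFolder("data/train", transform=preprocess)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True)

# Replace the classification head with one sized for the driver-state classes
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in train_loader:  # one epoch shown for brevity
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```

Production systems go well beyond single-frame classification, tracking gaze, head pose, and eye closure over time, but the same supervised training loop on labeled cabin imagery underlies them.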

Now you know the basics and a bit more about why interior monitoring has become a trending topic and why finding the best technology to develop these systems is absolutely critical for developers.

So let's talk about specific technology to train, test, and validate camera-based in-cabin monitoring systems!

Why does developing accurate in-cabin monitoring systems require specific data generation technology?

Developers need data, lots of data, to train and validate their deep learning perception systems, and finding the right data is definitely not an easy task. Real-world data is limited, expensive, and time-consuming to get, curate, and label, and it may not even be completely accurate… especially when we talk about interior monitoring.

There are several challenges developers need to overcome in order to successfully launch their interior monitoring systems, and Anyverse™'s synthetic dataset generation solution can truly help:

Privacy is an important issue when gathering data from the driver and other occupants, and it reaches a whole new level when children are involved… It greatly limits your ability to collect data from the real world. Would you “give” your baby to some development team for testing? The answer is pretty obvious.

With Anyverse™ you can reproduce in-cabin scenes that are not viable in the real world due to data protection regulations, children's rights, or safety issues: a child left behind on the back seat, driver distraction, drowsiness, microsleeps, an activated airbag system, pets, or any other potentially hazardous situation.

Developers need a vast amount of data to train and validate their driver monitoring systems with a guarantee of success, and real-world data is limited, expensive, and time-consuming to get, curate, and label.

Anyverse™ provides synthetic data with pixel-accurate ground truth: generating as many labeled images as you need at a fraction of the time and cost it takes to get, curate, and manually label real-world images.
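To make “pixel-accurate ground truth” concrete, the sketch below loads a synthetic frame together with hypothetical companion annotation files; the file naming and JSON schema are illustrative assumptions, not Anyverse™'s actual export format:

```python
# Illustrative sketch only: the annotation schema below is hypothetical. It
# shows how pixel-accurate ground truth (per-image metadata plus a segmentation
# mask) can be loaded alongside a synthetic frame for training or validation.
import json
from pathlib import Path

import numpy as np
from PIL import Image


def load_sample(image_path: Path):
    """Load a synthetic frame with its ground-truth annotations."""
    image = np.asarray(Image.open(image_path).convert("RGB"))

    # Assumed companion files: <name>.json for metadata, <name>_seg.png for a
    # per-pixel class-id mask rendered alongside the RGB image.
    meta = json.loads(image_path.with_suffix(".json").read_text())
    seg_mask = np.asarray(Image.open(image_path.with_name(image_path.stem + "_seg.png")))

    labels = {
        "driver_state": meta["driver"]["state"],      # e.g. "drowsy"
        "occupants": len(meta["occupants"]),          # occupant count
        "child_present": meta.get("child_present", False),
        "segmentation": seg_mask,                     # pixel-accurate mask
    }
    return image, labels


image, labels = load_sample(Path("dataset/frame_000123.png"))
print(labels["driver_state"], labels["occupants"])
```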

Driver and occupant age and ethnicity, the environment (including the car), time of day, illumination, and so on: developers need enough variability and close-to-reality accuracy to train their monitoring system so it can generalize to the real world it will face in production.

With Anyverse™ you can programmatically control your scenes and automatically generate thousands of data variations, adding variability to the interior: different cars and manufacturers, materials, colors, textures, etc. Generate all the images your system needs with almost infinite variability: countless drivers and occupants with multiple poses, behaviors, and interactions inside the cabin.
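As an illustration of what programmatic scene control can look like, here is a minimal generic Python sketch (it does not use the Anyverse™ API; every parameter name and value is an assumption for the example) that enumerates combinations of cabin parameters into scene variants to be rendered:

```python
# Minimal sketch of procedural scene variation: enumerating combinations of
# cabin parameters to define many scene variants before rendering. All
# parameter values are illustrative placeholders.
import itertools
import random

car_models   = ["sedan_a", "suv_b", "van_c"]
occupants    = ["adult_driver", "adult_driver+child_rear", "driver+passenger"]
driver_poses = ["hands_on_wheel", "phone_in_hand", "head_turned", "eyes_closed"]
lighting     = ["noon_sun", "overcast", "night_ir", "tunnel"]
camera_pos   = ["rearview_mirror", "a_pillar"]

scene_variants = [
    {
        "car": car,
        "occupants": occ,
        "driver_pose": pose,
        "lighting": light,
        "camera": cam,
        # Small random jitters add continuous variability on top of the grid
        "seat_position_cm": round(random.uniform(-5.0, 5.0), 1),
        "camera_yaw_deg": round(random.uniform(-3.0, 3.0), 1),
    }
    for car, occ, pose, light, cam in itertools.product(
        car_models, occupants, driver_poses, lighting, camera_pos
    )
]

print(f"{len(scene_variants)} scene variants defined")  # 3*3*4*4*2 = 288 here
# Each variant dict would then be handed to the rendering pipeline to produce
# an image plus its ground-truth annotations.
```

Grid enumeration plus small random jitters is only one way to structure the variation; the point is that each variant is defined in code, so coverage of drivers, poses, lighting, and cameras can be audited and rebalanced at will.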

“Anyverse™ has developed specific technology for interior simulation, including wide variability in vehicles, people features, body poses, environment, lighting, and others. Interior monitoring developers can now procedurally generate thousands of images and metadata with ground truth data at a reasonable cost.”

Technology to develop a reliable in-cabin monitoring system

Developing new technology always requires accuracy, but when we talk about technology that improves human safety, we need to pay special attention to every single detail, because it can make a critical difference. This is exactly the case when developing a reliable interior monitoring system, and this is why gathering the right data to train, test, and validate these systems is absolutely vital. How do you want your system to interpret the in-cabin world?

Save time and costs: simulate your sensors!

Physics-based sensor simulation to train, test, and validate computer perception deep learning models

About Anyverse™

Anyverse™ helps you continuously improve your deep learning perception models and reduce your system's time to market by applying a new software 2.0 process. Our synthetic data generation platform provides high-fidelity, accurate, and balanced datasets. Together with a data-driven iterative process, we help you achieve the model performance you need.

With Anyverse™, you can accurately simulate any camera sensor and decide which one will perform better with your perception system. Thanks to its state-of-the-art photometric pipeline, complex and expensive experiments with real devices are no longer necessary.

Do you want to know more?

Visit our website anyverse.ai anytime, or follow our LinkedIn, Facebook, and Twitter social media profiles.


Let's talk about synthetic data!