Are You Ready for Autonomous Vehicles? #NotACornerCase


Corner case or not a corner case?

Blink into the future and you will see driverless cars, motorcycles, and trucks, tons of them, entering and leaving cities and small towns, speeding on the highway. Now blink back to reality, a reality made of complex driving situations all around the globe. Various factors contribute to the complexity of being on the road, such as weather and lighting conditions, human unpredictability, or changes in common scenarios such as broken traffic lights or animals crossing the street. Some would argue that these so-called "corner cases" are low-probability scenarios, but in fact, even if you don't see them constantly, you probably encounter them every other day without even realizing it. They are #NotACornerCase!

Imagine all the data

From ADAS/Driver Assistance Level to Full Automation, the automotive industry is buckling down for the future of robotics on the road. Massive amounts of real-world data are being captured, meticulously tagged, and used to train machine learning algorithms as part of the perception process. However, no matter how monumental company efforts are, real-world data is just not enough. It simply cannot cover all possible scenarios, and this is where synthetic data comes into play! But not just any synthetic data. Data needs to be photorealistic, specific, scalable, rich in variations, and bundled with metadata so you can just "plug it into" your ML pipeline. It has to be true to reality. It has to be Anyverse!
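As a purely hypothetical illustration of what "plug-in" metadata can look like, the sketch below parses a per-frame annotation record into training labels. The schema (field names like `frame`, `weather`, `objects`) is an assumption for the example, not Anyverse's actual format:

```python
import json

# Hypothetical per-frame annotation, as a synthetic-data platform might emit it.
# Every field here is illustrative, not a real Anyverse schema.
record = json.loads("""
{
  "frame": "scene_0001/cam_front/000042.png",
  "weather": "heavy_rain",
  "objects": [
    {"class": "pedestrian", "bbox": [412, 188, 460, 310], "occluded": false},
    {"class": "tuk_tuk",    "bbox": [120,  95, 290, 240], "occluded": true}
  ]
}
""")

# Flatten into (class, bbox) training labels, skipping occluded objects.
labels = [(o["class"], o["bbox"]) for o in record["objects"] if not o["occluded"]]
print(labels)  # [('pedestrian', [412, 188, 460, 310])]
```

Because the ground truth is generated alongside the image, there is no manual tagging step: the labels above are exact by construction.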

Synthetic data is the solution to the loopholes in AV perception training and testing

Here are some examples why:

Eyes on the road

The roads, with all their elements such as lanes, traffic signs, street and traffic lights, and other vehicles, are tricky even for experienced human drivers. But what happens when an ADAS lane-keeping system does not recognize sand or ice on the lane lines?

The road is full of obstacles and challenges such as low-visibility turns, vandalized traffic signs, less common vehicles such as the famous tuk-tuk, missing traffic signs, messy construction sites. The list goes on and on. So the question is – how can you ensure your driverless vehicle is prepared for all the tricky scenarios?

Eyes off the road

Sometimes off-road elements affect safety even more than road-related elements. And we don't mean just what's on the sidewalk or nearby buildings. Just think of Mother Nature! Weather conditions such as snow, rain, and fog can impair visibility and consequently car control. What happens when sun reflections on windows or wet surfaces blind you? Or heavy rain prevents your autonomous vehicle from measuring car distances properly? Even in plain daylight, sun glare can cause trouble on the road.

With Anyverse you can mirror reality and produce synthetic data that is physically correct, no tricks applied. Furthermore, it comes with serious sensor simulation capabilities and numerous lens effects such as scatter, distortion, and dirt.
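To give a feel for what a lens-distortion effect does to an image, here is a minimal, dependency-free sketch of a first-order radial (barrel) distortion. The function name and the single-coefficient model are illustrative only, not Anyverse's actual photometric pipeline, which models such effects physically:

```python
def barrel_distort(image, k1=0.2):
    """Apply a first-order radial distortion to a 2D list-of-lists image.

    For each output pixel we sample the source at a radius scaled by
    (1 + k1 * r^2); k1 > 0 bows straight lines outward like a wide-angle
    lens. Nearest-neighbour sampling keeps the sketch dependency-free.
    """
    h, w = len(image), len(image[0])
    cy, cx = (h - 1) / 2, (w - 1) / 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Normalised coordinates, roughly in [-1, 1].
            xn, yn = (x - cx) / cx, (y - cy) / cy
            r2 = xn * xn + yn * yn
            f = 1.0 + k1 * r2
            # Sample the source pixel, clamped to the image bounds.
            sx = min(max(int(round(xn * f * cx + cx)), 0), w - 1)
            sy = min(max(int(round(yn * f * cy + cy)), 0), h - 1)
            out[y][x] = image[sy][sx]
    return out
```

Real perception pipelines apply effects like this (plus scatter and dirt) so that a model trained on clean renders does not fall apart on footage from an actual camera.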

Beware: humans

Humans… Humans everywhere! We all know a driverless vehicle does not mean a humanless world. People will make sure to get in the way and make the “life” of self-driving cars somewhat more complicated. No doubt there will be kids playing on the sidewalk, oblivious jay-walkers, protesters or a flashmob blocking the way. Because humans 🙂

Don't cut corners

We can conclude with certainty that most low-probability scenarios for some are everyday happenings for others. Life is unpredictable as it is, so what is to be expected of an autonomous vehicle?

Truth is, weather and lighting peculiarities alone are a serious enough challenge to the driverless world, and they are by no means corner cases.

To stay ahead of the game, you can start preparing for all possible scenarios by including specific synthetic data in your machine learning training. With Anyverse you can create any scene you like, test it, and track improving fidelity levels or spot remaining gaps. Stay tuned for some awesome scenes we've prepared to help you raise the bar.

Coming soon...

Save time and costs: simulate your sensors!

Physics-based sensor simulation to train, test, and validate your computer perception deep learning models.

About Anyverse™

Anyverse™ helps you continuously improve your deep learning perception models by applying a new software 2.0 process, reducing your system's time to market. Our synthetic data generation platform delivers high-fidelity, accurate, and balanced datasets. Combined with a data-driven iterative process, it helps you achieve the model performance you need.

With Anyverse™ you can accurately simulate any camera sensor and determine which one performs better with your perception system. Thanks to our state-of-the-art photometric pipeline, complex and expensive experiments with real devices are no longer necessary.

Do you want to know more?

Visit our booth during the event, find us anytime at anyverse.ai, or reach out on our LinkedIn, Facebook, and Twitter social media profiles.


Let's talk about synthetic data!