Is your autonomous vehicle ready? #NotACornerCase


Corner case or not a corner case?

Blink into the future and you will see driverless cars, motorcycles, and trucks, tons of them, entering and leaving cities and small towns, speeding down the highway. Now blink back to reality, a reality made of complex driving situations all around the globe. Many factors contribute to the complexity of being on the road, such as weather and lighting conditions, human unpredictability, or changes in common scenarios like broken traffic lights or animals crossing the street. Some would argue that these so-called “corner cases” are low-probability scenarios, but even if you don’t see them constantly, you probably encounter them every other day without realizing it. They are #NotACornerCase!

Imagine all the data

From ADAS/driver assistance levels to full automation, the automotive industry is buckling down for the future of robotics on the road. Massive amounts of real-world data are being captured, meticulously tagged, and used to train machine learning algorithms as part of the perception process. However, no matter how monumental a company’s efforts are, real-world data is just not enough. It simply cannot cover all possible scenarios, and this is where synthetic data comes into play! But not just any synthetic data. The data needs to be photorealistic, specific, scalable, rich in variations, and delivered with metadata built in, so you can just “plug it into” your ML pipeline. It has to be true to reality. It has to be Anyverse!
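To make the “plug it into your ML pipeline” point concrete, here is a minimal sketch of what consuming such a dataset could look like, assuming a hypothetical layout where every rendered frame ships with a JSON metadata sidecar. The file naming, the JSON fields, and the SyntheticDrivingDataset class are illustrative assumptions, not Anyverse’s actual format:

import json
from pathlib import Path

import torch
from torch.utils.data import Dataset
from torchvision.io import read_image

class SyntheticDrivingDataset(Dataset):
    """Pairs each rendered frame with its ground-truth metadata (hypothetical layout)."""

    def __init__(self, root: str):
        # Expects frame_0001.png next to frame_0001.json, one sidecar per frame
        self.frames = sorted(Path(root).glob("*.png"))

    def __len__(self):
        return len(self.frames)

    def __getitem__(self, idx):
        frame = self.frames[idx]
        image = read_image(str(frame)).float() / 255.0
        meta = json.loads(frame.with_suffix(".json").read_text())
        # Ground-truth boxes come for free with synthetic data: no manual labeling pass
        boxes = torch.tensor([obj["bbox"] for obj in meta["objects"]], dtype=torch.float32)
        labels = torch.tensor([obj["class_id"] for obj in meta["objects"]])
        return image, {"boxes": boxes, "labels": labels}

From there the dataset drops straight into a standard DataLoader and whatever detection model you already train on real data.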

Synthetic data is the solution to the gaps in AV perception training and testing

Here are some examples why:

Eyes on the road

Roads, with all their elements such as lanes, traffic signs, street lights, traffic lights, and other vehicles, are tricky even for experienced human drivers. But what happens when the ADAS lane-keeping system does not recognize lane lines covered in sand or ice?

The road is full of obstacles and challenges: low-visibility turns, vandalized or missing traffic signs, less common vehicles such as the famous tuk-tuk, messy construction sites. The list goes on and on. So the question is: how can you ensure your driverless vehicle is prepared for all these tricky scenarios?

Eyes off the road

Sometimes off-road elements affect safety even more than road-related ones. And we don’t mean just what’s on the sidewalk or in nearby buildings. Just think of Mother Nature! Weather conditions such as snow, rain, and fog can impair visibility and, consequently, car control. What happens when sun reflections on windows or wet surfaces blind you? Or when heavy rain prevents your autonomous vehicle from measuring distances to other cars properly? Even in plain daylight, sun glare can cause trouble on the road.

With Anyverse you can mirror reality and produce synthetic data that is physically correct, no tricks applied. Furthermore, it comes equipped with serious sensor simulation capabilities and numerous lens effects such as scatter, distortion, and dirt.
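As a rough illustration of one such lens effect, here is a minimal sketch, not Anyverse’s actual pipeline, of applying a simple Brown-Conrady radial distortion to a rendered frame with NumPy and OpenCV. The function name and the k1/k2 coefficients are illustrative assumptions:

import cv2
import numpy as np

def apply_radial_distortion(image, k1=-0.2, k2=0.05):
    """Warp an image with a simple Brown-Conrady radial distortion model (illustrative)."""
    h, w = image.shape[:2]
    cx, cy = w / 2.0, h / 2.0
    # Normalized pixel coordinates relative to the image center
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    x = (xs - cx) / cx
    y = (ys - cy) / cy
    r2 = x ** 2 + y ** 2
    factor = 1 + k1 * r2 + k2 * r2 ** 2
    # remap pulls pixels: map each output pixel back to its source location
    map_x = (x * factor * cx + cx).astype(np.float32)
    map_y = (y * factor * cy + cy).astype(np.float32)
    return cv2.remap(image, map_x, map_y, interpolation=cv2.INTER_LINEAR)

distorted = apply_radial_distortion(cv2.imread("synthetic_frame.png"))
cv2.imwrite("synthetic_frame_distorted.png", distorted)

Training on frames warped this way can help a perception model tolerate the barrel or pincushion distortion a real wide-angle lens would introduce.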

Beware: humans

Humans… Humans everywhere! We all know a driverless vehicle does not mean a humanless world. People will make sure to get in the way and make the “life” of self-driving cars somewhat more complicated. No doubt there will be kids playing on the sidewalk, oblivious jaywalkers, protesters, or a flash mob blocking the way. Because humans 🙂

Don't cut corners

We can conclude with certainty that what are low-probability scenarios for some are everyday happenings for others. Life is unpredictable as it is, so what is to be expected of an autonomous vehicle?

Truth is, weather and lighting peculiarities alone are a serious enough challenge to the driverless world, and they are by no means corner cases.

To stay ahead of the game, you can start preparing for all possible scenarios by including specific synthetic data in your machine learning training. With Anyverse you can generate any scene you like, test against it, and see where fidelity improves or where gaps remain. Stay tuned for some awesome scenes we’ve prepared to help you raise the bar.

Coming soon...

Save time & costs - Simulate sensors!

Physically-based sensor simulation to train, test, and validate your computer perception deep learning model

About Anyverse™

Anyverse™ helps you continuously improve your deep learning perception models to reduce your system’s time to market by applying new software 2.0 processes. Our synthetic data production platform allows us to provide high-fidelity, accurate, and balanced datasets. Combined with a data-driven iterative process, we can help you reach the required model performance.

With Anyverse™ you can accurately simulate any camera sensor and decide which one will perform best with your perception system. No more complex and expensive experiments with real devices, thanks to our state-of-the-art photometric pipeline.

Need to know more?

Visit our booth during the event, check out our website anyverse.ai anytime, or follow us on LinkedIn, Facebook, and Twitter.

