Digital simulation is steadily going mainstream as a way to train and enhance machine learning models. Real-world data is, of course, real, but it is also expensive, time-consuming to label, and statistically biased, typically lacking corner cases and low-frequency events. Human drivers, in fact, are rarely trained to handle dangerous or unexpected events precisely because those events are statistically improbable: the more improbable the event, the higher the risk of not handling it properly. For self-driving vehicles, or any other autonomous system, this trade-off is unacceptable, as people show extremely low tolerance for mistakes made by artificial intelligence.
It is becoming evident that the robustness and safety levels required of AI cannot be achieved through real-world data alone.
There is, however, a question of how much accuracy a synthetic dataset needs in order to reproduce the real world with low bias and enough variability to boost AI confidence levels. Video games have proven to be a workable option for AI training, as their visual quality has reached high levels of realism in recent years. Other approaches use game development platforms such as Unreal and Unity. These can produce visually appealing images and sequences, but they typically sacrifice physical accuracy in favor of real-time performance, a must for a video game.
At ANYVERSE, we argue that physical correctness is essential to avoid biased learning. Think of situations where complex lighting becomes a key factor in driving. Physically-based light transport simulation is computationally expensive and not supported by current video game technologies.
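To make "physically-based light transport" concrete: what a physically correct renderer must estimate is the rendering equation, which balances emitted and reflected light at every surface point. Writing it with an explicit wavelength term hints at why spectral simulation is so expensive:

```latex
L_o(x, \omega_o, \lambda) = L_e(x, \omega_o, \lambda)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o, \lambda)\,
    L_i(x, \omega_i, \lambda)\,(\omega_i \cdot n)\, d\omega_i
```

The hemispherical integral has no closed form for real scenes, so offline renderers estimate it with Monte Carlo sampling, while real-time game engines approximate or truncate it to hit frame-rate budgets.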
Strong sun glare from wet surfaces, car headlights scattered through fog or heavy rain, traffic lights diffracted by tiny water drops on the camera lens, traffic lights reflected in puddles or glass facades and blurred by motion: these are just a few examples of complex visual conditions where an AI can easily be fooled.
Anyverse uses a bidirectional, unbiased, spectral ray-tracing approach to produce high-fidelity images. Behind the jargon, this means that Anyverse comes very close to reproducing the physics of light accurately, well beyond what current video game platforms can do. How can this difference be measured in practice? While we are currently working on benchmarks, it is not difficult to imagine situations where complex reflections of traffic lights are missing because of rendering inaccuracy. That would leave the AI unable to assess the situation correctly, sharply increasing the risk of an accident. That is why we firmly believe that physics matters.
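To illustrate what "unbiased spectral" means in practice, here is a minimal sketch, not Anyverse code: the spectral curves below are invented for illustration. An unbiased Monte Carlo estimator samples wavelengths at random and weights each sample by the inverse of its sampling density, so the estimate converges to the true integral with no systematic error, only noise that shrinks as samples accumulate.

```python
import random

# Hypothetical smooth spectral curves over the visible range (380-780 nm).
# These are illustrative stand-ins, not measured data.
def illuminant(lam):
    # crude daylight-like ramp in relative power
    return 0.5 + 0.5 * (lam - 380.0) / (780.0 - 380.0)

def reflectance(lam):
    # reddish surface: reflects long wavelengths more strongly
    return ((lam - 380.0) / 400.0) ** 2

def radiance(lam):
    # spectral radiance reaching the sensor at wavelength lam
    return illuminant(lam) * reflectance(lam)

def spectral_integral_riemann(n=100_000):
    # dense midpoint quadrature: serves as ground truth for comparison
    lo, hi = 380.0, 780.0
    dx = (hi - lo) / n
    return sum(radiance(lo + (i + 0.5) * dx) for i in range(n)) * dx

def spectral_integral_mc(n, seed=0):
    # unbiased Monte Carlo estimator: uniform wavelength samples,
    # each weighted by 1/pdf = (hi - lo)
    rng = random.Random(seed)
    lo, hi = 380.0, 780.0
    total = sum(radiance(rng.uniform(lo, hi)) for _ in range(n))
    return (hi - lo) * total / n
```

The same principle, applied to whole light paths rather than single wavelengths, is what lets an unbiased ray tracer converge toward the physically correct image as the sample count grows, instead of converging toward a systematically wrong one.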