AI is permeating many aspects of our lives. Some of the AI-based systems around us require a faithful perception of the environment to help seamlessly with various tasks, such as the ADAS in your car or the computer vision systems that help autonomous robots organize a warehouse.
The deep learning models behind these perception systems learn about the real world by example, from vast amounts of training data covering specific environments and use cases, including corner cases. Real-world images alone can be difficult and expensive to obtain. That is why synthetically generated data is helping improve the performance and accuracy of perception systems.
Real-world detail is effectively infinite. When generating synthetic images that simulate cameras, we need to reproduce and capture as many details as possible from a computer-generated 3D world, just as we would capture them with real cameras in the real world. Don’t forget that, at the end of the day, the perception systems will rely on real cameras (and other sensors). The details we need in order to generate more faithful images to feed our perception "brain" are what we call hyperspectral data.
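As a rough illustration of the idea, a hyperspectral image can be thought of as a cube of per-pixel spectra, and a synthetic camera capture as the integration of each spectrum against the sensor's spectral response. The sketch below is a minimal, hypothetical example in NumPy: the cube values are random placeholders and the Gaussian response curves are invented for illustration, not calibrated data from any real sensor.

```python
import numpy as np

# Hypothetical hyperspectral "cube": height x width x spectral bands,
# e.g. radiance sampled at 31 wavelengths across 400-700 nm.
H, W, B = 4, 4, 31
wavelengths = np.linspace(400.0, 700.0, B)  # nm
cube = np.random.default_rng(0).random((H, W, B))  # placeholder radiance

# Illustrative (not physically calibrated) camera spectral responses:
# Gaussian sensitivity curves centered near blue, green, and red.
def gaussian(center_nm, width_nm=40.0):
    return np.exp(-0.5 * ((wavelengths - center_nm) / width_nm) ** 2)

response = np.stack([gaussian(c) for c in (450, 550, 610)], axis=1)  # (B, 3)

# Simulated RGB capture: integrate each pixel's spectrum against the
# sensor responses, then normalize into [0, 1].
rgb = cube @ response  # (H, W, 3)
rgb /= rgb.max()

print(rgb.shape)  # (4, 4, 3)
```

The same spectral cube could be "captured" by different simulated sensors simply by swapping in their response curves, which hints at why keeping hyperspectral detail in the synthetic world is more flexible than baking in a single camera's RGB output.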