"I need ground truth data."
"I need annotated datasets."
These are two of the requests we receive through our web forms, often in the same message and more often than you might imagine.
This is where data annotation enters the scene.
Seeking ground truth data generation
Developing autonomous transport systems with real-world datasets means those datasets need to be manually annotated so that the deep learning algorithms can recognize and give meaning to every object they contain.
Why is the method used to generate these annotations absolutely key to achieving (or failing to achieve) ground truth data?
But… what if I told you there was a third, error-free option?
Perfect pixel-level annotations
Leaving aside the fact that it is impossible to annotate real-world data at pixel level, why use an error-prone methodology to train systems that are meant to be more accurate than humans?

One way to go is to use deep neural networks that generate pixel-perfect annotations. The paradox is that these networks must themselves be trained on real-world data that has been manually annotated, leading to a chicken-and-egg situation: how accurate can an annotation network be if it is not trained on sufficiently accurate data?
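To make the accuracy gap concrete, here is a minimal, hypothetical Python sketch (plain NumPy, not Anyverse's pipeline) comparing a pixel-perfect ground truth mask, as a renderer that knows the scene geometry could emit it, against a coarser box-style manual annotation of the same object, measured with intersection over union (IoU).

```python
import numpy as np

def iou(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Intersection over union of two boolean masks."""
    intersection = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(intersection) / float(union) if union else 1.0

# Hypothetical 200x200 scene containing one circular object.
h, w = 200, 200
yy, xx = np.mgrid[0:h, 0:w]

# Pixel-perfect mask: every pixel of the circle, exactly as a renderer
# with full knowledge of the scene would label it.
perfect = (xx - 100) ** 2 + (yy - 100) ** 2 <= 60 ** 2

# Manual-style mask: a box a human annotator might draw around the
# same object, which misses the true silhouette at pixel level.
manual = (np.abs(xx - 100) <= 60) & (np.abs(yy - 100) <= 60)

print(f"IoU of manual annotation vs. pixel-perfect mask: {iou(perfect, manual):.3f}")
```

Even this generous approximation mislabels roughly a fifth of the pixels in the union (IoU ≈ 0.79), which is the kind of annotation noise an annotation network would inherit from its manually labeled training data.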
About Anyverse™
With Anyverse™, you can accurately simulate any camera sensor and decide which one will perform best with your perception system. No more complex and expensive experiments with real devices, thanks to our state-of-the-art photometric pipeline.
Need to know more?
Visit our website, anyverse.ai, anytime, or follow us on LinkedIn, Instagram, and Twitter.