Synthetic data to develop a trustworthy autonomous driving system | Chapter 12



Author
Hamid Serry, WMG Graduate Trainee, University of Warwick

In the last post, we discussed the issues associated with a low bounding box pixel size, and how we analyzed the KITTI dataset to derive reasonable minimum sizes for our virtual dataset generation.

We also investigated occlusion and the proportion of objects that were over 50% occluded. This week we are going into more detail about the steps after generating a dataset, namely how we plan to mix generated images and KITTI images in different proportions.

Addition of images

Adding Anyverse-generated images to the KITTI dataset during a training cycle would increase the overall number of images available to train the network. Unless the dataset is already saturated with data, a larger dataset will in most cases only benefit training. However, training on a larger dataset and then comparing it to the originally sized dataset would not be a fair comparison: as many variables as possible should stay the same when comparing network results, and this especially applies to the size of the datasets.

An advantage of adding extra images is that it supplements an existing dataset with extra data, diversifying it and potentially helping to prevent overfitting of the network. This would be a positive effect of using Anyverse as a supplementary service for neural network training, although merging the datasets may require some work to ensure all labels and formats are uniform. Confirming that the generated images can act as an additional source of labeled data for a dataset would prove valuable for the project, showcasing photorealistic generations and their uses.
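As a minimal sketch of what merging could look like (the directory layout, function name, and paths below are illustrative assumptions, not the project's actual tooling), combining the two sources can be as simple as copying images and KITTI-format label files into one training set, once the label formats have been made uniform:

```python
import shutil
from pathlib import Path

def merge_datasets(kitti_dir: Path, anyverse_dir: Path, out_dir: Path) -> None:
    """Copy images and KITTI-format label files from both sources into a
    single combined training set, prefixing file names to avoid collisions."""
    for prefix, src in (("kitti", kitti_dir), ("anyverse", anyverse_dir)):
        for subdir in ("image_2", "label_2"):  # standard KITTI training layout
            (out_dir / subdir).mkdir(parents=True, exist_ok=True)
            for f in sorted((src / subdir).iterdir()):
                shutil.copy(f, out_dir / subdir / f"{prefix}_{f.name}")

# Example usage (directory names are placeholders):
# merge_datasets(Path("kitti/training"),
#                Path("anyverse/training"),
#                Path("merged/training"))
```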

Replacement of images

An alternative way of using Anyverse’s dataset alongside the KITTI dataset would be to replace some of the KITTI images with generated images. Ideally, similar images (matched by the number of objects of each class in the image) would be swapped with each other, maintaining an even class distribution after the replacement operation. This, however, would require far more time and effort to achieve, well beyond the scope of this project.

A random shuffle is more practical in this case: it produces a roughly similar distribution, although not an exact match. For preliminary results on how replacing images affects neural network performance, this will be sufficient.

The replacement test will evaluate whether the performance of a KITTI-trained network can be matched, what causes any performance increase or decrease, and how the mix can be adjusted for a better outcome. At the time of writing, the plan is to train on three different KITTI:Anyverse mixes: 75:25, 50:50, and 25:75.
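Below is a minimal sketch of how such mixes could be assembled with a random shuffle; the function name, sample identifiers, and seed are assumptions for illustration rather than the project's actual scripts:

```python
import random

def mix_datasets(kitti_ids, anyverse_ids, kitti_fraction, seed=0):
    """Build a training list that keeps the original KITTI dataset size but
    randomly replaces (1 - kitti_fraction) of the samples with Anyverse ones."""
    rng = random.Random(seed)
    n_total = len(kitti_ids)                      # total size stays constant
    n_kitti = round(n_total * kitti_fraction)
    n_anyverse = n_total - n_kitti

    kept_kitti = rng.sample(kitti_ids, n_kitti)
    added_anyverse = rng.sample(anyverse_ids, min(n_anyverse, len(anyverse_ids)))

    mixed = kept_kitti + added_anyverse
    rng.shuffle(mixed)
    return mixed

# The three planned KITTI:Anyverse mixes (75:25, 50:50, 25:75):
# for frac in (0.75, 0.50, 0.25):
#     train_ids = mix_datasets(kitti_ids, anyverse_ids, kitti_fraction=frac)
```

Keeping the total number of training samples fixed is what makes the comparison against the purely KITTI-trained baseline fair, in line with the constraint on dataset size discussed above.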

Together with the purely KITTI-trained network and a fully Anyverse-trained network, these mixes will provide a rich comparison of the effects of using the generated dataset with object detection models.

Conclusion

As this is the penultimate week of the project, most of the remaining time will be spent evaluating the dataset that has been generated and finding the best ways of utilizing it in the training of neural networks.

We have looked into how we plan to merge the KITTI and Anyverse datasets to yield some comparisons, and at the advantages of the different mixing strategies. Next week we hope to share the results of this analysis and wrap up the project. Until then!


About Anyverse™

Anyverse™ helps you continuously improve your deep learning perception models to reduce your system’s time to market by applying new software 2.0 processes. Our synthetic data production platform allows us to provide high-fidelity, accurate, and balanced datasets. Along with a data-driven iterative process, we can help you reach the required model performance.

With Anyverse™, you can accurately simulate any camera sensor and decide which one will perform best with your perception system. No more complex and expensive experiments with real devices, thanks to our state-of-the-art photometric pipeline.

Need to know more?

Visit our website, anyverse.ai, anytime, or our LinkedIn, Instagram, and Twitter profiles.
