Synthetic data to develop a trustworthy autonomous driving system | Chapter 9

CHAPTER 9

Author
Javier Salado, Technical Product Manager, Anyverse

Over the past weeks, we’ve focused on getting our Faster R-CNN neural network ready to detect some of the object classes defined in the KITTI dataset. We trained the network on the real-world samples from the dataset to fine-tune its hyperparameters and ran some preliminary validation tests with the validation subset from our split of the dataset.
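As a minimal sketch of what that fine-tuning setup typically looks like (the chapter does not name the framework, so PyTorch/torchvision is assumed here; the three classes and the 80/20 split ratio are illustrative assumptions, not our exact configuration):

```python
import torch
from torch.utils.data import random_split
from torchvision.datasets import Kitti
from torchvision.models.detection import fasterrcnn_resnet50_fpn
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Illustrative subset of the KITTI classes we want to detect, plus background.
KITTI_CLASSES = ["Car", "Pedestrian", "Cyclist"]
num_classes = len(KITTI_CLASSES) + 1

# Start from pretrained weights and swap the box predictor for our class count.
model = fasterrcnn_resnet50_fpn(weights="DEFAULT")
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)

# Split the real KITTI training samples into train/validation subsets (80/20 as an example).
# Note: torchvision's Kitti targets still need converting to Faster R-CNN's boxes/labels format.
full_dataset = Kitti(root="data/kitti", train=True, download=False)
n_val = int(0.2 * len(full_dataset))
train_set, val_set = random_split(
    full_dataset, [len(full_dataset) - n_val, n_val],
    generator=torch.Generator().manual_seed(42),
)
```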

Now, during the last couple of weeks, we have been setting up the Anyverse Synthetic Data Platform to generate a synthetic dataset following the KITTI specs. In the coming days, the platform will be ready to generate a synthetic training dataset that we will use with our Faster R-CNN.

Synthetic datasets

The goal is to generate two synthetic datasets: one with Anyverse’s sensor simulation pipeline applied and one without it. This gives us two different types of images to train on and lets us compare results. We want to study how the use of synthetic data affects network performance and whether the type of synthetic data used for training has a significant impact on the network’s ability to generalize to real-world images.
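To make that comparison concrete, here is a hedged sketch of the evaluation we have in mind: run each trained network on the same real-world KITTI validation split and compare mean average precision. The use of torchmetrics and the model/loader names are assumptions for illustration only.

```python
import torch
from torchmetrics.detection import MeanAveragePrecision

@torch.no_grad()
def evaluate(model, val_loader, device="cuda"):
    """COCO-style mAP of `model` on the real-world KITTI validation loader."""
    metric = MeanAveragePrecision()
    model.eval().to(device)
    for images, targets in val_loader:
        images = [img.to(device) for img in images]
        preds = model(images)
        # MeanAveragePrecision expects lists of dicts with 'boxes', 'scores', 'labels'.
        preds = [{k: v.cpu() for k, v in p.items()} for p in preds]
        metric.update(preds, targets)
    return metric.compute()["map"]

# Same real validation data, two models -- one per synthetic flavor (names are placeholders):
# map_with_sim = evaluate(model_trained_with_sensor_sim, kitti_val_loader)
# map_no_sim   = evaluate(model_trained_without_sensor_sim, kitti_val_loader)
```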

Anyverse Synthetic Data Platform

With the Anyverse Synthetic Data Platform we can easily generate the data we need by defining a city scenario that we randomly populate with objects of the KITTI classes, plus others that enrich the dataset. Anyverse Studio is the tool we use to define the scenario, program its variability, and write the logic that fills the scenes with the objects of interest.

First, we needed to characterize the cameras used by KITTI and reproduce the camera rig on a virtual EgoVehicle in Anyverse Studio. The color cameras are:

2 × PointGray Flea2 color cameras (FL2-14S3C-C), 1.4 Megapixels, 1/2” Sony ICX267 CCD, global shutter, with Edmund Optics lenses, 4mm, opening angle ∼ 90°, vertical opening angle of the region of interest (ROI) ∼ 35°
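A quick way to sanity-check the virtual camera against this spec is to derive its field of view from the focal length and sensor size with a simple pinhole model. In the sketch below, the 1/2" active-area dimensions are nominal assumptions rather than values taken from the ICX267 datasheet, so the result is approximate.

```python
import math

def fov_deg(sensor_side_mm: float, focal_length_mm: float) -> float:
    """Pinhole field of view in degrees: fov = 2 * atan(side / (2 * f))."""
    return math.degrees(2.0 * math.atan(sensor_side_mm / (2.0 * focal_length_mm)))

FOCAL_MM = 4.0                        # Edmund Optics lens focal length from the KITTI spec
SENSOR_W_MM, SENSOR_H_MM = 6.4, 4.8   # nominal 1/2" format dimensions (an assumption)

print(f"horizontal FOV ~ {fov_deg(SENSOR_W_MM, FOCAL_MM):.1f} deg")
print(f"vertical FOV   ~ {fov_deg(SENSOR_H_MM, FOCAL_MM):.1f} deg")
```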

We defined both cameras in Anyverse Studio following the placement and distances described in the KITTI documentation; this is the result for our virtual EgoVehicle:

Image 1 – EgoVehicle and camera rig

Sensor simulation

We gathered all the available information about the Sony ICX267 sensor and characterized it in Anyverse Studio. Then we fine-tuned the ISP parameters to get images similar to the KITTI data we have.
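As a rough illustration of how one might quantify “similar” while tuning the ISP parameters, the sketch below compares simple per-channel image statistics between synthetic renders and real KITTI frames. The file paths and the metric are illustrative assumptions, not part of our actual Anyverse workflow.

```python
from pathlib import Path
import numpy as np
from PIL import Image

def channel_stats(folder: str, limit: int = 100):
    """Mean and std per RGB channel over up to `limit` PNG images in `folder`."""
    imgs = [np.asarray(Image.open(p).convert("RGB"), dtype=np.float32) / 255.0
            for p in sorted(Path(folder).glob("*.png"))[:limit]]
    pixels = np.concatenate([im.reshape(-1, 3) for im in imgs], axis=0)
    return pixels.mean(axis=0), pixels.std(axis=0)

real_mean, real_std = channel_stats("kitti/training/image_2")       # real KITTI left-color images
synth_mean, synth_std = channel_stats("anyverse/with_sensor_sim")   # hypothetical output folder
print("mean gap per channel:", np.abs(real_mean - synth_mean))
print("std  gap per channel:", np.abs(real_std - synth_std))
```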

Image 2 – Sensor and ISP configurations

Image 3 – Sensor simulation result

Scene variability programming

We continued with a 4-hour workshop to train the team on the use of Anyverse Studio. Then, with a bit of practice, we started programming the variability and population logic to get images as close as possible to the real-world KITTI samples. This is still a work in progress; a rough sketch of the kind of logic we mean follows below.
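The actual scripting happens in the Anyverse Studio script console. As a framework-agnostic sketch of the population logic, here is an example in plain Python; the class weights, placement ranges, and output structure are hypothetical and do not correspond to the real Anyverse Studio scripting API.

```python
import random

KITTI_CLASSES = ["Car", "Van", "Truck", "Pedestrian", "Cyclist", "Tram"]

def populate_scene(seed: int, max_objects: int = 15):
    """Randomly decide which objects appear in one scene and where to place them."""
    rng = random.Random(seed)
    placements = []
    for _ in range(rng.randint(3, max_objects)):
        cls = rng.choices(KITTI_CLASSES, weights=[6, 1, 1, 3, 2, 1])[0]  # example class mix
        placements.append({
            "class": cls,
            "distance_m": rng.uniform(5.0, 70.0),   # longitudinal distance from the EgoVehicle
            "lateral_m": rng.uniform(-8.0, 8.0),    # lateral offset across lanes and sidewalks
            "yaw_deg": rng.uniform(0.0, 360.0),     # orientation variability
        })
    return placements

# Each dict would then drive an asset-placement call inside the generation script.
```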

Image 4 – Script console

The next steps are to finalize the programming of the dataset generation and launch it so that we end up with the same amount of data as we have from KITTI, in two flavors: with and without sensor simulation. Stay tuned!
