Javier Salado, Technical Product Manager, Anyverse
Over the past weeks, we've focused on getting our Faster R-CNN neural network ready to detect some of the object classes defined in the KITTI dataset. We trained the network on the real-world samples from the dataset to fine-tune hyperparameters and ran some preliminary validation tests on the validation split we carved out of the dataset.
Over the last couple of weeks, we have been setting up the Anyverse Synthetic Data Platform to generate a synthetic dataset following the KITTI specs. In the coming days, we will be ready to let the platform generate a synthetic training dataset to use with our Faster R-CNN.
Anyverse Synthetic Data Platform
First, we needed to characterize the cameras used by KITTI and reproduce their rigging on a virtual EgoVehicle in Anyverse Studio. The color cameras are:
2 × Point Grey Flea 2 color cameras (FL2-14S3C-C), 1.4 megapixels, 1/2″ Sony ICX267 CCD, global shutter, with Edmund Optics 4 mm lenses, opening angle ∼90°, vertical opening angle of the region of interest (ROI) ∼35°
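As a quick sanity check on those optics, the pinhole camera model relates sensor size and focal length to the opening angle. A minimal sketch, assuming the nominal 6.4 × 4.8 mm active area of a 1/2″ CCD (the quoted ∼90° comes from the KITTI documentation and includes real-lens effects, so it differs from the ideal pinhole figure):

```python
import math

# Assumed nominal dimensions for a 1/2" CCD format; the Sony ICX267
# datasheet values may differ slightly.
SENSOR_WIDTH_MM = 6.4
SENSOR_HEIGHT_MM = 4.8
FOCAL_LENGTH_MM = 4.0  # Edmund Optics lens, per the KITTI spec above

def field_of_view_deg(sensor_size_mm: float, focal_length_mm: float) -> float:
    """Pinhole-model field of view along one axis, in degrees."""
    return math.degrees(2.0 * math.atan(sensor_size_mm / (2.0 * focal_length_mm)))

h_fov = field_of_view_deg(SENSOR_WIDTH_MM, FOCAL_LENGTH_MM)
v_fov = field_of_view_deg(SENSOR_HEIGHT_MM, FOCAL_LENGTH_MM)
print(f"horizontal FOV = {h_fov:.1f} deg, vertical FOV = {v_fov:.1f} deg")
```

With these assumed dimensions the ideal pinhole figures come out around 77° horizontal and 62° vertical, which is in the right ballpark for a wide-angle 4 mm lens on this sensor format.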
We defined both cameras in Anyverse Studio following the placement and distances described in the KITTI documentation, and this is the result for our virtual EgoVehicle:
Image 1 – EgoVehicle and camera rig
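To give an idea of the geometry involved, here is an illustrative sketch of the stereo rig (these are not Anyverse Studio API calls; the ∼0.54 m baseline and ∼1.65 m mounting height are the nominal values from the KITTI setup documentation, while the exact extrinsics come from the per-drive calibration files):

```python
from dataclasses import dataclass

@dataclass
class CameraPose:
    """Simplified camera placement in a vehicle-centered frame (meters)."""
    name: str
    x_m: float  # forward
    y_m: float  # left
    z_m: float  # up

# Nominal KITTI rig values (assumed here; real calibration is per-drive).
BASELINE_M = 0.54
CAMERA_HEIGHT_M = 1.65

left_cam = CameraPose("color_left", x_m=0.0, y_m=+BASELINE_M / 2, z_m=CAMERA_HEIGHT_M)
right_cam = CameraPose("color_right", x_m=0.0, y_m=-BASELINE_M / 2, z_m=CAMERA_HEIGHT_M)

baseline = abs(left_cam.y_m - right_cam.y_m)
print(f"baseline = {baseline:.2f} m at height {left_cam.z_m:.2f} m")
```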
We gathered all the available information about the Sony ICX267 sensor and characterized it in Anyverse Studio. Then we fine-tuned the ISP parameters to produce images similar to the KITTI data we have.
Image 2 – Sensor and ISP configurations
Image 3 – Sensor simulation result
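Conceptually, the kind of parameters we tune, exposure gain, per-channel white balance, gamma, can be modelled as a chain of per-pixel operations on linear sensor values. A toy sketch (this is not the Anyverse ISP; the parameter values are invented for illustration):

```python
def isp_pipeline(rgb, gain=1.2, wb=(1.05, 1.0, 1.10), gamma=2.2):
    """Apply exposure gain, white balance and gamma to one linear RGB pixel.

    `rgb` holds linear values in [0, 1]; the output is gamma-encoded.
    All parameter values here are illustrative, not tuned KITTI settings.
    """
    out = []
    for value, balance in zip(rgb, wb):
        linear = min(value * gain * balance, 1.0)  # gain + white balance, clipped
        out.append(linear ** (1.0 / gamma))        # gamma encoding
    return tuple(out)

pixel = isp_pipeline((0.18, 0.18, 0.18))  # mid-grey linear input
print(pixel)
```

Tuning in Anyverse Studio amounts to adjusting this kind of parameter until the simulated output statistically matches the reference KITTI frames.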
Scene variability programming
We continued training the team on Anyverse Studio with a 4-hour workshop. Then, with a little practice, the team has been programming the variability and population logic to get images as close as possible to the real-world KITTI samples. This is still a work in progress.
Image 4 – Script console
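The idea behind the variability logic can be sketched as follows: each generated scene draws its parameters from declared distributions, so the dataset covers the same object classes and layouts as KITTI. This is a hypothetical sketch with invented names and ranges, not the actual Anyverse Studio script:

```python
import random

def sample_scene(rng: random.Random) -> dict:
    """Draw one scene's parameters; ranges are illustrative placeholders."""
    return {
        "time_of_day_h": rng.uniform(8.0, 18.0),  # daytime, like KITTI drives
        "num_cars": rng.randint(0, 15),
        "num_pedestrians": rng.randint(0, 10),
        "num_cyclists": rng.randint(0, 5),
        "ego_speed_kmh": rng.uniform(0.0, 60.0),
    }

rng = random.Random(42)  # fixed seed -> reproducible variability
scenes = [sample_scene(rng) for _ in range(100)]
```

A fixed seed makes every generation run reproducible while still giving per-scene variety.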
The next steps are to finalize the dataset-generation programming and launch it so that we end up with the same amount of data we have from KITTI, in two flavors: with and without sensor simulation. Stay tuned!
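The two-flavor launch can be sketched as rendering every scene twice, once with full sensor/ISP simulation and once with the plain render, so both sets stay the same size. The function and path names below are invented for illustration, and 7,481 is the size of the public KITTI object-detection training set, used here only as a placeholder target:

```python
KITTI_TRAIN_SIZE = 7481  # images in the KITTI object-detection training set

def generate_dataset(sensor_simulation: bool) -> list:
    """Return the (hypothetical) output paths for one dataset flavor."""
    flavor = "sensor" if sensor_simulation else "plain"
    return [f"{flavor}/frame_{i:06d}.png" for i in range(KITTI_TRAIN_SIZE)]

with_sensor = generate_dataset(sensor_simulation=True)
without_sensor = generate_dataset(sensor_simulation=False)
```

Keeping both flavors the same size lets us later compare training runs where sensor simulation is the only variable.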