A breakthrough synthetic data solution for advanced perception.
Get the right data for your perception model: sensor-specific and complete with ground truth.
Get in touch and share your sensor specs, scene requirements, and data needs, and we will generate custom synthetic datasets for you. Here is the workflow we apply:
01. Sensor setup
Sensors are crucial for perception training and testing, so this is where we start. We collect your requirements and build the exact sensor model(s) you need: we define camera parameters, choose a specific lens, or add LiDAR. A configuration sketch follows the list below.
+ Lens type, FOV, color filter, response curves, sensor size
+ Raw sensor data
+ Image processing functions
+ LiDAR settings
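To make this step concrete, here is a minimal Python sketch of what a sensor specification could look like. All class and field names are illustrative assumptions, not our actual API; the defaults are placeholder values.

```python
from dataclasses import dataclass, field

@dataclass
class CameraModel:
    """Hypothetical camera description; every field name is illustrative."""
    sensor_size_mm: tuple = (7.2, 5.4)     # physical sensor dimensions
    resolution: tuple = (1920, 1208)       # output image size in pixels
    fov_deg: float = 120.0                 # horizontal field of view
    lens: str = "fisheye"                  # e.g. "rectilinear" or "fisheye"
    color_filter: str = "RCCB"             # color filter array pattern
    response_curve: str = "linear"         # tone/response curve preset
    output_raw: bool = True                # also emit raw sensor data
    isp_functions: list = field(default_factory=lambda: ["demosaic", "denoise"])

@dataclass
class LidarModel:
    """Hypothetical LiDAR description."""
    channels: int = 64                     # number of vertical beams
    fov_vertical_deg: float = 26.8
    rotation_hz: float = 10.0              # spin rate
    max_range_m: float = 120.0

# Example: a narrow-FOV front camera plus a high-resolution LiDAR.
front_cam = CameraModel(fov_deg=60.0, lens="rectilinear")
roof_lidar = LidarModel(channels=128)
```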
02. Ego vehicle setup
We define your ego car or any other ego vehicle, such as a UAV or a robot. Then we add, position, and rotate all the sensors created in the previous step; there is no limit to the number of cameras or LiDARs. See the sketch after this list.
+ Ego vehicle set-up
+ Add and position cameras/LiDAR
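A minimal sketch of the ego vehicle setup, assuming a right-handed vehicle frame (x forward, y left, z up); the types and the coordinate convention are hypothetical, chosen only to illustrate how sensor mounts could be expressed.

```python
from dataclasses import dataclass, field

@dataclass
class SensorMount:
    """Pose of one sensor in the ego vehicle's frame (names are illustrative)."""
    sensor_id: str
    position_m: tuple      # (x, y, z) offset from the vehicle origin, in meters
    rotation_deg: tuple    # (roll, pitch, yaw) of the sensor

@dataclass
class EgoVehicle:
    kind: str                               # "car", "uav", "robot", ...
    mounts: list = field(default_factory=list)

ego = EgoVehicle(kind="car")
ego.mounts.append(SensorMount("front_cam", (1.8, 0.0, 1.3), (0.0, 0.0, 0.0)))
ego.mounts.append(SensorMount("roof_lidar", (0.0, 0.0, 1.9), (0.0, 0.0, 0.0)))
# Any number of additional cameras or LiDARs can be mounted the same way.
```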
03. Scenario setup
Once your vehicle and its sensors are all set, it is time to take care of the scenario. We start by defining the scene features, then add all the remaining objects, including the ego vehicle itself, to complete the scene. A sketch follows the list below.
+ Scene model and assets
+ Dynamic assets: traffic and pedestrians
+ Ego-vehicle behavior
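Conceptually, a scenario bundles the scene model, the dynamic assets, and the ego vehicle's behavior. The following sketch shows one way this could be written down; the map name, asset types, and behavior labels are hypothetical examples, not a fixed vocabulary.

```python
# A minimal scenario sketch (all names are illustrative): a scene model,
# dynamic traffic/pedestrian assets, and the ego vehicle's behavior.
scenario = {
    "scene": {
        "map": "urban_4way_intersection",
        "assets": ["buildings", "traffic_signs", "street_furniture"],
    },
    "dynamic_assets": [
        {"type": "car", "count": 12, "behavior": "lane_follow"},
        {"type": "pedestrian", "count": 8, "behavior": "sidewalk_random"},
    ],
    "ego": {"route": "straight_through", "speed_mps": 8.0},
}
```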
04. Variability setup
Next, we define variability ranges for parameters such as object materials and textures, and weather and lighting conditions. This coverage is ideal for both everyday situations and challenging corner cases. A sampling sketch follows the list below.
+ Weather and lighting conditions
+ Object materials and textures
+ Object positions and dynamic parameters
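One simple way to think about variability ranges: each parameter gets a range, and every generation cycle samples one concrete value per parameter. The sketch below assumes uniform sampling and invented parameter names purely for illustration.

```python
import random

# Illustrative variability ranges; each generation cycle draws one value
# per parameter to produce a distinct scene variation.
ranges = {
    "sun_elevation_deg": (5.0, 70.0),   # lighting: low sun covers glare corner cases
    "rain_intensity":    (0.0, 1.0),    # weather: dry to heavy rain
    "road_wetness":      (0.0, 1.0),    # material/texture modifier
    "lead_car_gap_m":    (4.0, 40.0),   # dynamic parameter: distance to lead vehicle
}

def sample_variation(rng: random.Random) -> dict:
    """Draw one concrete scene variation from the configured ranges."""
    return {name: rng.uniform(lo, hi) for name, (lo, hi) in ranges.items()}

# Seeded for reproducibility: the same seed yields the same variation.
variation = sample_variation(random.Random(42))
```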
05. Data generation
Finally, we proceed to the dataset setup and decide on the number of variation cycles, the ground-truth data, and the channel outputs (instance, material, reflectance, roughness, depth). A configuration sketch follows the list below.
+ Batch variability
+ Ground-truth data: bounding boxes, positions
+ Pixel-accurate data channels
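A last sketch of what the dataset setup could look like: how many variation cycles to run, which ground-truth annotations to export, and which pixel-accurate channels to render per frame. The key names and file layout are assumptions made for this example only.

```python
# Sketch of a dataset-generation config (names are illustrative).
dataset_config = {
    "variation_cycles": 500,    # batch variability: scenes rendered per scenario
    "ground_truth": ["2d_bbox", "3d_bbox", "object_positions"],
    "channels": ["instance", "material", "reflectance", "roughness", "depth"],
}

def output_paths(frame_idx: int, config: dict) -> list:
    """One file per requested pixel-accurate channel for a given frame."""
    return [f"frame_{frame_idx:06d}_{ch}.png" for ch in config["channels"]]

# e.g. output_paths(0, dataset_config)
# -> ['frame_000000_instance.png', ..., 'frame_000000_depth.png']
```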
Any synthetic data questions? Drop us a message!