How to Get Customizable Synthetic Data for Advanced Perception

A breakthrough synthetic data solution for advanced perception.

Get the right data for your perception model – sensor-specific outputs and ground-truth annotations.

Our Workflow

Get in touch and share your sensor specs, scene requirements, and data needs, and we will generate custom synthetic datasets for you. Here is the workflow we apply:

01. Sensors

Sensors are crucial for perception training and testing, so this is where we start. We collect your requirements and build the exact sensor model(s) you need: defining camera parameters, choosing a specific lens, or adding LiDAR.

+ Lens type, FOV, color filter, response curves, sensor size
+ Raw sensor data
+ Image processing functions
+ LiDAR settings
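The sensor parameters listed above could be captured in a simple spec like the sketch below. All field names here are illustrative assumptions for this article, not ANYVERSE's actual API.

```python
from dataclasses import dataclass, field

# Hypothetical sensor spec; field names are illustrative only.
@dataclass
class CameraSpec:
    lens_type: str = "fisheye"        # lens model to simulate
    fov_deg: float = 120.0            # horizontal field of view
    color_filter: str = "RGGB"        # color filter array pattern
    bit_depth: int = 12               # raw sensor data bit depth
    resolution: tuple = (1920, 1080)  # sensor size in pixels

@dataclass
class LidarSpec:
    channels: int = 64                # number of laser channels
    rotation_hz: float = 10.0         # spin rate
    max_range_m: float = 120.0        # maximum measured range

@dataclass
class SensorRig:
    cameras: list = field(default_factory=list)
    lidars: list = field(default_factory=list)

rig = SensorRig(cameras=[CameraSpec()], lidars=[LidarSpec()])
print(rig.cameras[0].fov_deg)  # 120.0
```

A spec of this shape is what a customer would hand over in step 01; the remaining steps build on top of it.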

02. Ego-vehicle

We define your ego car or any other ego vehicle, such as a UAV or a robot. Then we add, position, and rotate all the sensors created previously. There is no limit to the number of cameras or LiDAR units.

+ Ego vehicle set-up
+ Add and position cameras/LiDAR
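Mounting sensors on the ego vehicle amounts to attaching each one with a position and rotation, with no cap on how many are added. A minimal sketch, using hypothetical names rather than any real ANYVERSE interface:

```python
# Illustrative ego-vehicle rig: each sensor gets its own pose.
ego = {"type": "car", "sensors": []}

def mount(ego, name, kind, position, rotation):
    """Attach a sensor to the ego vehicle at a given pose."""
    ego["sensors"].append({
        "name": name,
        "kind": kind,
        "position_m": position,    # x, y, z relative to the vehicle origin
        "rotation_deg": rotation,  # roll, pitch, yaw
    })

mount(ego, "front_cam", "camera", (1.8, 0.0, 1.3), (0.0, 0.0, 0.0))
mount(ego, "roof_lidar", "lidar", (0.0, 0.0, 1.9), (0.0, 0.0, 0.0))
print(len(ego["sensors"]))  # 2
```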


03. Scenario

Once your vehicle and its sensors are all set, it’s time to build the scenario. We start by setting the scene features, then add all the additional objects that complete the scene, including the ego-vehicle.

+ Scene model and assets
+ Dynamic assets – traffic and pedestrians
+ Ego-vehicle behavior

04. Variability

Next, we define variability ranges: object materials and textures, weather and lighting conditions, and other parameters for greater dataset diversity. This is ideal for covering everyday corner cases and challenging situations.

+ Weather and lighting conditions
+ Object materials and textures
+ Object positions and dynamic parameters
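One way to think about variability ranges: each parameter gets a min/max interval, and every scene variation samples a value from each interval. The parameter names below are invented for illustration:

```python
import random

# Hypothetical variability ranges; each variation samples one value
# per parameter from its (min, max) interval.
ranges = {
    "sun_elevation_deg": (5.0, 85.0),
    "rain_intensity": (0.0, 1.0),
    "asphalt_roughness": (0.2, 0.9),
}

def sample_variation(ranges, rng):
    """Draw one value per parameter, uniformly within its range."""
    return {k: rng.uniform(lo, hi) for k, (lo, hi) in ranges.items()}

rng = random.Random(42)   # seeded for reproducible datasets
variation = sample_variation(ranges, rng)
```

Widening a range (say, pushing `rain_intensity` toward 1.0) is what pulls rare corner cases into the dataset.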

05. Data generation

Finally, we set up the dataset and decide on the number of variation cycles, the ground-truth data, and the channel outputs (instance, material, reflectance, roughness, depth).

+ Batch variability
+ Ground truth data, bounding boxes, positions
+ Pixel-accurate data channels
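The generation step can be sketched as a plan of N variation cycles, each rendering the requested ground-truth channels. The plan structure is an assumption for illustration; the channel names follow the list above:

```python
# Channels requested for each rendered frame (from the step above).
channels = ["rgb", "instance", "material", "reflectance", "roughness", "depth"]

def build_plan(n_cycles, channels):
    """Hypothetical generation plan: one entry per variation cycle."""
    return [{"cycle": i, "outputs": list(channels)} for i in range(n_cycles)]

plan = build_plan(3, channels)
print(len(plan), len(plan[0]["outputs"]))  # 3 6
```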


Key benefits

Sensor Data Model

Custom-defined sensors, optical and LiDAR, that fit your exact perception model. Get unprocessed, high-bit-depth raw data in addition to your RGB imagery.

Ground-truth Data

Automatically generated, with no margin for error. Choose from a range of pixel-accurate channels, in addition to class labels, bounding boxes, and more.

Variability Under Control

Set the ranges of variability for anything: light, weather, objects, textures, positions, behaviors, and more, covering possible everyday corner cases.

Any synthetic data questions, drop us a message!


Let's talk synthetic data!

Design your dataset

Let us take you through the steps required for a customizable ANYVERSE dataset.
We will provide you with some samples afterwards, or we will get in touch to better understand your data needs.


Let’s start with the practical use of your dataset!


Where does the action take place? Pick a scenario!


Would you like to add camera(s) and/or LiDAR to your ego vehicle?


Do you require a specific camera lens?


What resolution does your model require?


Finally, tick all the additional metadata channels you’d like to have!

