Get the ANYVERSE Evaluation Dataset for Advanced Perception Training

High-fidelity synthetic data makes all the difference in autonomous vehicle & ADAS perception training.
Download our Evaluation Dataset for non-commercial use to see how ANYVERSE fits in your pipeline! 
Please see the research license terms, and contact us if you require a commercial license.

ACRS Dataset

ANYVERSE City Random Scenes (ACRS) is a synthetic dataset generated with ANYVERSE for research purposes only.
The ACRS Dataset contains a total of 3.4K HD synthetic images, organized as described below. The images were generated from a single urban scene that replicates reality. Additionally, objects of interest (pedestrians and vehicles) have been placed randomly for greater variability.
ACRS provides pixel-accurate semantic, instance-specific, 2D bounding-box, and depth annotations for a set of objects of interest in the context of autonomous driving. Overall, the dataset is a comprehensive compilation of data for evaluating synthetic data in AD/ADAS training.


The 3D scene that recreates the city scenario shown in the images has been procedurally generated. Using a rule-based engine, ANYVERSE can generate plausible driving scenarios of any type, such as urban, suburban, rural, and highway.


ANYVERSE’s unbiased, high-dynamic-range spectral render engine accurately reproduces visual conditions found in real life. As a result, the images produced are highly realistic and physically correct. Physically based models for light emitters, atmospheric/weather conditions, and materials help recreate the most challenging environmental situations and achieve an accurate optical simulation of objects and surfaces.


Annotations are provided for the following 9 classes: bicycle, bus, car, cyclist, motorcycle, pedestrian, rider, truck, and van.
Color images available as PNG (RGB, 8 bits per channel).

Instance images available as PNG. Distinct objects belonging to the same class have different pixel values (only for the considered classes).
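Since each object in the instance channel is marked by a distinct pixel value, per-object binary masks can be recovered directly. The sketch below is a hypothetical illustration, assuming the instance PNG has been decoded into a 2D integer NumPy array and that a pixel value of 0 denotes background (neither assumption is specified by the dataset documentation).

```python
import numpy as np

def instance_masks(instance_img: np.ndarray) -> dict:
    """Split an instance image into per-object binary masks.

    Assumes `instance_img` is a 2D integer array decoded from the
    instance PNG, where each distinct pixel value marks one object
    and 0 is assumed to be background.
    """
    masks = {}
    for value in np.unique(instance_img):
        if value == 0:  # assumed background value
            continue
        masks[int(value)] = instance_img == value
    return masks
```

Each returned mask can then be used, for example, to derive a tight 2D bounding box or to count the pixels covered by a single pedestrian or vehicle.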

Depth images available as EXR. The information provided is similar to that of a “Time of Flight” depth-sensing device: a pulse is sent out and the sensor detects the pulse’s reflection off the objects to calculate the distance.
The default depth range is [0, 200] meters, and every depth value is inverse-normalized to [1, 0] (0 m maps to 1, 200 m maps to 0). Depth values larger than 200 meters are set to a default value of “inf”. The depth value is stored as a single channel in an EXR file, encoded as a 32-bit float.
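The inverse normalization can be undone with a one-line linear mapping. The sketch below is a hypothetical decoding step, assuming the single-channel 32-bit float EXR has already been loaded into a NumPy array (e.g. via OpenEXR or imageio; the loader itself is not shown).

```python
import numpy as np

def decode_depth(inv_norm: np.ndarray, max_range: float = 200.0) -> np.ndarray:
    """Convert ACRS inverse-normalized depth values back to meters.

    A stored value of 1.0 maps to 0 m, 0.0 maps to `max_range` (200 m
    by default); stored "inf" pixels (beyond-range) stay infinite.
    """
    depth_m = (1.0 - inv_norm) * max_range  # linear inverse mapping
    depth_m[np.isinf(inv_norm)] = np.inf    # keep beyond-range pixels as inf
    return depth_m
```

For example, a stored value of 0.5 corresponds to a distance of 100 meters under the default range.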

Semantic/class images available as PNG.

Sample Images


Color + 2D Bounding Boxes

Instance Channel


Depth Channel


Semantic Channel

