The Solution

Anyverse is the data for your AI: breakthrough solutions that help you get ahead of your competition.

High-fidelity synthetic datasets can improve the efficiency and robustness of your training/testing pipeline at a lower cost than real-world data collection. Typical use cases include:

 Improving classification and detection of traffic lights in all plausible and non-standard conditions (e.g. ‘yellow-light’ situations, light malfunctions, bad visibility). The same applies to traffic signs, road lines, etc.
 Detecting pedestrian intentions and the behavior of other vehicles or bicycles, and testing corner cases such as animals, vehicle doors opening, or balls rolling into the street.
 Simulating all kinds of atmospheric and visibility conditions (e.g. fog, strong water reflections, sun caustics, traffic lights reflected in mirrors).
 Simulating all kinds of dangerous situations such as obstacles, barriers, adversarial elements (e.g. faked traffic signs), and damaged infrastructure.

The workflow


Anyverse operates on a data-as-a-service model. Our team works with your machine learning, perception, or simulation experts to configure the Anyverse solution to your specific needs, whether you are building an autonomous or driver-assist system.

Massive datasets

We produce high-fidelity synthetic datasets in batches of sequences ranging from thousands to hundreds of thousands of images. New sequences are created whenever new feature or scene requirements arise, or when variations are needed.

Limitless Variations

For every image produced, unlimited variations of physically accurate lighting conditions can be generated at minimal cost, without recalculating the scene. Such variations include daylight/night transitions, per-light intensity changes, and lens artifacts.
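As an illustration, the variation space for a single base image can be enumerated as a parameter grid. The axis names and values below are assumptions for the sketch, not Anyverse's actual parameters:

```python
from itertools import product

# Hypothetical variation axes; the real parameter set is Anyverse-specific.
times_of_day = ["dawn", "noon", "dusk", "night"]
light_intensities = [0.5, 1.0, 1.5]          # relative multipliers
lens_artifacts = ["none", "flare", "vignetting"]

# Every base image can be re-rendered under each combination
# without recomputing the scene geometry.
variations = [
    {"time_of_day": t, "intensity": i, "artifact": a}
    for t, i, a in product(times_of_day, light_intensities, lens_artifacts)
]

print(len(variations))  # 4 * 3 * 3 = 36 variations per base image
```

The grid grows multiplicatively with each axis, which is why rendering variations from one base scene is far cheaper than capturing each condition in the real world.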

Your Specifications

Anyverse is configured to meet your specifications in terms of scenario types, scene features, weather and lighting variations, dynamic conditions, sensor specs, and output formats.

Other features

Dataset Structure

Datasets are composed of sequences of color images captured by single or multiple ego-car sensors, plus annotation files and metadata channels corresponding to each image. 


Camera Configuration

Set all camera parameters to your exact needs. Modify the camera and vehicle position and define multiple cameras. Choose the lens type, such as fish-eye, pinhole, 360-degree, or thin-lens models. Lens scattering and noise effects can also be applied.
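As a sketch of what such a multi-camera configuration might look like in code (field names, defaults, and coordinate conventions are illustrative assumptions, not Anyverse's actual schema):

```python
from dataclasses import dataclass

# Hypothetical camera configuration; this is not Anyverse's real API.
@dataclass
class CameraConfig:
    name: str
    lens_model: str                      # e.g. "fisheye", "pinhole", "360", "thin"
    position_m: tuple                    # (x, y, z) relative to the ego-car
    resolution: tuple = (1920, 1080)
    scattering: bool = False             # apply lens scattering effects
    noise: bool = False                  # apply sensor noise

# Define multiple ego-car cameras with different lens types.
rig = [
    CameraConfig("front", "pinhole", (1.8, 0.0, 1.4)),
    CameraConfig("surround", "fisheye", (0.0, 0.0, 1.6), scattering=True),
]
```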

Channels / 3D information

The datasets come with automatically generated metadata channels for each image, including 3D information such as depth, object ID, material ID, instance ID, 3D motion vectors, surface normals, 3D positions, and radiance.
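A minimal sketch of how such per-image channels might be laid out on disk; the directory structure and the `.exr` extension are assumptions, not the actual delivery format:

```python
# Channel names follow the list above; the on-disk layout is hypothetical.
CHANNELS = [
    "depth", "object_id", "material_id", "instance_id",
    "motion_vectors", "surface_normals", "positions_3d", "radiance",
]

def channel_paths(frame_id: int, root: str = "dataset") -> dict:
    """Map each metadata channel to the file that would store it."""
    return {c: f"{root}/frame_{frame_id:06d}/{c}.exr" for c in CHANNELS}

paths = channel_paths(42)
# paths["depth"] -> "dataset/frame_000042/depth.exr"
```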

Image Segmentation

Anyverse produces pixel-level segmentation for every image, including semantic, instance, and rectangle segmentation. More than 40 semantic classes are defined.
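To illustrate what pixel-level semantic and rectangle segmentation mean in practice, here is a toy example over a hand-made mask. The class IDs and their meanings are assumed for the sketch, not Anyverse's actual 40+ class mapping:

```python
from collections import Counter

# Toy semantic mask of class IDs (assumed: 0 = road, 2 = pedestrian, 7 = vehicle).
mask = [
    [0, 0, 7, 7],
    [0, 2, 7, 7],
    [0, 2, 2, 0],
]

# Per-class pixel counts, e.g. for class-balance statistics.
counts = Counter(pix for row in mask for pix in row)

def bbox(mask, cls):
    """Tight bounding rectangle (x0, y0, x1, y1) for one class,
    analogous to rectangle segmentation derived from pixel labels."""
    xs = [x for row in mask for x, p in enumerate(row) if p == cls]
    ys = [y for y, row in enumerate(mask) for p in row if p == cls]
    return (min(xs), min(ys), max(xs), max(ys))
```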

Annotation Files

Annotated data is available in JSON, XML, and Google Protocol Buffers formats. Associated metadata such as time of day, weather conditions, and camera parameters is also included.
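A hypothetical JSON annotation record in this spirit; the keys and values are illustrative, not the actual Anyverse schema:

```python
import json

# Assumed record structure combining per-object annotations with scene metadata.
record = json.loads("""
{
  "image": "frame_000042.png",
  "time_of_day": "dusk",
  "weather": "fog",
  "camera": {"lens": "pinhole", "fov_deg": 90},
  "objects": [
    {"class": "traffic_light", "state": "yellow", "bbox": [512, 120, 540, 180]},
    {"class": "pedestrian", "bbox": [300, 200, 360, 400]}
  ]
}
""")

# Example query: find all traffic lights in the frame.
lights = [o for o in record["objects"] if o["class"] == "traffic_light"]
```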

Video Sequences

Dynamic video sequences can be created from predefined ego-car paths and time-lapse settings.
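As a rough sketch, sampling frames along a predefined ego-car path at a fixed time interval might look like this; the function and parameter names are assumptions, and a real path would carry full poses rather than a scalar distance:

```python
def sample_frames(path_length_m: float, speed_mps: float, interval_s: float):
    """Yield (timestamp_s, distance_m) pairs along the ego-car path,
    one per frame of the video sequence."""
    total_time = path_length_m / speed_mps
    t = 0.0
    while t <= total_time:
        yield (t, t * speed_mps)
        t += interval_s

# A 100 m path driven at 10 m/s, sampled once per second.
frames = list(sample_frames(100.0, 10.0, 1.0))  # 11 (timestamp, distance) pairs
```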