ANYVERSE


A flexible and accurate synthetic data generation platform

Craft the data you need for your perception system in minutes. Design scenarios for your use case with endless variations. Generate your datasets in the cloud.

The Anyverse™ Platform

Anyverse brings you a scalable synthetic data software platform to design, train, validate, or test your perception system’s AI. It provides unparalleled computing power in the cloud to generate all the data you need in a fraction of the time and cost of classic real-world data.

Hyperspectral (pixel-accurate) render engine

Accurate sensor simulation

Procedural (API-based) scene generation

Graphical interface for dataset development

Built-in assets library

Scalable (API-based) cloud data production engine

A flexible modular platform

Anyverse offers a modular platform for scene generation, rendering, and sensor simulation, letting you decide which modules fit your workflow best. You can use all the modules or connect individual parts of Anyverse™ to your own data pipeline. Anyverse™ Studio is the graphical application that lets you visually develop your scenes and datasets.
MODULE 01

Anyverse™ Studio

Anyverse™ Studio is the user interface for the Anyverse platform. Graphically design the base for your datasets: whether you need static annotated images or sequences, use Anyverse’s extensive asset library to compose a scene. Apply dynamic behaviors and program the environmental variability you need with Python scripts. Produce your datasets in the cloud and explore the results, including all the associated ground-truth data, in Anyverse™ Studio.

camera definition

GUI scene design

sensor & ISP definition

2D & 3D viewports

3D assets management

dynamic behavior

weather conditions

illumination conditions

Python scripting API

endless variability

script-based dataset generation
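
To make the scripting workflow concrete, here is a minimal sketch of a variability script. The workspace object and every method called on it are hypothetical placeholders, not Anyverse's documented Python API; what the sketch shows is the pattern itself: randomize conditions, then queue a sample.

```python
# Hypothetical sketch: `workspace` and all of its methods are invented
# placeholders for illustration; they are NOT Anyverse's actual API.
import random

def generate_variations(workspace, num_samples=100):
    for i in range(num_samples):
        # Randomize the environment for this sample.
        workspace.set_time_of_day(random.uniform(5.0, 22.0))         # hours
        workspace.set_weather(random.choice(["clear", "rain", "fog"]))

        # Reposition dynamic actors, e.g. pedestrians along their paths.
        for actor in workspace.actors(tag="pedestrian"):
            actor.offset_along_path(random.uniform(0.0, 30.0))       # meters

        # Queue one annotated sample for cloud rendering.
        workspace.queue_sample(name=f"sample_{i:04d}")
```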

MODULE 02

Render

The Anyverse™ hyperspectral render engine implements pure spectral ray tracing: it computes the spectral radiance of every light beam as it interacts with the materials in the scene, simulating lights and materials with close-to-physics accuracy.

Features

custom lens model

hyperspectral (256-band sampling)

motion blur

global & rolling shutter

complex environments (sky and water)

high bit-depth image output

raw sensor data

photometric accuracy
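
A rough illustration of the spectral-to-color step: the sketch below integrates a multi-band spectral image against the CIE 1931 color-matching functions to obtain per-pixel XYZ values, the same conversion the ground-truth section below refers to. The array shapes, the CSV file, and the band layout are assumptions for illustration, not the render engine's actual output format.

```python
# Sketch: integrate per-pixel spectra (e.g. 256 bands) against the CIE 1931
# 2-degree color-matching functions. The data file and shapes are assumed.
import numpy as np

# Assumed columns: wavelength_nm, xbar, ybar, zbar, resampled to the render's bands.
cmf_table = np.loadtxt("cie_1931_cmf.csv", delimiter=",")
wavelengths, cmf = cmf_table[:, 0], cmf_table[:, 1:4]

def spectrum_to_xyz(radiance):
    """radiance: (H, W, B) spectral image with B bands matching wavelengths."""
    # Integrate radiance against each color-matching function over wavelength.
    X = np.trapz(radiance * cmf[:, 0], wavelengths, axis=-1)
    Y = np.trapz(radiance * cmf[:, 1], wavelengths, axis=-1)
    Z = np.trapz(radiance * cmf[:, 2], wavelengths, axis=-1)
    return np.stack([X, Y, Z], axis=-1)                # (H, W, 3) XYZ image
```

From the XYZ image, a standard XYZ-to-sRGB matrix (or, per the sensor module below, a full sensor and ISP simulation) yields a displayable color image.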

Ground-truth channels

Anyverse’s datasets comprise several outputs: color images, raw sensor data, JSON metadata files, and the ground-truth channels described below.


JSON metadata: a JSON file with all the meta information: camera and object positions, 2D and 3D bounding boxes, characters’ poses, environment information such as time of day and weather conditions, and more.

Color image: a 16- or 32-bit color image generated from the render and the sensor and ISP simulation. Typically used to feed your model-training pipeline.

Semantic segmentation: an image in which every pixel of every object class in the scene has a specific, unique color according to Anyverse’s ontology. Used as ground truth to teach models which pixels belong to which object.

Instance segmentation: produced only for objects of interest. Pixels of different instances of the same object class have different colors, helping the AI tell instances of the same class apart during training.

Materials segmentation: every material has a different color at the pixel level. This channel is useful for use cases that depend on the materials of the objects of interest.

3D position: every pixel has three 32-bit channels holding the x, y, and z coordinates of that pixel in the world reference system. Useful for spatial reference.

XYZ image: color in the Anyverse rendering system is encoded as spectra rather than RGB triplets; the spectral information is converted to an XYZ image using the CIE 1931 system.

Albedo: the color image without the contribution of lights (this channel is sometimes called albedo). Useful as a reference when the color image is not created.

Roughness: the material roughness of every pixel as a value between 0 and 1. A material with roughness 1 produces white pixels; a material with roughness 0 produces black pixels.

Motion vectors: the velocity vector of every pixel, encoded in the world coordinate system. Values are non-zero for objects that are moving when the sample is taken. Useful for dynamic use cases.

Raw image: the raw image coming out of the sensor simulation, without any ISP applied. Useful if you have your own ISP simulation to generate the final color image.

Depth: a value between 0 and 1 for every pixel representing its distance to the camera; these values can easily be converted to meters. Useful for training AI models that estimate object distances.

Geometry normals: for every pixel, the normal vector of the surface (geometry) the pixel belongs to, in the world reference system.

Texture normals: for every pixel, the normal vector of the surface the pixel belongs to, including the texture effect, in the world reference system.
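
To show how two of these channels might be consumed together, here is a short sketch that reads the per-sample JSON metadata and the depth channel, then estimates an object's distance. The file names, JSON keys, and the depth normalization range are all assumptions for illustration; Anyverse's actual output format is not specified here.

```python
# Sketch: combine the JSON metadata with the depth channel. All file names,
# keys, and the 300 m normalization range below are assumptions.
import json
import numpy as np

with open("sample_0000.json") as f:
    metadata = json.load(f)

depth_norm = np.load("sample_0000_depth.npy")      # per-pixel values in [0, 1]
MAX_RANGE_M = 300.0                                # assumed far-plane distance
depth_m = depth_norm * MAX_RANGE_M                 # rescale to meters

# Median distance inside the first object's 2D bounding box (assumed keys).
x0, y0, x1, y1 = metadata["objects"][0]["bbox_2d"]
print("median object distance:", np.median(depth_m[y0:y1, x0:x1]), "m")
```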

MODULE 03

Sensor

Simulate your camera sensor accurately. With the spectral information provided by the render, Anyverse simulates the physics happening at the sensor through its sensor simulation pipeline, which models:

pixel size

exposure time

fill factor

well capacity

noise

QE curves

gamma

analog offset/gain

infrared filter

low-pass filter

color filter array

conversion gain

white balance

RGB-to-XYZ matrix

XYZ-to-device matrix

pixel vignetting
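
To ground these parameters, here is a textbook-style sketch of the stages such a pipeline chains together: photon arrival with shot noise, quantum efficiency, full-well clipping, read noise, conversion gain, and ADC quantization. The structure and every number are illustrative assumptions, not Anyverse's actual sensor simulation.

```python
# Sketch of a generic sensor stage chain; parameter values are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def simulate_pixel_response(photons_per_s, exposure_s=0.01, qe=0.7,
                            well_capacity_e=15000, read_noise_e=2.0,
                            gain_e_per_dn=4.0, bit_depth=12):
    # Photon shot noise and quantum efficiency: photons -> photoelectrons.
    electrons = rng.poisson(photons_per_s * exposure_s * qe)
    # Full-well capacity clips highlights.
    electrons = np.minimum(electrons, well_capacity_e)
    # Additive read noise from the readout electronics.
    electrons = electrons + rng.normal(0.0, read_noise_e, electrons.shape)
    # Conversion gain and ADC quantization to digital numbers (DN).
    dn = np.round(electrons / gain_e_per_dn)
    return np.clip(dn, 0, 2**bit_depth - 1)

raw = simulate_pixel_response(np.full((480, 640), 1.2e6))  # flat test exposure
```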

Start generating hyperspectral synthetic data for every stage of your advanced perception system development cycle with Anyverse™.

Let's talk about synthetic data!