Synthetic data generation platform
User interface for the Anyverse Platform. Graphically design the base for your datasets. Whether you need static annotated images or sequences, use Anyverse’s extensive asset library to compose a scene. Apply dynamic behaviors and program the environmental variability you need with Python scripts. Produce your datasets in the cloud and explore the results in Anyverse™ Studio, including all the associated ground-truth data.
The Anyverse™ hyperspectral render engine is a pure spectral ray tracer that computes the spectral radiance of every light beam interacting with the materials in the scene, simulating lights and materials at a near-physical level.
(256-band spectral sampling)
Anyverse’s datasets include several types of information: color images, raw sensor data, JSON metadata files, and ground-truth channels.
A JSON file with all the meta information: camera and object positions, 2D and 3D bounding boxes, characters’ poses, environment information such as time of day and weather conditions, and more.
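As a sketch of how such a metadata file might be consumed, the snippet below parses a small JSON document and collects the 2D bounding boxes per object class. The field names (`environment`, `objects`, `bbox_2d`, etc.) are hypothetical; the actual schema depends on the Anyverse dataset version.

```python
import json

# Hypothetical snippet of Anyverse-style metadata; the real field
# names and layout depend on the dataset version.
metadata_text = """
{
  "environment": {"time_of_day": "12:30", "weather": "clear"},
  "objects": [
    {"class": "pedestrian", "bbox_2d": [410, 220, 455, 330]},
    {"class": "car", "bbox_2d": [120, 250, 340, 380]}
  ]
}
"""

metadata = json.loads(metadata_text)

# Collect the 2D bounding boxes per class, e.g. to feed a detector.
boxes = {obj["class"]: obj["bbox_2d"] for obj in metadata["objects"]}
print(boxes["pedestrian"])  # [410, 220, 455, 330]
```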
A 16- or 32-bit color image generated from the render and the sensor and ISP simulation. Typically used to feed your model-training pipeline.
An image in which every pixel of every object class in the scene has a specific, unique color according to Anyverse’s ontology. Used as ground truth to help models learn which pixels belong to which object.
Only for objects of interest. The pixels of different instances of the same object class have different colors, helping the model distinguish instances of the same class during training.
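Both segmentation channels encode identity as a per-pixel color, so a typical first step is turning the color-coded image into one binary mask per unique color. The snippet below shows that decoding on a made-up 2×3 image; the actual colors come from Anyverse’s ontology and ground-truth files.

```python
import numpy as np

# Toy 2x3 "instance segmentation" image: each RGB color identifies one
# instance. These color values are made up for illustration.
seg = np.array([
    [[255, 0, 0], [255, 0, 0], [0, 0, 255]],
    [[255, 0, 0], [0, 255, 0], [0, 0, 255]],
], dtype=np.uint8)

def masks_by_color(seg_image):
    """Return a {color: boolean mask} dict, one entry per unique color."""
    flat = seg_image.reshape(-1, 3)
    colors = np.unique(flat, axis=0)
    return {tuple(c): np.all(seg_image == c, axis=-1) for c in colors}

masks = masks_by_color(seg)
print(masks[(255, 0, 0)].sum())  # 3 pixels belong to the "red" instance
```

The same decoding works for the semantic, instance, and material channels alike, since they differ only in what each color means.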
Every material has a different color at the pixel level. This channel is useful for use cases that depend on the materials of the objects of interest.
Every pixel in this image has three 32-bit channels, each holding one of the x, y, z coordinates of that pixel in the world reference system. It is useful for spatial reference.
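One straightforward use of this channel is computing each pixel’s Euclidean distance to the camera from the world-space positions. The position and camera values below are made up for illustration; both are assumed to be in the same world reference system.

```python
import numpy as np

# Toy 2x2 position image: three 32-bit channels holding world-space
# x, y, z per pixel. Values are made up for illustration.
positions = np.array([
    [[0.0, 0.0, 5.0], [3.0, 4.0, 0.0]],
    [[0.0, 0.0, 0.0], [1.0, 2.0, 2.0]],
], dtype=np.float32)
camera = np.array([0.0, 0.0, 0.0], dtype=np.float32)  # camera position in world coordinates

# Euclidean distance of every pixel to the camera, in scene units.
dist = np.linalg.norm(positions - camera, axis=-1)
print(dist)  # [[5. 5.] [0. 3.]]
```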
Contains the XYZ image. Color in the Anyverse rendering system is encoded as spectra rather than RGB triplets; the spectral information is converted to XYZ images using the CIE 1931 system.
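How Anyverse itself maps XYZ to display colors is not specified here, but one standard option is the CIE XYZ to linear sRGB matrix (IEC 61966-2-1, D65 white point), sketched below.

```python
import numpy as np

# Standard CIE XYZ -> linear sRGB matrix (D65 white point).
XYZ_TO_SRGB = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

def xyz_to_linear_srgb(xyz_image):
    """Convert an (H, W, 3) XYZ image to linear sRGB, clipped to [0, 1]."""
    rgb = xyz_image @ XYZ_TO_SRGB.T
    return np.clip(rgb, 0.0, 1.0)

# Sanity check: the D65 white point (X=0.9505, Y=1.0, Z=1.0890)
# should map to approximately white.
white = np.array([[[0.9505, 1.0, 1.0890]]])
print(xyz_to_linear_srgb(white).round(3))  # [[[1. 1. 1.]]]
```

A gamma (transfer-function) step would normally follow to obtain display-ready sRGB values.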
This channel is sometimes called albedo. It contains the color image without the contribution of the lights, and is useful as a reference when the color image is not generated.
It contains the material roughness for every pixel, with a value between 0 and 1. A material with a roughness of 1 produces white pixels, whereas a material with a roughness of 0 produces black pixels.
In this channel Anyverse encodes the velocity vector of every pixel in the world coordinate system. The vectors are non-zero for objects that are moving in the scene when the sample is taken. Useful for dynamic use cases.
The raw image coming out of the sensor simulation, without any ISP processing. Useful if you have your own ISP simulation to generate the final color image.
This channel contains a value between 0 and 1 representing the distance of every pixel to the camera. These values can easily be converted to meters. Useful for training AI models that deal with distance estimation of objects.
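The exact mapping from normalized values to meters depends on the camera and export settings; as an illustration only, the sketch below assumes a simple linear mapping between hypothetical near and far planes.

```python
import numpy as np

# Toy normalized depth channel with values in [0, 1].
depth_norm = np.array([[0.0, 0.5], [0.25, 1.0]])

# Assumed clipping planes in meters (hypothetical values; take the
# real ones from your camera/export configuration).
near_m, far_m = 0.1, 100.0

# Linear remap from [0, 1] to [near_m, far_m].
depth_m = near_m + depth_norm * (far_m - near_m)
print(depth_m)  # [[  0.1    50.05 ] [ 25.075 100.   ]]
```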
For every pixel, this channel contains the normal vector of the surface (geometry) that the pixel belongs to, in the world reference system.
For every pixel, this channel contains the normal vector of the surface that the pixel belongs to, including the texture effect, in the world reference system.
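As one example of what normal channels enable, a Lambertian term n·l previews how each surface responds to a directional light. The normals and light direction below are made-up unit vectors; only the dot-product-and-clamp pattern is the point.

```python
import numpy as np

# Toy 1x2 normal image in world coordinates (unit vectors).
normals = np.array([
    [[0.0, 0.0, 1.0], [0.0, 1.0, 0.0]],
])
light_dir = np.array([0.0, 0.0, 1.0])  # directional light along +z

# Lambertian shading: clamp negatives, since surfaces facing away
# from the light receive no direct illumination.
shading = np.clip(normals @ light_dir, 0.0, 1.0)
print(shading)  # [[1. 0.]]
```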
for every stage of your advanced perception system development cycle with Anyverse™.