
A flexible and accurate synthetic data generation platform

Craft the data you need for your perception system in minutes. Design scenarios for your use case with endless variations. Generate your datasets in the cloud.

The Anyverse Platform

Anyverse offers a scalable synthetic data software platform to design, train, validate, or fine-tune your perception system. It provides unparalleled computing power in the cloud to generate all the data you need in a fraction of the time and cost of traditional real-world data workflows.
Hyperspectral (pixel-accurate) render engine

Physics-based sensor simulation

Procedural (API-based) scene generation

Graphical interface for dataset development

Built-in assets library

Scalable cloud architecture

A flexible architecture

Anyverse provides a modular platform that enables efficient scene definition and dataset production. Anyverse™ Studio is a standalone graphical interface application that manages all Anyverse functions, including scenario definition, variability settings, asset behaviors, dataset settings, and inspection. Data is stored in the cloud, and the Anyverse cloud engine is responsible for final scene generation, simulation, and rendering. It produces datasets using a distributed rendering scheme.


Anyverse Studio

Anyverse Studio is the user interface for scene generation that lets users recreate a diverse synthetic reality under static and dynamic conditions. Whether you need static annotated images or sequences, use Anyverse’s extensive asset library to compose a 3D scene. Apply dynamic behaviors and program the environmental variability you need with Python scripts.

Produce your datasets in the cloud and explore the results in Anyverse Studio, including all the associated ground truth data.
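
Anyverse’s Python API is not documented on this page, so the snippet below is only a sketch: the `anyverse` module and every call in it (`load_scene`, `set_weather`, `set_sun`, `generate`) are hypothetical stand-ins, meant to show how a short script could sweep environmental variability across a batch of scene variations.

```python
# Illustrative sketch only: the `anyverse` module and its functions
# (load_scene, set_weather, set_sun, generate) are hypothetical
# stand-ins for Anyverse's actual Python API.
import itertools
import anyverse  # hypothetical module

WEATHERS = ["clear", "overcast", "rain", "fog"]
SUN_ELEVATIONS = [10, 30, 60]  # degrees above the horizon

scene = anyverse.load_scene("urban_intersection")  # hypothetical scene name
for weather, elevation in itertools.product(WEATHERS, SUN_ELEVATIONS):
    scene.set_weather(weather)
    scene.set_sun(elevation_deg=elevation)
    # Queue one variation of the scene for cloud rendering.
    scene.generate(output=f"dataset/{weather}_{elevation}")
```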

Camera specs

GUI scene design

Sensor & ISP definition

2D & 3D viewports

3D assets

Dynamic behaviors

Weather conditions

Light conditions

Python API

Endless variability

Script-based dataset generation

Ground-truth channels

Anyverse’s datasets include color images, raw hyperspectral data, annotations, and pixel segmentation.

Dataset metadata:
Annotations, Color, Object Segmentation, Instance Segmentation, Material Segmentation, 3D Position, Radiance, Reflectance, Roughness, Motion Vectors, RAW data (before and after sensor), Depth, Surface normals, Light normals.
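
The on-disk layout of these channels is not specified on this page; the sketch below assumes hypothetical per-frame files (a color PNG plus depth and instance-segmentation arrays) purely to illustrate that every ground-truth channel aligns pixel for pixel with the color image.

```python
# Sketch with assumed file names and formats; Anyverse's actual dataset
# layout may differ. It only illustrates that ground-truth channels are
# aligned pixel-for-pixel with the color image.
import numpy as np
from PIL import Image

color = np.asarray(Image.open("frame_0001_color.png"))     # H x W x 3
depth = np.load("frame_0001_depth.npy")                    # H x W, meters
instances = np.load("frame_0001_instance_seg.npy")         # H x W, int IDs

assert color.shape[:2] == depth.shape == instances.shape

# Example: mean distance to each annotated instance.
for obj_id in np.unique(instances):
    if obj_id == 0:  # assume 0 marks the background
        continue
    print(obj_id, depth[instances == obj_id].mean())
```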


Built-in assets library

Anyverse provides a built-in assets library with all kinds of assets and objects ready to populate any scenario for a variety of use cases. All assets, including materials, are classified and categorized for easy and intuitive management.

Hyperspectral render engine

Anyverse implements a pure spectral path-tracing engine that computes the spectral radiance of every light beam, simulating lights, cameras and materials with physical accuracy. This allows for a precise simulation of the amount of light reaching the camera sensor to generate a final image containing full spectral information per pixel.
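
In standard notation (not taken from Anyverse documentation), a pure spectral path tracer estimates the rendering equation independently at each wavelength λ rather than per RGB channel:

```latex
% Spectral form of the rendering equation: outgoing radiance at point x
% in direction w_o, evaluated separately at each wavelength lambda.
L_o(\mathbf{x}, \omega_o, \lambda) = L_e(\mathbf{x}, \omega_o, \lambda)
  + \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o, \lambda)\,
    L_i(\mathbf{x}, \omega_i, \lambda)\,
    (\omega_i \cdot \mathbf{n})\, \mathrm{d}\omega_i
```

Carrying λ through every bounce is what preserves full spectral information per pixel by the time the light reaches the sensor.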

Custom lens model

Hyperspectral (256-band sampling)

Motion blur

Rolling shutter

Complex environments (sky and weather)

High bit-depth image output

Raw sensor data

Photometric accuracy

Anyverse simulates light as an electromagnetic wave across the entire rendering pipeline, through to the final digital values.

Light sources, including the sky and sun, are modeled through their characteristic spectrum profile that depends on the type of light source and temperature (LED, incandescent, etc.). Materials are also physically modeled using BSDF functions.
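
The exact emitter profiles Anyverse uses are not given here, but Planck’s law is the textbook starting point for an incandescent-style spectrum; the sketch below samples a 2856 K blackbody (roughly CIE Illuminant A) across the visible band at the engine’s 256-band resolution.

```python
import numpy as np

# Planck's law: spectral radiance of a blackbody at temperature T.
# Illustrative only; Anyverse's actual emitter models are not shown here.
H = 6.626e-34    # Planck constant (J*s)
C = 2.998e8      # speed of light (m/s)
KB = 1.381e-23   # Boltzmann constant (J/K)

def blackbody_radiance(wavelength_m, temp_k):
    """Spectral radiance in W / (sr * m^3) at the given wavelength."""
    a = 2.0 * H * C**2 / wavelength_m**5
    return a / np.expm1(H * C / (wavelength_m * KB * temp_k))

# Sample the visible band at 256 wavelengths, matching the engine's
# 256-band sampling, for a 2856 K incandescent-style source.
wavelengths = np.linspace(380e-9, 780e-9, 256)
profile = blackbody_radiance(wavelengths, 2856.0)
profile /= profile.max()   # normalize to a relative spectral profile
```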

Sensor simulation

Accurate modeling of sensors is essential to guarantee a high degree of generalization and reduce the domain gap.

Anyverse combines a physical description of lights and materials in a 3D scene with a detailed hyperspectral simulation of sensor intrinsics. The sensor pipeline calculates the propagation of energy per wavelength, accounting for many subtle physical effects, such as the conversion of photons into voltage and then into RAW digital values, before producing a final image.
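
As a minimal sketch of that photon-to-RAW chain (with illustrative parameter values, not Anyverse’s actual pipeline), the model below applies quantum efficiency, shot noise, full-well clipping, read noise, and ADC quantization:

```python
import numpy as np

rng = np.random.default_rng(0)

def sensor_response(photons, qe=0.6, well_capacity=10_000,
                    read_noise_e=2.0, conversion_gain=0.1, bit_depth=12):
    """Toy photon-to-DN pipeline; parameter values are illustrative."""
    # Photon shot noise: photon arrivals are Poisson distributed.
    arrived = rng.poisson(photons)
    # Quantum efficiency converts photons into photo-electrons.
    electrons = rng.binomial(arrived, qe)
    # Full-well capacity clips the charge a pixel can hold.
    electrons = np.minimum(electrons, well_capacity)
    # Read noise (Gaussian, in electrons) is added at readout.
    electrons = electrons + rng.normal(0.0, read_noise_e, electrons.shape)
    # Conversion gain (DN per electron) and ADC quantization.
    dn = np.clip(electrons * conversion_gain, 0, 2**bit_depth - 1)
    return dn.astype(np.uint16)

# A 4x4 patch receiving ~5000 photons per pixel during the exposure.
raw = sensor_response(np.full((4, 4), 5000.0))
```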

A variety of sensors and lenses in visible and non-visible (near infrared) bands are supported.


Anyverse performs optical simulation using advanced path-tracing technology, capturing effects such as extreme optics distortion, lens shading, lens blurring (depth of field), complex assembly of multiple lenses, and more.
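
Anyverse traces rays through the lens assembly itself rather than applying a parametric model, but the classic Brown-Conrady radial model is a compact way to see the kind of distortion being reproduced; the sketch below is illustrative only.

```python
import numpy as np

def radial_distort(x, y, k1, k2, k3):
    """Brown-Conrady radial distortion of normalized image coordinates.
    Anyverse simulates the physical lens stack instead; this parametric
    model only illustrates the effect being captured."""
    r2 = x**2 + y**2
    scale = 1.0 + k1 * r2 + k2 * r2**2 + k3 * r2**3
    return x * scale, y * scale

# Barrel distortion (k1 < 0) pulls edge points toward the image center.
xs = np.linspace(-1.0, 1.0, 5)
print(radial_distort(xs, np.zeros_like(xs), k1=-0.3, k2=0.05, k3=0.0))
```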

Pixel size

Exposure time

Fill factor

Well capacity

Noise

QE curves

Gamma

Analog offset/gain

Infrared filter

Low-pass filter

Color filter array

Conversion gain

White balance

RGB to XYZ matrix

XYZ to device matrix

Pixel vignetting


FREE EBOOK

Anyverse camera sensor simulation

Learn everything you need to know about our physically-based sensor simulation.

LiDAR

Anyverse performs LiDAR simulation using its core ray tracing engine, which accurately tracks the interaction of beams emitted from the LiDAR sensor with different objects and materials in the scene. Mechanical components involved in the emission and reception of laser beams are approximated by different functions to match the scanning pattern of the physical LiDAR sensors.

Anyverse can combine camera and LiDAR simulation simultaneously. This means that for every color image a corresponding LiDAR point cloud can be produced. This enables sensor fusion algorithms to use combined LiDAR and camera sensor inputs.

Different parameters can be configured to simulate a specific LiDAR, including spinning, solid-state, and flash LiDAR.

The point cloud data can be generated in different file formats. For every 3D point, rich scene information is supplied, such as position, distance, object ID, material ID, 3D bounding boxes, speed (in the case of dynamic objects), etc. 
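
Exported schemas vary by file format, so the field names below are assumptions for illustration; a structured array shows how per-point ground truth such as object ID, material ID, and speed can travel alongside the 3D position.

```python
# Assumed field names for illustration; Anyverse's exported point-cloud
# schema may differ. A structured array keeps per-point ground truth
# alongside the XYZ position.
import numpy as np

point_dtype = np.dtype([
    ("xyz", np.float32, 3),       # 3D position in the sensor frame (m)
    ("distance", np.float32),     # range from the LiDAR origin (m)
    ("object_id", np.int32),      # instance the beam hit
    ("material_id", np.int32),    # material at the hit point
    ("speed", np.float32),        # speed of dynamic objects (m/s)
])

cloud = np.zeros(2, dtype=point_dtype)
cloud[0] = ((1.2, 0.4, 8.5), 8.6, 17, 3, 12.4)
cloud[1] = ((0.0, -0.2, 3.1), 3.1, 5, 1, 0.0)

# Example: keep only returns from moving objects.
moving = cloud[cloud["speed"] > 0.0]
```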

Start generating hyperspectral synthetic data for every stage of your advanced perception system development cycle with Anyverse™.
