ANYVERSE is a breakthrough synthetic data solution for AD/ADAS Perception.
ANYVERSE engineers can team up with your machine learning, perception or simulation experts to configure the ANYVERSE solution based on your specifications (scene features, environment, camera specs) and produce the datasets needed.
The ANYVERSE software platform offers a scalable, cloud-based, high-fidelity synthetic dataset production environment. This model lets you manage the data production process and integrate the solution into your Perception Training pipeline.
ANYVERSE contains a number of base scenarios that can be modified to add pedestrians and/or vehicles in a dynamic manner. Current scene types include parking, urban, suburban and highway. In addition, custom scenarios can be produced by our 3D specialists.
ANYVERSE includes a library of over 1,000 high-quality assets, such as vehicles, pedestrians, buildings, traffic signs, lanes, vegetation and street furniture. ANYVERSE can automatically produce random variations of scenes populated with these assets in static or dynamic conditions.
ANYVERSE generates plausible driving scenarios automatically, using a proprietary procedural engine. Scenarios can vary from urban to suburban or rural environments. The scene engine is compatible with OpenDRIVE and OpenStreetMap.
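Since the scene engine is compatible with OpenDRIVE, road networks can be inspected with standard tooling. As a minimal sketch (not ANYVERSE's own API), the following reads an OpenDRIVE XML document and lists its roads; the sample fragment is illustrative only.

```python
import xml.etree.ElementTree as ET

def summarize_opendrive(xodr_text: str):
    """Return (road_id, length_m) pairs from an OpenDRIVE document."""
    root = ET.fromstring(xodr_text)
    return [(road.get("id"), float(road.get("length", 0.0)))
            for road in root.iter("road")]

# Minimal OpenDRIVE fragment for illustration.
sample = """<OpenDRIVE>
  <road name="Main St" id="1" length="120.5"/>
  <road name="Side St" id="2" length="40.0"/>
</OpenDRIVE>"""

print(summarize_opendrive(sample))  # [('1', 120.5), ('2', 40.0)]
```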
Agent-based models produce dynamic conditions for traffic and pedestrians with minimal user intervention. Cars and pedestrians react automatically to changing conditions such as traffic light states, cross streets, etc.
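To illustrate the agent-based idea, here is a toy rule (not ANYVERSE's actual model) for a car agent that brakes when a red light falls within its stopping distance; the deceleration value and one-second update step are assumptions for the sketch.

```python
def agent_speed(current_speed, light_state, dist_to_light, decel=3.0):
    """Toy agent rule: brake for a red light when it is within the
    stopping distance implied by the current speed.

    Speeds in m/s, distances in m; one call advances the agent by 1 s.
    """
    stopping = current_speed ** 2 / (2 * decel)   # v^2 / (2a)
    if light_state == "red" and dist_to_light <= stopping:
        return max(0.0, current_speed - decel)    # one 1 s step of braking
    return current_speed

print(agent_speed(10.0, "red", 5.0))    # 7.0 (braking)
print(agent_speed(10.0, "green", 5.0))  # 10.0 (no change)
```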
ANYVERSE features physics-based synthetic image generation technology that produces highly realistic images and sequences of unprecedented quality. The engine uses an accurate light transport model that operates across the full spectral range (32-bit) and relies on physical descriptions of lights, materials, and sensors.
ANYVERSE materials are defined by their spectral Bidirectional Scattering Distribution Functions (BSDFs). A multi-layer model makes it possible to stack layers of materials for complex surfaces and to simulate wet, dirt or corrosion effects. Thin coatings are also available for subtle, realistic effects such as film interference.
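A multi-layer material stack can be pictured as an ordered list of layers, from outermost coating to base. The sketch below is a purely hypothetical description (the layer names, fields and the `paint_red.spd` reflectance file are illustrative, not the real ANYVERSE format), showing how a thin-film coating, a clear coat and a spectral base layer might be composed.

```python
# Hypothetical layered-material description; the actual ANYVERSE
# material format may differ. Layers are ordered outermost -> base.
car_paint = {
    "layers": [
        {"type": "coating",    "model": "thin_film", "thickness_nm": 350},
        {"type": "clear_coat", "roughness": 0.05},
        {"type": "base",       "model": "spectral_bsdf",
         "reflectance_spd": "paint_red.spd"},   # illustrative file name
    ]
}
```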
Camera / Sensors
A physics-based camera model lets you set the lens type (fisheye, pinhole, 360, custom), the sensor response and other camera features. Raw 32-bit sensor data can be exported in XYZ, spectral and RGB formats.
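Raw XYZ sensor data can be converted downstream to a standard colour space. As a small, self-contained example (not part of ANYVERSE), this applies the standard CIE XYZ (D65) to linear sRGB matrix to one pixel.

```python
def xyz_to_linear_srgb(x, y, z):
    """Convert one CIE XYZ (D65) pixel to linear sRGB.

    Uses the standard IEC 61966-2-1 matrix; gamma encoding and
    gamut clipping are deliberately left out of this sketch.
    """
    r =  3.2406 * x - 1.5372 * y - 0.4986 * z
    g = -0.9689 * x + 1.8758 * y + 0.0415 * z
    b =  0.0557 * x - 0.2040 * y + 1.0570 * z
    return (r, g, b)

# The D65 white point maps to (approximately) white.
print(xyz_to_linear_srgb(0.9505, 1.0, 1.089))
```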
An arbitrary number of cameras can be defined, each positioned relative to the car body, allowing for complex camera rigs.
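Defining cameras relative to the car implies a per-camera coordinate transform. A minimal sketch of that idea, assuming a flat yaw-only mounting (real rigs would use a full 3D rotation):

```python
import math

def vehicle_to_camera(point, cam_pos, cam_yaw_deg):
    """Express a vehicle-frame point in a camera frame, given the
    camera's mounting position and yaw (rotation about the vertical
    axis). Yaw-only on purpose; a full rig would use a 3D rotation.
    """
    dx = point[0] - cam_pos[0]
    dy = point[1] - cam_pos[1]
    dz = point[2] - cam_pos[2]
    yaw = math.radians(cam_yaw_deg)
    # Rotate by -yaw so the camera's forward axis becomes +x.
    cx =  math.cos(yaw) * dx + math.sin(yaw) * dy
    cy = -math.sin(yaw) * dx + math.cos(yaw) * dy
    return (cx, cy, dz)
```

For a side-facing camera (yaw 90°), a point 10 m to the vehicle's left lands straight ahead of that camera.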
The simulation of LiDAR sensors in ANYVERSE uses the same ray-tracing algorithm as the rendering engine, computing the interaction of light rays cast from the simulated LiDAR device with all objects and materials in the scene. Presets for commercial LiDAR devices are available.
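The ray-casting idea can be shown with a toy flat-ground case (this is an illustration of the principle, not ANYVERSE's tracer): one ring of rays at a fixed downward elevation is intersected with a ground plane, yielding hit points.

```python
import math

def ground_returns(sensor_height, elevation_deg, azimuths_deg,
                   max_range=100.0):
    """Cast one ring of LiDAR rays from `sensor_height` metres above a
    flat ground plane; return (x, y, 0) hit points, skipping misses
    and returns beyond `max_range`.
    """
    el = math.radians(elevation_deg)
    points = []
    for az_deg in azimuths_deg:
        if el >= 0:                           # upward ray never hits ground
            continue
        r = sensor_height / -math.sin(el)     # range along the ray to z = 0
        if r > max_range:
            continue
        az = math.radians(az_deg)
        ground = r * math.cos(el)             # horizontal distance
        points.append((ground * math.cos(az), ground * math.sin(az), 0.0))
    return points
```

A real tracer replaces the closed-form plane intersection with ray/scene intersection against every object and material, as the paragraph above describes.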
ANYVERSE can produce unlimited, physically correct variations of lighting conditions without re-simulating the scene. Arbitrary light variations in a collection of images or in a sequence can be easily produced; for example, a daylight sequence can be transformed into a night sequence.
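One common way such relighting works (assumed here for illustration; the source does not detail ANYVERSE's mechanism) is to store the radiance contribution of each light group separately and recombine the buffers with new weights, since light transport is linear in the emitters.

```python
def remix_lighting(per_light_radiance, weights):
    """Recombine per-light radiance buffers with new weights.

    `per_light_radiance` maps a light-group name to a flat radiance
    buffer; missing weights default to 0 (light switched off).
    """
    n = len(next(iter(per_light_radiance.values())))
    out = [0.0] * n
    for name, buffer in per_light_radiance.items():
        w = weights.get(name, 0.0)
        for i, v in enumerate(buffer):
            out[i] += w * v
    return out

# Turn the sun off and double the street lights: day -> night.
buffers = {"sun": [1.0, 2.0], "street": [0.5, 0.5]}
print(remix_lighting(buffers, {"street": 2.0}))  # [1.0, 1.0]
```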
ANYVERSE can simulate numerous environment settings and atmospheric effects to prepare the perception model for dense fog, heavy rain, and snowy or icy environments. Physics-based algorithms are used to simulate rain, splashes, snow, mud, etc.
Datasets come with automatically generated, pixel-accurate data channels including depth, object ID, material ID, instance ID, 3D motion vectors, surface normals, roughness, 3D positions, and radiance. Object ID and material ID ontologies are included for proper classification.
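Per-pixel ID channels are straightforward to consume downstream. As a small example (consumer-side code, not part of ANYVERSE), this counts pixels per instance in an instance-ID channel, which is a typical first step when filtering small or occluded objects:

```python
from collections import Counter

def instance_histogram(id_channel):
    """Count pixels per instance ID in a per-pixel ID channel
    represented as a 2D list (rows of integer IDs)."""
    counts = Counter()
    for row in id_channel:
        counts.update(row)
    return dict(counts)

channel = [[0, 0, 7],
           [7, 7, 3]]
print(instance_histogram(channel))  # {0: 2, 7: 3, 3: 1}
```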
Annotated data is available in JSON, XML and Google Protocol Buffers formats. Annotations include 2D/3D bounding boxes with object identification, camera position and orientation, time of day, weather conditions, and other simulation settings.
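JSON annotations of this kind are easy to query in a training pipeline. The record below uses a hypothetical schema (field names like `objects` and `bbox2d` are assumptions, not the documented ANYVERSE format) to show extracting the 2D boxes of one class:

```python
import json

# Hypothetical annotation record; the real ANYVERSE schema may differ.
sample = json.loads("""{
  "frame": 42,
  "weather": "rain",
  "objects": [
    {"class": "car",        "bbox2d": [100, 50, 180, 120]},
    {"class": "pedestrian", "bbox2d": [300, 60, 320, 140]}
  ]
}""")

def boxes_for(annotation, cls):
    """Return (x_min, y_min, x_max, y_max) boxes for one object class."""
    return [o["bbox2d"] for o in annotation["objects"] if o["class"] == cls]

print(boxes_for(sample, "car"))  # [[100, 50, 180, 120]]
```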