Making machine perception real
At ANYVERSE we aim to make the training and testing of perception-based systems cost-effective, faster, and more accurate by providing high-fidelity synthetic dataset software solutions.
High-Fidelity visual simulation

The ANYVERSE synthetic dataset solution offers the speed, scalability, and visual fidelity machine-learning teams need to progress rapidly to high-confidence perception models. ANYVERSE simulates the visual appearance of the real world with greater accuracy and variability than is practical with other approaches. It is the ideal software platform for perception teams working on autonomous vehicles, driver assistance (ADAS), and autonomous robotics.

Our motivation

There is a compelling need for safety-level accuracy in autonomous systems across all environments. This requires rich modeling of digital scenarios, including lighting, weather, varying physical conditions, color ranges, and behaviors. That, in turn, calls for solutions that can efficiently and precisely cover the full range of real-life conditions. ANYVERSE builds on Next Limit’s 20 years of experience in computer graphics and 3D simulation. We have combined our core technologies to accelerate the development of current and future smart autonomous systems.

Our difference

High Fidelity Rendering

Game engines do not accurately mirror reality. ANYVERSE’s physics-based, unbiased spectral renderer delivers accurate visual quality and a faithful representation of lighting and the environment.

Faster Training Cycles

Speeding up AI training cycles is critical. With ANYVERSE, faster perception training iterations are possible, giving you a competitive advantage over teams relying exclusively on real-world datasets.

High Confidence

ANYVERSE is an agile solution for making autonomous systems safer and more reliable. Simulating visually challenging conditions is key to increasing the confidence and robustness of the perception system.

Flexible Solution

The best part – there is no need to master complex software. Clients can team up with our engineers to produce custom datasets or get access to a highly scalable cloud platform to produce the datasets they need.

Core Technology

Scenes & Materials

  • Predefined and procedural generation of scenes for urban and suburban areas, with geographical variations.
  • Libraries for objects of training interest (traffic lights, signs, vehicles, pedestrians, etc.)
  • Physically-based materials for accurate optical simulation of objects and surfaces. 
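As an illustration of the procedural-generation idea above, here is a minimal, hedged sketch (hypothetical names, not the ANYVERSE API): a seeded generator places training-relevant objects along a stretch of road, so the same seed reproduces the same scene while new seeds produce variation.

```python
import random

# Hypothetical asset catalog for illustration only.
ASSETS = ["traffic_light", "stop_sign", "parked_car", "pedestrian"]

def generate_scene(seed, road_length_m=200.0, density_per_100m=6.0):
    """Place objects along a road; a fixed seed yields a reproducible scene."""
    rng = random.Random(seed)
    count = int(road_length_m / 100.0 * density_per_100m)
    scene = []
    for _ in range(count):
        scene.append({
            "asset": rng.choice(ASSETS),
            "position_m": round(rng.uniform(0.0, road_length_m), 1),
            "side": rng.choice(["left", "right"]),
        })
    return scene

# Identical seeds reproduce identical scenes; changing the seed varies them.
scene_a = generate_scene(seed=42)
scene_b = generate_scene(seed=42)
```

Seeded generation is what makes large-scale dataset variation controllable: every scene in a dataset can be regenerated exactly from its seed.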

Physics-based Rendering

  • Unbiased, spectral, high-dynamic-range render engine able to accurately reproduce even the most challenging visual conditions.
  • Physically accurate atmospheric model (sun position, clouds, air pollution, etc.)
  • Physics-based weather simulation including optical behavior: rain, water, sand, snow, mud.
  • Fast generation of lighting variations.
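To illustrate what "spectral" rendering means in the list above (a hedged sketch, not ANYVERSE’s implementation): a spectral renderer carries radiance per wavelength and only converts to a color image at the end, typically by integrating against the CIE 1931 color-matching functions and mapping XYZ to sRGB. The single-Gaussian fits below are a deliberately crude stand-in for the real tabulated functions.

```python
import math

def gauss(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2)

# Crude single-Gaussian approximations of the CIE 1931 color-matching
# functions (illustrative only; production renderers use tabulated data).
def cie_xyz_bar(lam):
    x = 1.056 * gauss(lam, 599.8, 37.9) + 0.362 * gauss(lam, 442.0, 16.0)
    y = 1.014 * gauss(lam, 556.3, 46.0)
    z = 1.839 * gauss(lam, 449.8, 22.0)
    return x, y, z

def spectrum_to_xyz(spd, lam_min=380, lam_max=780, step=5):
    """Integrate a spectral power distribution spd(lambda, in nm) to XYZ."""
    X = Y = Z = 0.0
    lam = lam_min
    while lam <= lam_max:
        power = spd(lam)
        xb, yb, zb = cie_xyz_bar(lam)
        X += power * xb * step
        Y += power * yb * step
        Z += power * zb * step
        lam += step
    return X, Y, Z

def xyz_to_linear_srgb(X, Y, Z):
    # Standard XYZ -> linear sRGB matrix (D65 white point).
    r =  3.2406 * X - 1.5372 * Y - 0.4986 * Z
    g = -0.9689 * X + 1.8758 * Y + 0.0415 * Z
    b =  0.0557 * X - 0.2040 * Y + 1.0570 * Z
    return r, g, b

# A flat (equal-energy) spectrum should land close to neutral after
# normalizing luminance.
X, Y, Z = spectrum_to_xyz(lambda lam: 1.0)
r, g, b = xyz_to_linear_srgb(X / Y, 1.0, Z / Y)
```

Keeping the pipeline spectral until this final conversion is what lets effects such as chromatic dispersion and narrow-band light sources come out physically plausible rather than baked into three RGB channels.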

Sensor Model

  • Accurate simulation of the photographic process of single or multiple cameras. 
  • Camera geometry: aperture, shutter speed, focal length, diaphragm shape, etc.
  • Lens modeling: shape, diffraction, chromatic effects.
  • Custom lens models.
  • CCD sensor modeling: sensor size, response curves.
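The geometric core of the camera simulation above can be sketched with a pinhole model (an illustrative, simplified example with hypothetical names; the full sensor model also covers lens shape, diffraction, and chromatic effects): focal length and physical sensor size together determine how a 3D point maps to a pixel.

```python
from dataclasses import dataclass

@dataclass
class PinholeCamera:
    focal_length_mm: float    # lens focal length
    sensor_width_mm: float    # physical sensor ("film") width
    sensor_height_mm: float
    image_width_px: int
    image_height_px: int

    def project(self, x, y, z):
        """Project a camera-space point (meters, z > 0 ahead) to pixel coords."""
        if z <= 0:
            raise ValueError("point is behind the camera")
        # Pinhole model: position on the image plane, in mm.
        u_mm = self.focal_length_mm * (x / z)
        v_mm = self.focal_length_mm * (y / z)
        # Convert mm on the sensor to pixels, origin at the image center;
        # the image y-axis points down.
        px = self.image_width_px / 2 + u_mm * self.image_width_px / self.sensor_width_mm
        py = self.image_height_px / 2 - v_mm * self.image_height_px / self.sensor_height_mm
        return px, py

# A 35 mm lens on a full-frame (36 x 24 mm) sensor rendering at 1920x1080.
cam = PinholeCamera(focal_length_mm=35.0, sensor_width_mm=36.0,
                    sensor_height_mm=24.0, image_width_px=1920,
                    image_height_px=1080)
# A point 10 m ahead, 1 m to the right, 0.5 m up.
px, py = cam.project(1.0, 0.5, 10.0)
```

The same structure extends naturally to the listed lens and CCD effects: real simulators replace the ideal pinhole with a lens model (distortion, diffraction, chromatic aberration) and pass the result through sensor response curves.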
