
Example workspace for AV/ADAS use cases

Introduction

This workspace demonstrates how to build a synthetic dataset of images for common AV/ADAS use cases. It includes a camera attached to the ego vehicle, with pedestrians and vehicles positioned in front of it. The base scene is a simple urban layout of four street blocks. Notably, the workspace is configured to randomly vary the models of vehicles, pedestrians, and street blocks, and even the time of day, during dataset production. This shows how easily variability, a crucial factor in training machine learning models, can be introduced.

Screenshot of Anyverse Studio when the workspace is loaded

When you open the workspace, you’ll see a warning icon next to the “Assets” node on the right side. This happens because one or more assets used in the workspace have been updated since they were added. But don’t worry, it’s not a problem at all. The workspace will work just fine without updating the assets to their latest versions.

Components

Base Scene

One of the first things to do when creating a workspace for AV/ADAS is choosing the base scene. Navigate to the “Resources” panel and select the “Scene” category. Double-click the scene to add it to the workspace; it will then appear under the “Scenes” node. Next, set it as the base scene: click on the “Simulation” node, open the properties panel, and in the “Environment” section locate the “Scene” property. Click the “No entity” button to browse for the scene.

Screenshot showing how to set the base scene for our workspace

Camera

One of the key components in our scene is the camera, which is typically affixed to the “Ego” node. This allows us to attach multiple cameras that move in accordance with the “Ego” node. To create a camera, simply right-click on the “Ego” node and select the “Create Camera” option.

When the camera is created, its main components are created and attached to it automatically. For this workspace we use a Sony IMX265 sensor. For the ISP (Image Signal Processor) we use a basic configuration where only demosaicing and bit depth are applied. For the lens we use a default pinhole model. Note that the sensor, ISP, and lens are added to the workspace under their respective nodes; you can click on them to see their parameters.

Camera with references to the sensor, ISP and lens

City blocks

Once the base scene is established within the “Simulation” node, the road layout becomes visible in both the 2D and 3D viewport. Alongside the road layout, our workspace incorporates a powerful feature in Anyverse Studio called “locators”. These locators are special nodes that serve as placeholders for positioning other objects. By utilizing locators, users can easily and precisely position objects within the workspace. To access the locators, navigate to the “Scene Locators” node, found under the “Simulation” node.

The “locator” concept proves to be advantageous for integrating street blocks into our base scene. Upon accessing the “Scene Locators” node, all available locators within the scene become visible. Specifically, the locators with the “BLOCK_” prefix will be used as placement points for street blocks. Our approach involves importing street block assets into the workspace and dropping them under the corresponding locator nodes, resulting in precise placement within the base scene. To conveniently select street blocks, go to the “Resources” panel, specifically the “Asset” category. Apply the tags “central” and “parallax” to locate street blocks intended for the “BLOCK_CENTRAL_” locators. Tags like “northsouth,” “eastwest,” and “corner” can be used to identify street blocks suitable for other locator categories.

Tags to filter the street blocks we are going to use in our workspace
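The mapping between locator prefixes and asset tags can be sketched in a few lines of Python. This is purely illustrative: the “BLOCK_CENTRAL_” prefix and the “central”/“parallax”, “northsouth”, “eastwest”, and “corner” tags come from the description above, but the exact prefixes for the other locator categories, and the mapping itself, are assumptions, not part of the Anyverse API.

```python
# Hypothetical mapping from scene locator name prefixes to the asset tags
# used to filter street blocks in the "Resources" panel. Only the
# 'BLOCK_CENTRAL_' prefix is confirmed by the article; the others are
# illustrative placeholders.
PREFIX_TO_TAGS = {
    'BLOCK_CENTRAL_': ['central', 'parallax'],
    'BLOCK_NORTHSOUTH_': ['northsouth'],
    'BLOCK_EASTWEST_': ['eastwest'],
    'BLOCK_CORNER_': ['corner'],
}

def tags_for_locator(locator_name):
    """Return the asset tags matching a locator name, or None if it is not a block locator."""
    for prefix, tags in PREFIX_TO_TAGS.items():
        if locator_name.startswith(prefix):
            return tags
    return None
```

With a mapping like this in place, a script could look up which tags to search for given any block locator found in the scene.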

Individual assets can be selected and dropped onto the desired locator, and the street block immediately appears in both the 2D and 3D viewports. However, this process becomes cumbersome with a large number of locators, potentially hundreds or even thousands in a base scene. In the next section, we will see how to streamline the workflow by using the scripting capabilities of Anyverse Studio to populate the base scene with trees.

Workspace with all the street blocks added

Trees

Similar to the process with street blocks, we have the option to manually drag and drop trees from the “Resources” panel into the “TREE_” locators. However, this approach can be time-consuming. The most efficient method is to utilize the scripting capabilities of Anyverse Studio. Before proceeding with the script, we need to ensure that the tree asset is available. While it’s possible to randomly select the tree asset within the script, for the sake of simplicity, we will handle this step outside the script.

Within the “Resources” panel, locate the specific tree asset named “American_Sycamore_Sapling_Alter_Winter_High”. Simply type the name into the “Search…” field, and the asset will appear automatically. Drag and drop it into the “Assets” node within the workspace.

Tree asset used in the script

Now that everything is prepared, we are ready to execute the script.

# Look up the tree asset previously added to the workspace
asset_tree_id = workspace.get_entities_by_name('American_Sycamore_Sapling_Alter_Winter_High')[0]

# Iterate over all locators and attach a tree to each one
# reserved for trees (names starting with 'TREE_locator_')
locators = workspace.get_entities_by_type(anyverse_platform.WorkspaceEntityType.Locator)
for locator in locators:
    locator_name = workspace.get_entity_name(locator)
    if locator_name.startswith('TREE_locator_'):
        workspace.create_fixed_entity('tree', locator, asset_tree_id)

This script retrieves the tree asset and iterates through all the locators, adding a tree object that automatically attaches to each locator. To run the script, press “Ctrl+R” or click on the green play button located directly below the code editor.

Script to add trees to the base scene using the scene locators
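The same pattern generalizes to any locator prefix and asset. Below is a sketch of a reusable helper: the `workspace` and `anyverse_platform` calls mirror the ones in the tree script above, but the function name, parameters, and the pure `filter_by_prefix` helper are illustrative, not part of the Anyverse API.

```python
def filter_by_prefix(names, prefix):
    """Pure helper: keep only the names starting with the given prefix."""
    return [n for n in names if n.startswith(prefix)]

def populate_locators(workspace, anyverse_platform, prefix, asset_name, child_name):
    """Hypothetical sketch: attach one instance of an asset to every
    locator whose name starts with the given prefix, using the same
    API calls as the tree script above."""
    asset_id = workspace.get_entities_by_name(asset_name)[0]
    locators = workspace.get_entities_by_type(anyverse_platform.WorkspaceEntityType.Locator)
    for locator in locators:
        if workspace.get_entity_name(locator).startswith(prefix):
            workspace.create_fixed_entity(child_name, locator, asset_id)
```

For example, the tree script would then reduce to a single call such as `populate_locators(workspace, anyverse_platform, 'TREE_locator_', 'American_Sycamore_Sapling_Alter_Winter_High', 'tree')`.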

Traffic signs

For adding traffic signs to our workspace, we have a few locators available. In this case, we will follow the same procedure as we did for the street blocks, which involves dragging and dropping assets from the “Resources” panel to the “TRAFFIC_SIGN_” locators.

To keep things simple, we will use a single asset for the traffic sign. You can search for the asset by typing “stop” in the “Search…” field, and several assets containing that word will appear. Simply select the “Stop” asset and drag and drop it onto the “TRAFFIC_SIGN_” locators.

Traffic sign used in the workspace

Vehicles

Now, let’s bring some vehicles from the catalog into our workspace. The goal is to include a few different models to introduce variation. Similar to the process we followed for the scene, we need to navigate to the “Resources” panel and select the “Asset” category. With thousands of assets available, it’s best to use the “Filters” to narrow down the selection to the vehicles we are interested in. To filter out the vehicles we want, we need to set the “Tags” filter to “vehicle” and the “Attributes” filter to “resolution:Low”. This will refine the assets displayed and show us the vehicles that match our criteria.

Options to filter the vehicles we are going to use for this workspace

To utilize the vehicles in our workspace, we need to add them first. Simply select the desired vehicles from the catalog and drag and drop them onto the “Assets” node. By doing this, the assets will be ready to be incorporated into the “Simulation”. Since we will be leveraging the variation engine of Anyverse Studio to change the vehicle models, we only need to add a few of them to the “Simulation” node.

Once added to the workspace, you will be able to see the vehicles in both the 2D and 3D viewports. Each individual vehicle can be selected and moved to the desired location on the road, particularly in front of your camera.

Pedestrians

To add pedestrians, or any other type of asset, we can follow a similar process to what we did for adding vehicles. In the case of pedestrians, we are specifically interested in those with a static pose. To filter the assets accordingly, we can utilize the “character_pose” tag.

First, navigate to the “Resources” panel and select the “Asset” category. Apply the “Filters” option to narrow down the selection. Set the “Tags” filter to “character_pose” to focus on assets that have a static pose. By doing so, we can easily locate and choose the desired pedestrian assets to add to our workspace.

Options to filter the pedestrians we are going to use for this workspace

In order to incorporate pedestrians into our workspace, similar to what we did with the vehicles, we need to add them. Simply select a few pedestrians and drag and drop them onto the “Assets” node. Once done, transfer some of them to the “Simulation” node and position them within the scene.

Variability

The primary focus of this workspace revolves around demonstrating the capabilities of Anyverse Studio in generating variability. This variability can be achieved through the use of the variations engine and scripting.

In this workspace, we will be utilizing the variations engine, but before delving into it, it is important to grasp the fundamental steps and concepts involved in dataset production.

  • Iteration. One pass of the variations engine, applying a new set of changes to the workspace.
  • Capture. The acquisition of an image along with its corresponding ground-truth and annotations. For sequences, an iteration may produce multiple captures; in this workspace each iteration produces exactly one.
  • Batch. The outcome of several iterations, comprising one or more captures per iteration.
  • Dataset. A collection of batches that share the same configuration, i.e. the type of ground-truth data generated for each capture.
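The relationship between these concepts can be sketched with some simple arithmetic. The numbers below are hypothetical, except that this workspace uses one capture per iteration.

```python
# Illustrative counts only; in this workspace captures_per_iteration is 1
# because there are no sequences. All other numbers are hypothetical.
iterations = 5
captures_per_iteration = 1
captures_in_batch = iterations * captures_per_iteration   # 5 captures

# A dataset groups several batches sharing the same configuration;
# here we imagine a second batch of 10 captures.
batch_sizes = [captures_in_batch, 10]
total_captures_in_dataset = sum(batch_sizes)
```
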

The first step in producing a dataset is to create it. For this purpose, the “Generation Inspector” tool provides access to the existing datasets and the ability to create new ones.

At the top left highlighted the “Generation Inspector” tool, in the middle the panel to configure the result of a capture

With the dataset now created, our next step is to introduce variations using the variations engine. To do this, we can navigate back to the “Workspace” tool and select the “Generator” option. This particular node allows us to configure the number of iterations and other aspects related to dataset production.

Once the “Generator” node is selected, we can access the “Variations” section within the parameters panel. It is in this section that we will incorporate our desired variations. In the workspace, you can observe the variations that have been added. For the purpose of explanation, let’s focus on the variations related to the car model.

Below, you can find an example of how we configure the vehicle named “Mazda_CX7_Low” to randomly transform into one of the models from the provided list: “Lexus…”, “MG…”, and “Mercedes…”. The remaining variations follow a similar approach. It is worth noting that there is also a variation that randomly selects the time of day. To view the comprehensive list of variations, please refer to the workspace.

Configuration of the variation to change the model of a vehicle
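Conceptually, the model variation behaves like a random draw from a configured list on each iteration. The sketch below illustrates this idea in plain Python; the candidate model names are placeholders (the article truncates them), and the real list lives in the workspace's variation settings, not in code.

```python
import random

# Placeholder names: the real variation targets are configured in the
# "Variations" section of the Generator node, not here.
candidate_models = ['Lexus_model', 'MG_model', 'Mercedes_model']

def pick_model(rng=random):
    """Each iteration, the source vehicle (e.g. 'Mazda_CX7_Low') is
    conceptually replaced by a model drawn at random from the list."""
    return rng.choice(candidate_models)
```
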

The subsequent step involves specifying the desired number of iterations for the batch. Since we do not have sequences, each iteration will result in one capture. To initiate the production process immediately, we can click on the vertical dots located within the “Generator” node. This action allows us to send the batch to the dataset. It is important to select the appropriate dataset where this particular batch will be directed.

A progress panel provides updates on the process of sending the batch to the dataset. Once this task is complete, we can proceed to the “Generation Inspector” to either monitor the production status or examine the resulting outcomes.




Results

The dataset can be browsed using the “Generation Inspector”. Click on each individual capture to see the final image and the associated ground-truth; annotations are shown on the right-hand side. Right-clicking on the batch lets you download the results to your local drive.

Browsing the batches we have produced for the dataset

If you want to check the result, you can download the dataset from this link. Since we set 5 iterations, you will get 5 captures. As explained earlier, because we vary pedestrians, street blocks, vehicles, and the time of day, each capture will be different.

Color images

What’s next

This workspace provides only a glimpse of the capabilities offered by Anyverse. The list below outlines what else can be achieved, particularly within the AV/ADAS use case:

  1. The scripting capabilities of Anyverse Studio provide a powerful way to manage variations at scale.
  2. Anyverse Studio supports sequences. Using behavior trees, a base scene can be animated with vehicles and pedestrians: cars exhibit realistic behaviors such as keeping to their lanes, stopping at pedestrian crossings, turning at intersections, and more.