
Example workspace for conveyor belt use cases


In this workspace, we have created a conveyor belt system with various elements (cardboard boxes) to be examined from different perspectives. We have set up a camera rig with nine different cameras, mostly providing top-down views that capture specific details of the items moving along the belts, such as adhesive tape, shipping stickers, QR codes, barcodes, and warning labels.

A sample generated by 1 of the 9 cameras


Building an industrial unit

We will use warehouse sections to build our industrial unit (it will house a system of various conveyor belts with different shapes, angles, and paths). Starting with a blank project, just as we have done in other example workspaces, we will search for the necessary sections in the ‘Resources’ window. We can find them easily by filtering resources with the tag warehouse_structure. In this case, we only need the first two sections (double-click on them to add them to our workspace): Warehouse_Interior_base_section and Warehouse_Interior_middle_section.

Warehouse structure assets

After adding these two assets to our workspace, we will drag the asset ‘Warehouse_Interior_base_section’ into the simulation. Within the Simulation, if we expand the asset, we will find a large number of locators for populating different elements such as shelves, outlets, emergency lights, safety fences, and warehouse offices. We will find a NewStory_Locator on which we will create a middle section: right-click on NewStory_Locator > Create Fixed Entity… > Warehouse_Interior_middle_section. This will create a middle section perfectly assembled with the base.

Adding a middle section

Expanding the entity we just created, we will find a NewStory_Roof_locator, where we can create another Fixed entity by selecting Warehouse_Interior_middle_section again. This will perfectly stack a new middle section to achieve higher walls and columns.

Now let’s take a look at the structure we just created. First, we will set our three assets (the base section and both middle sections) to ‘Mesh’ visualization mode. To do this, simply click on each of the three assets (Warehouse_Interior_base_section and both Warehouse_Interior_middle_section entities) and, in the Properties panel, set the display mode to Mesh (you can perform this action with all three assets selected at once).

Once all three components are in Mesh display mode, we can press the C key or click on the ‘Frame selected entities’ icon (located at the top left of the 3D viewport) to get a view where our assets are perfectly centered:

This will provide us with a view of the base and the two middle sections perfectly stacked:

Taking a view of our location (Visualization mode: Mesh)

Placing conveyor belts

To make things easier, we will use a master asset from which we will obtain the necessary locations to place the conveyor belts as well as the cameras throughout the scene. In Resources, we will look for the asset called conveyor_system. We will add this asset, specifically created to generate a cluster of conveyor belts, to our workspace by double-clicking on it.

Resources > search for ‘conveyor_system’

Once added to our workspace, the next step is to drag it directly from Assets to the Simulation node:

Expanding the entities of this node, we get access to the locators specifically made for creating Fixed Entities on them, both to generate the conveyor belts and to place the cameras in their respective positions.

Next, we are going to include the different types of conveyor belts in our workspace: in Resources, select Label and search for ‘conveyor_belt’, then click on the resulting class and finally click on the Add Label button:

This search will return the different types of conveyor belts available in Anyverse. As we can see from their names, each of them corresponds to the locators we had expanded in the conveyor_system node:

We can select all of them at once and drag them to Assets. This will add all the conveyor belts right into our workspace:

To place each conveyor belt in its corresponding location, we could create a Fixed entity on each of the locators. The name of each locator will indicate the conveyor belt we should create. For example: Right click on conveyor_roller_02_locator > Create Fixed Entity > conveyor_roller_02, and so on.

Even better, we can create the belts programmatically, parsing the names of the conveyor_…_locator(s) to generate a Fixed Entity with the corresponding asset on each one. This way, we can replicate the same action on every locator with its matching asset.
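A minimal sketch of this name-based mapping in Python. The `create_fixed_entity` callable here is a hypothetical stand-in for the equivalent call in the Anyverse scripting API; the name-stripping logic is the part that carries over:

```python
def asset_for_locator(locator_name: str) -> str:
    """Strip the trailing '_locator' suffix to get the asset name,
    e.g. 'conveyor_roller_02_locator' -> 'conveyor_roller_02'."""
    suffix = "_locator"
    if not locator_name.endswith(suffix):
        raise ValueError(f"Not a locator name: {locator_name}")
    return locator_name[: -len(suffix)]

def populate_belts(locator_names, create_fixed_entity):
    """Create one Fixed Entity per conveyor locator, using the
    locator's own name to select the matching belt asset."""
    for name in locator_names:
        if name.startswith("conveyor_"):
            create_fixed_entity(name, asset_for_locator(name))

# Example with a stand-in creator that simply records the calls:
created = []
populate_belts(
    ["conveyor_roller_02_locator", "conveyor_curve_01_locator", "cam01_locator"],
    lambda loc, asset: created.append((loc, asset)),
)
print(created)
```

Note that the camera locator is skipped: only names starting with `conveyor_` produce a belt.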

Placing cameras

Following the same method we have used for the conveyor belts, we can create a camera for each of the camxx_locators available inside conveyor_system. In this case, right-click on cam01_locator > Create Camera:

Creating a camera on a specific locator

Similarly, we can programmatically create every camera needed on each of the locators, with the desired technical specifications. In the example scene, we have used the following parameters:
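The same pattern works for the cameras. Here is a sketch in which `create_camera` is a hypothetical stand-in for the real Anyverse call; the Full HD resolution matches the one used in this example workspace, while everything else (sensor, ISP, and other specifications) would be passed in the same way:

```python
def camera_for_locator(locator_name: str) -> str:
    """'cam01_locator' -> 'cam01'."""
    return locator_name.removesuffix("_locator")

def create_cameras(locator_names, create_camera):
    """Create one camera per camXX locator. Each camera inherits its
    pose from the properly oriented locator, so no extra translation
    or rotation is needed."""
    for name in locator_names:
        if name.startswith("cam") and name.endswith("_locator"):
            # Full HD resolution, as used in this example workspace.
            create_camera(name, camera_for_locator(name),
                          resolution=(1920, 1080))

# Example with a stand-in creator that records the camera names:
cams = []
create_cameras([f"cam{i:02d}_locator" for i in range(1, 10)],
               lambda loc, cam, **kw: cams.append(cam))
print(cams)
```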

There is no need to manipulate translation or rotation coordinates to frame the cameras, as each camera will inherit these parameters from the properly oriented locators, creating a nested object structure ready to be used. Note that in the example scene, we have used a sensor and an ISP. However, we won’t go into the definition of these elements in detail here, as they are properly explained in the Anyverse documentation.

Placing the boxes

Each of the conveyor belts, in turn, hosts a group of locators to place various items. In this case, we have chosen to place different cardboard boxes in various sizes, to which we have added adhesive tape and other elements such as shipping labels, barcodes, QR codes, warning signs, and so on.

Color vs. Label images

The boxes we are going to use for this example scene are accessible through the Resources panel. We can add them to our workspace directly from the Resources panel, as we did before with the conveyor belts, or we can perform programmatic queries to locate them. To find them, we simply need to search for resources that meet the following conditions:

  1. In the Label field, select: Box
  2. In the Tag field, select: inspection

With these conditions, we will obtain the following results:

Box assets matching the search conditions

We will use the locators (item_locator_xx) available on the conveyor belts to populate the desired boxes. In each locator, we will create a fixed entity and place the desired box. As always, it is recommended to perform this action programmatically to speed up the process. This approach also allows us to decide whether we want to populate all the locators on the conveyor belts or only a certain percentage of them. We can also apply a small random rotation to add variability, and so on.
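A sketch of that population logic in plain Python. The placement tuples it returns would each become a Fixed Entity through the Anyverse API in a real script; the function and parameter names here are illustrative, not part of any real API:

```python
import random

def populate_item_locators(locators, assets, fraction=0.8,
                           max_yaw_deg=15.0, seed=42):
    """Fill roughly a fraction of the item locators with randomly chosen
    boxes, each with a small random yaw for variability. Returns a list
    of (locator, asset, yaw_degrees) placements."""
    rng = random.Random(seed)  # seeded so the dataset is reproducible
    placements = []
    for loc in locators:
        if rng.random() > fraction:  # leave some locators empty
            continue
        asset = rng.choice(assets)
        yaw = rng.uniform(-max_yaw_deg, max_yaw_deg)
        placements.append((loc, asset, round(yaw, 2)))
    return placements

boxes = ["box_a1", "box_a2", "box_a3"]
locs = [f"item_locator_{i:02d}" for i in range(1, 11)]
placements = populate_item_locators(locs, boxes)
print(f"{len(placements)} of {len(locs)} locators populated")
```

Seeding the random generator makes the scene reproducible, which is useful when iterating on a dataset configuration.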

We will create boxes as Fixed entities in every single item_locator_xx

Placing additional elements

The boxes are prepared to incorporate other types of elements such as adhesive tapes, shipping stickers, barcodes, QR codes, or safety warnings, among others. These elements also have variations in materials to further enrich the possibilities of variability.

In the Resources panel, by simply searching for ‘sticker’, we will find 4 elements that we can add to our workspace (barcode_sticker, batteries_sticker, qrcode_sticker, shipping_sticker):

The boxes, in turn, contain two types of locators:

  • shipping_sticker_locator: Suitable for both shipping_sticker and batteries_sticker
  • barcode_locator: Suitable for both barcode_sticker and qrcode_sticker
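These compatibility rules can be encoded directly, so that a population script only ever creates valid locator/sticker combinations. A sketch with a plain dictionary (the entity-creation call itself would come from the Anyverse API):

```python
import random

# Which stickers each box locator accepts, per the rules above.
STICKERS_BY_LOCATOR = {
    "shipping_sticker_locator": ["shipping_sticker", "batteries_sticker"],
    "barcode_locator": ["barcode_sticker", "qrcode_sticker"],
}

def pick_sticker(locator_type: str, rng: random.Random) -> str:
    """Choose one sticker compatible with the given locator type."""
    return rng.choice(STICKERS_BY_LOCATOR[locator_type])

rng = random.Random(0)
for locator in STICKERS_BY_LOCATOR:
    print(locator, "->", pick_sticker(locator, rng))
```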

Here we can see three boxes with their respective locators and various elements populated in each of them (the classes are color-coded).

Additionally, we can add adhesive tape to each box. Each box has a unique shape, so there are various adhesive tape assets that fit perfectly with each box model. By typing “tape_” in the search field of the resources panel, we can access all the available tapes for the boxes (and include them in our workspace afterwards):

To place the adhesive tapes on the boxes, we will simply create them as fixed entities on each box, ensuring that each box model has its appropriate tape. For example, for box_a3, we will create a Fixed Entity (right click on box_a3 > Create Fixed Entity…) and choose tape_a3, and so on:
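Since box and tape assets share a model suffix, the matching tape can be derived from the box name, following the same pattern as the locators earlier:

```python
def tape_for_box(box_name: str) -> str:
    """'box_a3' -> 'tape_a3': each box model has a matching tape asset."""
    prefix = "box_"
    if not box_name.startswith(prefix):
        raise ValueError(f"Unexpected box name: {box_name}")
    return "tape_" + box_name[len(prefix):]

print(tape_for_box("box_a3"))  # tape_a3
```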

With these premises, we will be able to populate the different elements we want throughout all the boxes in order to enrich the dataset with greater class identification:



To enrich the variability of the content, we can use various materials that make the identification of the elements in the boxes more interesting. This way, we can apply variations in the materials of the stickers, for example, to use different colors, worn-out codes, damaged labels, or modifications in the texts, logos and more. Iterating with variations on the materials will allow us to generate assets with different appearances, so we can combine different versions of materials to enrich the final dataset. Here is an example of the same asset with modified materials on the cardboard box and the stickers:

Same asset with different materials

In the Resources panel, let’s click on Material and type in the search field: inspection. We will obtain several materials to vary the different stickers (barcodes, QR codes, shipping stickers…). As usual, we can select all of them and drag them to our workspace, specifically to the Materials node.

Dragging materials to our workspace for improved variability

Similarly, we can search by Tag: boxes, and we will find materials that can be used on the boxes we are using:

To change materials programmatically, compatibility between assets and materials works as follows:

Each asset has one or more named materials. If the asset also has an attribute with the same name as one of its materials, the value of that attribute determines which materials are compatible: it is the key we use to search for materials whose ‘compatibility’ attribute matches. Let’s see an example.

Taking the asset box_a3 as a reference, we can observe that it has a material called ‘cardboard’:

By clicking on Workspace > Assets > box_a3, we can see all of this asset’s attributes in the Properties panel. One of them is indeed called ‘cardboard’, the same name as the material, which tells us this attribute determines compatibility between the asset and materials. Its value, in this case, is ‘amazon_a3’:

Now, we will simply search in Resources > Materials using the filter Attribute: compatibility with a value precisely of ‘amazon_a3’:

As a result, we will obtain the materials that are compatible with this asset (in this case, we can see its original material and a variation of it with some dirt and stains):

By programmatically establishing this system of searching for compatible materials for each asset, we can randomly vary between a large number of options for each resource, thus achieving a more dynamic variability in our datasets.
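A sketch of that lookup in plain Python, using a toy in-memory catalog in place of the real Resources query (attribute names follow the example above; the actual Anyverse query API may differ):

```python
import random

def compatible_materials(materials, key):
    """Return the materials whose 'compatibility' attribute matches the
    value of the asset's material-name attribute (e.g. 'amazon_a3')."""
    return [m for m in materials if m.get("compatibility") == key]

# Toy catalog standing in for a Resources > Materials query:
catalog = [
    {"name": "cardboard_amazon_a3", "compatibility": "amazon_a3"},
    {"name": "cardboard_amazon_a3_dirty", "compatibility": "amazon_a3"},
    {"name": "cardboard_plain", "compatibility": "plain_box"},
]

options = compatible_materials(catalog, "amazon_a3")
chosen = random.Random(7).choice(options)
print(chosen["name"])
```

Picking randomly among the compatible options per iteration is what produces the material variability in the final dataset.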

Lighting conditions

On the other hand, we can establish various lighting configurations. For daytime scenes, we can use any of the backgrounds available in Resources > Background.

Multiple background options for different lighting conditions

To ensure that the scene is lit with the background and not with a physical sky, you need to explicitly select this option in the simulation settings (after adding the various backgrounds). First, set a background for the scene: in your workspace, click on Simulation and go to Properties > Background > click the “No entity” button > select your background:

Next, select the “Background” option from the dropdown menu:

Finally, we can easily set up a system of variations using the Variations editor: Generator > Variations > List of variations (+):
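Conceptually, a background variation picks one entry per iteration. The sketch below mimics that behavior in plain Python; the background names are hypothetical placeholders for whatever you added from Resources > Background:

```python
import random

# Hypothetical background names; in practice these are the backgrounds
# added to the workspace from Resources > Background.
BACKGROUNDS = ["warehouse_day", "warehouse_overcast", "warehouse_evening"]

def background_for_iteration(iteration: int, backgrounds=BACKGROUNDS,
                             seed=123):
    """Deterministically pick one background per dataset iteration,
    mirroring what a background variation set up in the Variations
    editor would do for us."""
    return random.Random(seed + iteration).choice(backgrounds)

picks = [background_for_iteration(i) for i in range(5)]
print(picks)
```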

Once this change has been made, your workspace is ready to use different lighting in each iteration.


Preliminary actions

Since we have a large number of cameras (9) with Full HD resolution in this workspace, megapixel consumption is considerable. We need to decide whether we really want to capture all 9 samples or whether we are only interested in specific cameras. To do this, simply hide the cameras you want to exclude by clicking on the eye icon next to each camera, leaving only the ones you want to render visible (in this example we keep only cam09 active):

In this case, only the render of cam09 will be computed

In Anyverse, by default, occlusion of geometries is computed in order to optimize the rendering process: geometries that are occluded by others are not computed. However, to ensure that all our geometries are properly rendered, we can enable the exclusion of occlusion tests for all elements involved in the scene (by selecting an element and applying this option in the Properties panel):

To avoid manually enabling this option for each entity one by one, we can apply a simple script that performs this action for all elements:
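A sketch of such a script. Here the entity tree is modeled with plain dictionaries and a made-up flag name; a real Anyverse script would walk the Simulation tree and set the equivalent occlusion-exclusion property via the scripting API:

```python
def exclude_from_occlusion_tests(entities):
    """Recursively set the occlusion-exclusion flag on every entity in
    the tree. 'exclude_from_occlusion_test' is an illustrative key, not
    the real Anyverse property name."""
    for entity in entities:
        entity["exclude_from_occlusion_test"] = True
        exclude_from_occlusion_tests(entity.get("children", []))

# Toy scene tree standing in for the Simulation hierarchy:
scene = [
    {"name": "conveyor_roller_02",
     "children": [{"name": "box_a3",
                   "children": [{"name": "tape_a3"}]}]},
]
exclude_from_occlusion_tests(scene)
print(scene[0]["children"][0]["children"][0]["exclude_from_occlusion_test"])
```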

Once we have the desired cameras activated and the elements excluded from the occlusion test, we can proceed with the batch generation.

Generate a batch

To generate a batch from your workspace, you first have to create a dataset with the output channels you require. Go to the generator inspector space (1), then click the + icon to the right of the Dataset node of the tree (2), give it a name and select the channels you need for your dataset (you won’t be able to change them after creation) (3), and click the Create button (4).

Now you are ready to generate a batch: right-click on the Generate node in the workspace and select a dataset (the one you just created, for example). A pop-up will show the details of the generation you are about to run. After giving the result a meaningful name, another pop-up will display the generation progress. When you close it, you can go to the dataset in the Generation Inspector space to follow the execution progress in the cloud. When all the renders finish, you will see the results.

Here are some examples of the images generated using this workspace:

The dataset can be browsed using the “Generation Inspector”. We can click on each individual capture and see the final image and the associated ground truth. Annotations are also shown on the right-hand side. By right-clicking on the batch, we can download the results to our local drive:

If you want to check the results, you can download the dataset. Since we set up nine different cameras, you will get nine captures with different points of view, plus the additional ground-truth images.

What’s next

In this workspace, we have used predefined camera viewpoints, but the power of Anyverse lies in its flexible and versatile workflow that allows for variations in cameras, assets, and dataset conditions. Here are some suggestions for further enhancing your workspace:

Camera Viewpoints:

  • Try an aerial viewpoint, simulating a camera placed on a drone or security cameras to capture a panoramic view of the entire scene.
  • Use a first-person viewpoint, as if you were looking through the eyes of a factory operator moving along the conveyor belts.

Modifications to Conveyor Belt Assets:

  • Add different types of products or materials to the conveyor belts to simulate a greater variety of items in the manufacturing process.
  • Change the color, size, or shape of the objects moving on the belts to create visual variations in the dataset.
  • Introduce interactive elements, such as robotic arms or inspection devices, that interact with the products as they move along the conveyor belts.

Other Elements to Add to the Scene:

  • Include workers or employees in various positions along the conveyor belts. They can be supervising the process, handling the products, or performing specific tasks.
  • Incorporate transport vehicles, such as forklifts or pallet jacks, moving around the production area.
  • Add shelves or racks where finished products or materials used in the manufacturing process are stored.
  • Include lighting elements, such as hanging lights or spotlights, to simulate different lighting conditions in the environment.

These are just a few ideas to explore and customize your Anyverse scene. You can combine several of these suggestions or even try new configurations to obtain a diverse and tailored dataset. Remember that Anyverse provides flexibility and versatility for experimenting and generating variations in your dataset efficiently.
