Behind the scenes with the Anyverse™ CTO
What is your tech team like?
We have a very diverse team at Anyverse™! There are IT specialists, mathematicians, physicists, 3D artists, and more. It is fantastic to see this unique mix of people with years of experience and young talent work together and learn from each other.
Do you get personally involved in technical projects?
As CTO, apart from developing our tech strategy, managing the team, and dealing with customers, I sometimes do get to program myself. It's been my passion ever since I was a kid, and I believe it's also beneficial for the project, as I get an "inside view" of the software, and not just the bird's-eye one.
What is more, I am actively involved in all technical communication with our clients, from the moment we define the project together until the final delivery and feedback.
How are the projects? Take us through the client workflow!
Here is what happens when a client wants to try Anyverse™ in their pipeline:
- First, we discuss client needs and requirements. Sometimes we suggest ideas they hadn't thought about!
- Then the client can evaluate an existing dataset, even though we always recommend generating a custom one, specific to their needs.
- We start the process of generating a custom dataset.
- If the client needs a particular scenario, sensor model, and/or assets, we ask for examples and as many details as possible so we can recreate them in 3D.
- We send a few sample images to make sure the data meets the client's needs, and iterate if necessary.
- We generate and deliver the dataset (sometimes in batches).
- Finally, we schedule a feedback session.
How customizable are the projects?
Data customization is key for us at Anyverse™, as we do not believe in generic solutions. Our goal is to match the data generated to the perception model in training/testing.
Sensor model
With Anyverse™ we essentially simulate what happens inside a camera, so we can generate data that matches the client's sensor. We often develop custom sensor models, both camera and LiDAR. In fact, it is quite common to build a specific camera lens from parameters provided by the client. To obtain the calibration parameters, the client places a checkerboard in front of the camera, takes pictures, and then uses software to extract the exact parameters from those pictures. With that information we can implement the lens model and replicate the lens precisely.
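To give a sense of the checkerboard step described above, here is a minimal sketch using OpenCV. This is an illustration of the general technique, not Anyverse™'s internal tooling; the board size, square size, and image folder are assumptions.

```python
# Sketch of checkerboard camera calibration with OpenCV.
# Board size, square size, and image paths are illustrative assumptions.
import glob
import cv2
import numpy as np

BOARD = (9, 6)          # inner corners per row / column of the checkerboard
SQUARE_SIZE = 0.025     # square edge length in meters

# 3D coordinates of the board corners in the board's own reference frame
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2) * SQUARE_SIZE

obj_points, img_points, img_size = [], [], None
for path in glob.glob("calibration_shots/*.png"):   # hypothetical folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, BOARD)
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        img_size = gray.shape[::-1]

# Recover the intrinsic matrix and distortion coefficients from the shots
ret, camera_matrix, dist_coeffs, _, _ = cv2.calibrateCamera(
    obj_points, img_points, img_size, None, None
)
print("intrinsics:\n", camera_matrix)
print("distortion:", dist_coeffs.ravel())
```

The recovered intrinsics and distortion coefficients are the kind of parameters a client would hand over so the virtual lens can be modeled after the real one.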
Sensor rigging
Likewise, some clients focus on sensor rigging, i.e. where sensors are placed around the ego vehicle – at the front, on the left or right, inside the cabin, or a LiDAR on top, for example. These can be combined, and multiple sensors can be added, no problem!
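As an illustration, a sensor rig like the one just described could be expressed as a list of sensor poses relative to the ego vehicle. The field names and values below are hypothetical, not Anyverse™'s actual configuration format.

```python
# Hypothetical sensor rig description: each sensor gets a pose relative to the
# ego vehicle's reference frame (x forward, y left, z up).
from dataclasses import dataclass

@dataclass
class SensorMount:
    name: str
    kind: str               # "camera" or "lidar"
    position_m: tuple       # (x, y, z) offset from the ego origin, in meters
    rotation_deg: tuple     # (roll, pitch, yaw)

rig = [
    SensorMount("front_cam",  "camera", (1.8,  0.0, 1.3), (0.0, 0.0,   0.0)),
    SensorMount("left_cam",   "camera", (0.5,  0.9, 1.1), (0.0, 0.0,  90.0)),
    SensorMount("right_cam",  "camera", (0.5, -0.9, 1.1), (0.0, 0.0, -90.0)),
    SensorMount("cabin_cam",  "camera", (0.3,  0.0, 1.2), (0.0, 0.0, 180.0)),
    SensorMount("roof_lidar", "lidar",  (0.0,  0.0, 1.9), (0.0, 0.0,   0.0)),
]
```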
Environment & assets
Scene customization goes on and on! Clients can request a specific location, number and type of assets, meteorological conditions, time of day for their data, and much more. What they seek is simple – greater variability!
If we take the automotive industry as an example, vehicles can range from trucks to ambulances and scooters, and brands can vary as well. We also have a vast catalogue of characters, including animals and people of different genders, ages, and ethnicities. In addition, we can manage object distribution: for instance, a client may request a scene filled only with bicycles, placed at a certain distance from the ego vehicle.
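A request like the bicycle example above could be captured in a small scene specification. The structure and field names here are made up for illustration, not an actual Anyverse™ schema.

```python
# Hypothetical scene specification: a scene populated only with bicycles,
# placed within a distance band from the ego vehicle.
scene_request = {
    "location": "urban_europe",
    "time_of_day": "dusk",
    "assets": {
        "allowed_classes": ["bicycle"],                # restrict the population
        "count": 40,
        "distance_from_ego_m": {"min": 10, "max": 50}, # placement band
    },
}
```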
Meteorological
Weather conditions can be specified as probabilities, for instance the chance of rain, fog, or clouds, as well as wet road surfaces. These make for great everyday corner cases.
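To show what probability-driven weather could look like in practice, here is a small sketch that draws per-scene conditions from requested probabilities. The probability values and keys are invented for the example.

```python
# Illustration of sampling per-scene weather from requested probabilities.
import random

weather_probabilities = {"clear": 0.55, "rain": 0.20, "fog": 0.15, "overcast": 0.10}

def sample_weather(rng: random.Random) -> dict:
    condition = rng.choices(
        population=list(weather_probabilities),
        weights=list(weather_probabilities.values()),
    )[0]
    # Wet road surfaces are more likely when it rains (assumed rule)
    wet_surface = condition == "rain" or rng.random() < 0.1
    return {"condition": condition, "wet_surface": wet_surface}

rng = random.Random(42)
print([sample_weather(rng) for _ in range(5)])
```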
Lighting
In terms of lighting, we control both artificial and natural lights, always respecting physically correct behavior. We can turn lights on and off, and also control the intensity of artificial lighting such as car headlights, building lights, traffic lights, and street lights.
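As a rough illustration, those lighting controls could be described along these lines; again, the field names are hypothetical, not Anyverse™'s actual format.

```python
# Hypothetical lighting controls: natural light driven by sun position,
# artificial lights toggled and dimmed individually.
lighting = {
    "sun": {"enabled": True, "elevation_deg": 12.0, "azimuth_deg": 240.0},
    "artificial": {
        "car_headlights":  {"on": True,  "intensity": 0.8},
        "street_lights":   {"on": True,  "intensity": 1.0},
        "traffic_lights":  {"on": True,  "intensity": 1.0},
        "building_lights": {"on": False, "intensity": 0.0},
    },
}
```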
Metadata
Since we simulate data in a virtual environment, any information related to the sensors, camera or LiDAR, can be collected. Usually clients ask for color images and pixel-level segmentation images. Sometimes we provide instance segmentation images, in which objects of the same type are not grouped together. This is very useful for training! Last but not least, we get asked for depth images, which help when the client lacks LiDAR technology. Other channels include spectral information, reflectance, and more.
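One reason a depth channel can stand in for LiDAR is that, given the camera intrinsics, every depth pixel back-projects to a 3D point. The sketch below shows the standard pinhole back-projection; the intrinsic values are placeholders, not a real sensor model.

```python
# Minimal sketch: convert a depth map into a point cloud via the pinhole model.
import numpy as np

def depth_to_points(depth: np.ndarray, fx: float, fy: float,
                    cx: float, cy: float) -> np.ndarray:
    """Convert a depth map (meters, shape HxW) to an Nx3 point cloud."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]      # drop pixels with no depth

# Example with a synthetic 4x4 depth map and placeholder intrinsics
depth = np.full((4, 4), 5.0)
cloud = depth_to_points(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
print(cloud.shape)   # (16, 3)
```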
What are client projects like? Which was the most challenging one?
We have had use cases from numerous industries – aviation, autonomous vehicles/ADAS, drones, security, sensor development.
The most challenging project so far was focused on detecting runway defects from a drone view. It was challenging to realistically replicate road cracks and deterioration. This is only possible when you can simulate materials according to how they would interact in the real world.
In the future, we also see a demand for more indoor projects, for example ones focusing on security with both cameras and LiDAR.
How has Anyverse™ evolved and what's to come?
We have a proprietary rendering engine that is very different from the game engine solutions on the market in terms of sensor, light, and material fidelity to reality. Through the years we have kept developing it to align it with the market. It is very gratifying to learn something new with each project and thus constantly add new features and assets. As a result, customizations help us build an even broader range of sensor models for the future.
Some of the current software improvements include a GPU-based render engine, procedural scene generation, and an animation engine to control behaviors within the scene, for example a pedestrian crossing where they shouldn't.
In conclusion, the plan is to keep going, learning and helping clients replicate reality the best way possible.
About Anyverse™
Anyverse™ helps you continuously improve your deep learning perception models and reduce your system's time to market by applying new software 2.0 processes. Our synthetic data production platform allows us to provide high-fidelity, accurate, and balanced datasets. Along with a data-driven iterative process, we can help you reach the required model performance.
With Anyverse™ you can accurately simulate any camera sensor and decide which one will perform best with your perception system. No more complex and expensive experiments with real devices, thanks to our state-of-the-art photometric pipeline.
Need to know more?
Visit our booth during the event, our website anyverse.ai anytime, or our LinkedIn, Facebook, and Twitter profiles.