We have a very diverse team at ANYVERSE! There are IT specialists, mathematicians, physicists, 3D artists, and more. It is fantastic to see this unique mix of people with years of experience and young talent work together and learn from each other.
As CTO, apart from developing our tech strategy, managing the team and dealing with customers, sometimes I do get to program myself. It’s been my passion ever since I was a kid, and I believe it’s also beneficial for the project, as I get an “inside view” of the software, not just the bird’s-eye one.
What is more, I am actively involved in all tech communication with our clients, from the moment we define the project together until final delivery and feedback.
Here is what happens when a client wants to try ANYVERSE in their pipeline:
Data customization is key for us at ANYVERSE, as we do not believe in generic solutions. Our goal is to match the data generated to the perception model in training/testing.
With ANYVERSE we essentially simulate what happens inside a camera, so we can generate data that matches the client’s model. We often develop custom sensor models, both camera and LiDAR. In fact, it is quite common to create a specific camera lens according to the parameters provided by the client. To obtain the calibration parameters, the client places a checkerboard in front of the camera, takes pictures, and then uses software to extract the exact parameters from the images. With that information we can implement the lens model and replicate the lens precisely.
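To make the idea concrete, here is a minimal sketch of how calibrated lens parameters feed a projection model. It uses the Brown–Conrady radial distortion terms, one standard parameterization that checkerboard calibration tools produce; the coefficient values and function names below are illustrative, not ANYVERSE’s actual API.

```python
# Sketch of projecting a 3D point through a pinhole camera with radial
# distortion. k1/k2 are the kind of coefficients checkerboard calibration
# yields; all numbers here are made up for illustration.

def distort(x, y, k1, k2):
    """Apply Brown-Conrady radial distortion to normalized coordinates."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor

def project(point_3d, fx, fy, cx, cy, k1=0.0, k2=0.0):
    """Project a camera-space 3D point to pixel coordinates via the lens model."""
    X, Y, Z = point_3d
    x, y = X / Z, Y / Z             # pinhole projection to normalized coords
    xd, yd = distort(x, y, k1, k2)  # apply the calibrated distortion
    return fx * xd + cx, fy * yd + cy

# A point straight ahead of the camera lands at the principal point:
u, v = project((0.0, 0.0, 10.0), fx=800.0, fy=800.0, cx=640.0, cy=360.0)
```

Once the client’s extracted parameters are plugged into a model like this, every rendered pixel passes through the same geometry as the real lens.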
Likewise, some clients focus on sensor rigging – determining where sensors are placed around the ego vehicle: at the front, on the left or right, inside the cabin, or a LiDAR on top, for example. These can be combined, and multiple sensors can be added, no problem!
Scene customization goes on and on! Clients can request a specific location, the number and type of assets, meteorological conditions, the time of day, and more. What they seek is simple – greater variability!
If we take the automotive industry as an example, vehicles can range from trucks to ambulances and scooters, and brands can vary as well. We also have a vast catalogue of characters, including animals and people of different genders, ages, and ethnicities. In addition, we can manage object distribution: for instance, a client may request a scene filled with bicycles only and have them placed at a certain distance from the ego vehicle.
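A distance-constrained placement like the bicycle request above can be sketched as scattering assets on a ring around the ego vehicle. The scene-entry format and function name here are hypothetical, just to show the idea:

```python
import math
import random

# Illustrative sketch of distance-constrained object placement: scatter
# `count` instances of one asset class roughly `distance` meters from the
# ego vehicle (at the origin). Not ANYVERSE's actual scene API.

def place_on_ring(asset, count, distance, jitter=2.0, rng=None):
    """Return scene entries for assets on a noisy ring around the ego."""
    rng = rng or random.Random()
    scene = []
    for _ in range(count):
        angle = rng.uniform(0.0, 2.0 * math.pi)
        r = distance + rng.uniform(-jitter, jitter)
        scene.append({
            "asset": asset,
            "x": r * math.cos(angle),
            "y": r * math.sin(angle),
        })
    return scene

bikes = place_on_ring("bicycle", count=12, distance=20.0, rng=random.Random(42))
```

Sampling angle and radius independently per instance gives the variability clients ask for while still honoring the distance constraint.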
Weather conditions are specified as probabilities – the chance of rain, fog, clouds, and so on – as well as wet surfaces. These make for great everyday corner cases.
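Per-scene weather can then be drawn from those probabilities. A minimal sketch, with made-up probability values and a hypothetical function name:

```python
import random

# Sketch of sampling weather per generated scene from client-specified
# probabilities. The values below are invented for illustration.

WEATHER_PROBABILITIES = {"rain": 0.3, "fog": 0.1, "clouds": 0.6, "wet_surfaces": 0.25}

def sample_weather(probabilities, rng=None):
    """Independently toggle each condition with its configured probability."""
    rng = rng or random.Random()
    return {name: rng.random() < p for name, p in probabilities.items()}

# Generating many scenes this way naturally surfaces rare combinations
# (e.g. fog plus wet surfaces) that make good corner cases.
scenes = [sample_weather(WEATHER_PROBABILITIES, random.Random(seed)) for seed in range(100)]
```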
In terms of lighting, we control both artificial and natural light, always respecting physically correct behavior. We can turn lights on and off, and also control the intensity of artificial sources such as car headlights, building lights, traffic lights, and street lights.
Since we simulate data in a virtual environment, any information related to the sensors, camera or LiDAR, can be collected. Usually clients ask for color images and pixel-level segmented ones. Sometimes we provide instance images, in which objects of the same type are not grouped. This is very useful for training! Last but not least, we get asked for depth images, which help when the client lacks LiDAR. Other channels include spectral information, reflectance, and more.
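To show why depth images can stand in for LiDAR, here is a small sketch of back-projecting a depth map into a 3D point cloud with standard pinhole intrinsics. The intrinsic values and the toy depth map are illustrative, not from a real sensor:

```python
# Sketch of converting a depth image into camera-space 3D points, the kind
# of pseudo-LiDAR that depth channels enable. fx/fy/cx/cy are illustrative
# pinhole intrinsics.

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (meters, row-major list of rows) to 3D points."""
    points = []
    for v, row in enumerate(depth):
        for u, z in enumerate(row):
            if z <= 0.0:               # skip invalid / sky pixels
                continue
            x = (u - cx) * z / fx      # inverse pinhole projection
            y = (v - cy) * z / fy
            points.append((x, y, z))
    return points

# Tiny 2x2 depth map as a toy example; one pixel has no valid depth:
cloud = depth_to_points([[5.0, 5.0], [0.0, 10.0]], fx=100.0, fy=100.0, cx=1.0, cy=1.0)
```

Applied to a full-resolution depth render, the same math yields a dense point cloud a perception stack can consume in place of LiDAR returns.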
We have had use cases from numerous industries – aviation, autonomous vehicles/ADAS, drones, security, sensor development.
The most challenging project so far focused on detecting runway defects from a drone’s view. Realistically replicating the cracks and deterioration of the runway surface was hard, and it is only possible when you can simulate materials according to how they interact in the real world.
In the future, we also see a demand for more indoor projects, for example ones focusing on security with both cameras and LiDAR.
We have a proprietary rendering engine that differs from the game-engine solutions on the market in how faithfully it reproduces sensors, lights, and materials. Through the years we have kept developing it to stay aligned with the market. It is very gratifying to learn something new with each project and thus constantly add new features and assets. As a result, customization helps us build an even broader range of sensor models for the future.
Some of the current software improvements include a GPU-based render engine, procedural scene generation, and an animation engine to control behaviors within the scene – for example, a pedestrian crossing where they shouldn’t.
In conclusion, the plan is to keep going, learning and helping clients replicate reality the best way possible.