A couple of weeks ago, Tesla held its AI Day, led by Elon Musk himself along with Tesla’s Head of AI, Andrej Karpathy, and other engineers from the software and hardware teams. Their stated “sole goal” was persuading experts in robotics and artificial intelligence to come work at Tesla. But the event was much more than that: many of us were amazed by how Tesla is solving computer vision problems, and more specifically, how they generate training data for their cars’ Autopilot system.
Developing computer vision systems is not an easy task. We are talking about systems that need to understand what they see in the real world and react accordingly. But how do they see the world? How do you teach a machine what the real world is and how to interpret it?