Tesla is at the forefront of self-driving systems. Tesla's cars currently use cameras, radar, and ultrasonic sensors to collect the data that helps the system navigate safely, but the manufacturer now plans to replace them with a vision-only system built on cameras and a powerful supercomputer hosting a neural network.
Collecting data for a self-driving system with cameras alone instead of radar, LiDAR, and other sensors may seem inferior, but the approach has real benefits: cutting down on the hardware packed into each vehicle reduces both cost and weight.
Moreover, there’s Elon Musk’s argument on vision vs radar: “When radar and vision disagree, which one do you believe? Vision has much more precision, so better to double down on vision than do sensor fusion.” He then added that vision is also faster than radar and LiDAR, concluding that as “vision processing gets better, it just leaves radar far behind.”
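Musk's point can be made concrete with a toy sketch (this is purely illustrative, not Tesla's code): a fusion layer combining two distance readings must have some policy for conflicts, and that policy effectively decides which sensor to believe.

```python
def fused_distance(radar_m, vision_m, tolerance_m=2.0):
    """Toy sensor-fusion policy for two hypothetical distance readings.

    When the readings agree within `tolerance_m`, average them; when they
    disagree, some policy must pick a winner -- here we arbitrarily trust
    vision, mirroring the "double down on vision" argument.
    """
    if abs(radar_m - vision_m) <= tolerance_m:
        return (radar_m + vision_m) / 2
    return vision_m  # conflict: fall back to the higher-precision sensor


print(fused_distance(50.0, 51.0))  # readings agree -> 50.5
print(fused_distance(50.0, 30.0))  # readings conflict -> 30.0 (vision wins)
```

The interesting part is the conflict branch: once one sensor is always trusted in a disagreement, the other contributes little, which is essentially the case for dropping it.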
For the neural network host, Tesla will be using a system called Dojo. Dojo is still under development, but during CVPR 2021, Tesla's head of AI Andrej Karpathy revealed the in-house supercomputer that Dojo will eventually replace. This predecessor packs 5,760 GPUs delivering up to 1.8 EFLOPS (exaFLOPS), 10 PB of NVMe storage, and 1.6 TB/s of connectivity bandwidth. According to Karpathy, it would rank around fifth on the TOP500 supercomputer list.
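A quick sanity check on those figures: dividing the quoted aggregate throughput by the GPU count gives the implied per-GPU rating. The result, about 312 TFLOPS, roughly matches the FP16 tensor-core throughput of an NVIDIA A100, suggesting the 1.8 EFLOPS figure is a reduced-precision rating rather than FP64.

```python
# Back-of-the-envelope check using the figures quoted in the article.
total_flops = 1.8e18   # 1.8 EFLOPS aggregate
num_gpus = 5_760

per_gpu_tflops = total_flops / num_gpus / 1e12
print(f"{per_gpu_tflops:.1f} TFLOPS per GPU")  # 312.5 TFLOPS per GPU
```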
As for the cars, each will be equipped with eight cameras capturing footage at 36 FPS. The collected footage is sent to the supercomputer, where it is processed at a speed matching that of a human driver.
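Those two numbers already imply a substantial data stream per vehicle, which is a simple multiplication to work out:

```python
# Rough per-car frame throughput from the figures in the article.
cameras = 8
fps = 36

frames_per_second = cameras * fps          # frames captured each second
frames_per_hour = frames_per_second * 3600  # over an hour of driving

print(frames_per_second)                    # 288 frames/s
print(f"{frames_per_hour:,} frames per hour of driving")  # 1,036,800
```

Over a million frames per hour per car, before counting the fleet, is a hint of why Tesla needs a supercomputer-class system on the training side.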
Compared to a human, the system offers advantages such as 360º awareness, faster reaction times, and a driver that cannot be distracted. Karpathy also mentioned cases where the system will intervene, including emergency braking to avoid hitting a pedestrian and warning drivers about traffic lights.
Although Dojo itself is not ready, Tesla has already stopped equipping Model 3 and Model Y cars built in North America with radar. It seems most of the perception work in Tesla's new self-driving system is done by the cameras, so the absence of the new supercomputer isn't critical to accurate operation.