One thing often left out of discussions of autonomous vehicles is what might actually make them feasible: artificial intelligence (AI). Sure, the sensors, actuators and processors are all integral, but pulling it off will require something with greater situational awareness.
“Drivers deal with an infinitely complex world,” said Jen-Hsun Huang, co-founder and CEO of NVIDIA (nvidia.com). So when there's no driver, something else has to deal with that infinite complexity.
So NVIDIA has developed DRIVE PX 2. It is based on two Tegra processors and two next-generation discrete graphics processing units built on the company's Pascal architecture. All in, it can perform up to 24 trillion deep learning operations per second. To put that in context: that's roughly the processing capability of 150 MacBook Pros.
Or, looked at still another way, DRIVE PX 2 can process the inputs of 12 video cameras, in addition to lidar, radar and ultrasonic sensors. It fuses those inputs to accurately detect and identify objects, determine where the car is relative to the world around it, and then calculate its optimal path for safe travel.
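To make the fuse-detect-plan flow above concrete, here is a minimal sketch of that kind of pipeline. Everything in it is illustrative: the class names, confidence scores, and distance threshold are assumptions for the sketch, not NVIDIA's actual DRIVE PX 2 software, which runs deep neural networks rather than simple rules.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    sensor: str        # e.g. "camera", "lidar", "radar" (hypothetical labels)
    label: str         # object class, e.g. "pedestrian"
    distance_m: float  # estimated range to the object
    confidence: float  # detector confidence in [0, 1]

def fuse(detections):
    """Toy fusion: for each object class, keep the highest-confidence estimate
    across all sensors, so a strong lidar return can override a weak camera one."""
    fused = {}
    for d in detections:
        best = fused.get(d.label)
        if best is None or d.confidence > best.confidence:
            fused[d.label] = d
    return fused

def plan(fused, clear_distance_m=30.0):
    """Toy planner: continue if the nearest fused object is far enough away,
    otherwise brake. (The threshold is an arbitrary illustrative value.)"""
    if not fused:
        return "continue"
    nearest = min(fused.values(), key=lambda d: d.distance_m)
    return "continue" if nearest.distance_m > clear_distance_m else "brake"

# Readings from three sensors observing the same scene:
readings = [
    Detection("camera", "pedestrian", 12.0, 0.7),
    Detection("lidar",  "pedestrian", 11.5, 0.9),
    Detection("radar",  "vehicle",    45.0, 0.8),
]
fused = fuse(readings)
print(plan(fused))  # the fused pedestrian at 11.5 m forces "brake"
```

The point of the sketch is the shape of the problem: many noisy, overlapping sensor streams must be reduced to one consistent world model before a driving decision can be made, and the real system does this for 12 cameras plus lidar, radar and ultrasonics in real time.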
NVIDIA launched DRIVE PX in the summer of 2015. Some 50 OEMs, suppliers, researchers and developers are working with this AI platform. DRIVE PX 2 is scheduled for launch in the fourth quarter of 2016.