Nvidia has announced NVIDIA Drive Map, a multimodal mapping platform for the autonomous vehicle sector. The platform is designed to enable Level 3 and Level 4 autonomy while enhancing safety, combining DeepMap survey mapping with AI-based crowdsourced mapping.

Through Drive Map, Nvidia will provide survey-level ground-truth mapping coverage of 500,000 kilometers (310,685 miles) of roads in North America, Europe and Asia by 2024. After that, the map will continue to be updated and expanded.

Drive Map’s three localization layers – for use with camera, LiDAR and radar sensors – provide the redundancy and versatility expected of the AI drivers on the market today. The AI driver can localize to each layer independently.

The camera localization layer consists of map attributes such as lane dividers, road markings, road boundaries, traffic lights, signs and poles. The radar layer is an aggregate point cloud of radar returns. Nvidia highlighted the radar layer’s effectiveness in poor lighting and poor weather conditions, which can prove challenging for cameras and LiDAR, as a key use case. It also highlighted the layer’s capability in suburban areas, where the AI driver localizes based on surrounding objects that generate radar returns.

The platform’s LiDAR voxel layer provides a detailed, accurate and reliable 3D representation of the environment at 5-centimeter resolution. Once localized to the map, the AI driver can use the map’s semantic information to plan the route ahead and safely carry out key driving decisions.
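
To make the layered design concrete, the sketch below shows one way an AI driver could fuse independent pose estimates from the camera, radar and LiDAR layers, weighting each by its uncertainty. The data structures and the inverse-variance fusion scheme are illustrative assumptions, not Nvidia's published interfaces.

```python
from dataclasses import dataclass

# Hypothetical sketch: each map layer returns an independent pose estimate,
# mirroring Drive Map's camera, radar and LiDAR localization layers.
# The inverse-variance fusion scheme below is an assumption for illustration.

@dataclass
class PoseEstimate:
    x: float          # meters, map frame
    y: float          # meters, map frame
    heading: float    # radians
    variance: float   # scalar uncertainty of this layer's estimate

def fuse_estimates(estimates: list[PoseEstimate]) -> PoseEstimate:
    """Combine per-layer estimates, weighting each by 1 / variance."""
    weights = [1.0 / e.variance for e in estimates]
    total = sum(weights)
    return PoseEstimate(
        x=sum(w * e.x for w, e in zip(weights, estimates)) / total,
        y=sum(w * e.y for w, e in zip(weights, estimates)) / total,
        # Naive average; adequate when headings are close and away from the +/-pi wrap.
        heading=sum(w * e.heading for w, e in zip(weights, estimates)) / total,
        variance=1.0 / total,
    )

# Each layer localizes independently, so any single layer can still
# provide a usable pose if the others degrade.
camera = PoseEstimate(x=10.2, y=4.1, heading=0.03, variance=0.25)
radar = PoseEstimate(x=10.4, y=4.0, heading=0.02, variance=0.50)
lidar = PoseEstimate(x=10.1, y=4.2, heading=0.03, variance=0.05)
print(fuse_estimates([camera, radar, lidar]))
```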

Drive Map is built with two map engines, a ground-truth survey map engine and a crowdsourced map engine, to gather and maintain a collective memory of an Earth-scale fleet. The ground-truth engine is based on the DeepMap survey map engine, while the AI-based crowdsourced engine collects data from millions of cars and uploads it to the cloud. This data is aggregated in Nvidia’s Omniverse before being used to update the map, enabling over-the-air map updates in a short time span.
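
As a rough illustration of the crowdsourced engine’s idea, the sketch below groups fleet observations by map tile and only flags a tile for an update once enough independent vehicles agree on the same change. The tile scheme and agreement threshold are assumptions for illustration, not details of Nvidia’s pipeline.

```python
from collections import defaultdict

# Hypothetical sketch: observations from many vehicles are grouped per map
# tile, and a tile is only refreshed once enough independent vehicles report
# the same change. Threshold and tile scheme are illustrative assumptions.

MIN_REPORTS = 5  # assumed agreement threshold before publishing an update

def aggregate_observations(observations):
    """observations: iterable of (tile_id, feature_hash) tuples from the fleet."""
    votes = defaultdict(lambda: defaultdict(int))
    for tile_id, feature_hash in observations:
        votes[tile_id][feature_hash] += 1
    return votes

def pending_updates(votes):
    """Yield (tile_id, feature_hash) pairs confirmed by enough vehicles."""
    for tile_id, candidates in votes.items():
        feature_hash, count = max(candidates.items(), key=lambda kv: kv[1])
        if count >= MIN_REPORTS:
            yield tile_id, feature_hash

# Example: five vehicles independently report the same new sign on one tile;
# confirmed tiles would then be rebuilt and pushed to vehicles over the air.
fleet_reports = [("tile_0412", "sign:no_left_turn")] * 5 + [("tile_0413", "lane:repainted")]
print(list(pending_updates(aggregate_observations(fleet_reports))))
```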

The platform’s data interface, Drive MapStream, allows any passenger car that meets its requirements to update the map in real time using the vehicle’s camera, radar and LiDAR data. Drive Map also helps accelerate the deployment of autonomous vehicles by generating ground-truth data for deep neural network training, as well as for testing and validation.
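
Nvidia has not published the MapStream wire format, but a hypothetical payload from a qualifying vehicle might look something like the sketch below, bundling a fused pose with camera, radar and LiDAR observations before streaming it to the map service. All field names here are assumptions for illustration.

```python
import json
import time

# Hypothetical sketch of the kind of payload a MapStream-style interface
# might accept from a qualifying vehicle. Field names and structure are
# assumptions; Nvidia has not published the actual format.

def build_observation(vehicle_id, pose, camera_features, radar_points, lidar_points):
    return {
        "vehicle_id": vehicle_id,
        "timestamp": time.time(),
        "pose": pose,                        # fused localization result
        "camera_features": camera_features,  # e.g. detected lane dividers, signs
        "radar_points": radar_points,        # sparse radar returns
        "lidar_points": lidar_points,        # downsampled LiDAR returns
    }

payload = build_observation(
    vehicle_id="veh-0042",
    pose={"x": 10.1, "y": 4.2, "heading": 0.03},
    camera_features=[{"type": "lane_divider", "offset_m": 1.8}],
    radar_points=[[12.0, 3.5, 0.2]],
    lidar_points=[[11.9, 3.6, 0.1]],
)
print(json.dumps(payload, indent=2))  # would be streamed to the map service
```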

These workflows are centered on the Omniverse platform, which maintains an Earth-scale digital twin that is continuously updated and expanded by survey map vehicles and passenger vehicles.

Automated content generation tools built on Omniverse convert the map into a drivable simulation environment that can be used with Drive Sim, Nvidia’s autonomous vehicle simulation software. Road elevation, road markings, islands, traffic signals, signs and vertical posts are replicated at centimeter-level accuracy.
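
A simplified way to picture automated content generation is a mapping from semantic map features to simulation assets, as in the sketch below. The Scene class and asset paths are illustrative stand-ins and do not represent the Drive Sim API.

```python
# Hypothetical sketch: semantic map features are mapped to simulation assets
# when building a drivable scene. Asset names and the Scene class are
# illustrative assumptions, not the Drive Sim API.

ASSET_FOR_FEATURE = {
    "lane_divider": "assets/lane_divider_mesh",
    "traffic_light": "assets/traffic_light_rig",
    "sign": "assets/sign_board",
    "pole": "assets/vertical_post",
}

class Scene:
    def __init__(self):
        self.objects = []

    def place(self, asset, position, heading):
        self.objects.append({"asset": asset, "position": position, "heading": heading})

def build_scene(map_features):
    """map_features: list of dicts with 'type', 'position' and 'heading' keys."""
    scene = Scene()
    for feature in map_features:
        asset = ASSET_FOR_FEATURE.get(feature["type"])
        if asset is not None:
            scene.place(asset, feature["position"], feature["heading"])
    return scene

# Example: two map features become two placed simulation objects.
scene = build_scene([
    {"type": "traffic_light", "position": (12.0, 4.5, 0.0), "heading": 1.57},
    {"type": "sign", "position": (15.2, 4.8, 0.0), "heading": 1.57},
])
print(len(scene.objects))  # 2
```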

With physically based sensor simulation and domain randomization, AV developers can use the simulated environment to generate training scenarios that aren’t available in real data. They can also apply scenario generation tools to test autonomous driving software in digital twin environments before deploying it in the real world. For fleet operators, the digital twin provides a virtual view of each vehicle’s location, supporting remote operation when needed.
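
Domain randomization in this context means sampling scenario parameters, such as lighting, weather and traffic density, so that synthetic training data covers conditions that are rare in real logs. The sketch below shows the general idea; the parameter names and ranges are illustrative assumptions.

```python
import random

# Hypothetical sketch of domain randomization over a digital-twin scene:
# scenario parameters are sampled so training data covers conditions that
# are rare in real logs. Names and ranges are illustrative assumptions.

def sample_scenario(seed=None):
    rng = random.Random(seed)
    return {
        "time_of_day_h": rng.uniform(0.0, 24.0),     # lighting varies with time
        "weather": rng.choice(["clear", "rain", "fog", "snow"]),
        "traffic_density": rng.uniform(0.0, 1.0),    # fraction of lanes occupied
        "pedestrian_count": rng.randint(0, 30),
        "sensor_noise_scale": rng.uniform(0.5, 2.0), # perturb sensor models
    }

# Generate a batch of randomized scenarios for synthetic training data.
scenarios = [sample_scenario(seed=i) for i in range(100)]
print(scenarios[0])
```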