Scale AI is pleased to announce the full release of nuScenes by Aptiv, a large-scale open-source dataset for autonomous driving. Scale AI served as Aptiv's data labeling partner on this initiative. On March 28, 2019, Aptiv released the complete nuScenes dataset, an initiative intended to broadly support computer vision and autonomous driving research by AV innovators and academic researchers, and to further advance the mobility industry.
One key bottleneck in the advancement of deep learning is access to labeled data. Most machine learning algorithms learn to understand the world from large numbers of examples, and behind the scenes these algorithms are powered by billions of hand-labeled examples of things humans can easily recognize. While we as computer scientists like to believe that progress is all about the algorithms, labeled data has been just as impactful. The current age of deep learning was ignited by the launch of the ImageNet dataset by Fei-Fei Li's lab at Stanford in 2009. As Fei-Fei Li once said in an interview:
Our vision was that Big Data would change the way machine learning works. Data drives learning.
The release of the full 1000 scenes means nuScenes now includes 1.4M camera images, 390k LiDAR sweeps, 1.4M radar sweeps, and 1.4M object bounding boxes across 40k keyframes. nuScenes is the largest public multi-sensor autonomous driving dataset, with LiDAR, radar, camera, IMU, and GPS data, and it builds on great prior work in the field such as the KITTI dataset. Collected in Boston and Singapore, nuScenes contains 7x more object annotations than KITTI and will enable researchers, students, and developers to study challenging urban driving situations.
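The published nuScenes schema stores keyframes ("samples") as token-linked records, with each sample pointing to the previous and next keyframe in its scene. The sketch below traverses a scene's keyframes in order; the field names (`token`, `prev`, `next`) mirror the devkit's published `sample.json` schema, but the records here are mock stand-ins rather than real dataset entries:

```python
# Sketch of walking nuScenes-style keyframes ("samples") in temporal order.
# Assumes the published schema, where each sample record carries `token`,
# `prev`, and `next` fields chaining keyframes within a scene, and an empty
# string marks the end of the chain. The records below are mock data.

def scene_keyframes(samples, first_sample_token):
    """Yield sample records in order, following each record's `next` token."""
    by_token = {s["token"]: s for s in samples}
    token = first_sample_token
    while token:  # empty `next` token marks the last keyframe in the scene
        sample = by_token[token]
        yield sample
        token = sample["next"]

# Mock records shaped like nuScenes samples (three keyframes of one scene).
mock_samples = [
    {"token": "a", "prev": "",  "next": "b", "timestamp": 1000},
    {"token": "b", "prev": "a", "next": "c", "timestamp": 1500},
    {"token": "c", "prev": "b", "next": "",  "timestamp": 2000},
]

ordered = list(scene_keyframes(mock_samples, "a"))
print([s["token"] for s in ordered])  # -> ['a', 'b', 'c']
```

In the real dataset the same traversal is done against records loaded by the nuScenes devkit, with each sample token further linked to that keyframe's camera, LiDAR, and radar data.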
From conception, we partnered with Aptiv to produce the nuScenes dataset. By leveraging Aptiv's autonomy stack and our sensor fusion and LiDAR annotation products, nuScenes sets a new standard for quality in public datasets. We also built a web-based visualizer for LiDAR and camera data to explore the dataset. Being able to visualize data is crucial to machine learning development: it is how you build intuition about the problem at hand. We, like others, noticed a gap here, as previous options like rviz were not built for a web-based world. Our LiDAR visualizer allows point cloud data to be easily embedded into any webpage and shared, and we built it to be developer-friendly and usable for a broad range of use cases.
We hope that the nuScenes dataset will become the new standard in the industry and allow researchers to develop safer autonomous driving technology. There is a large gap in the machine learning community today: the biggest organizations have significantly larger datasets and compute budgets than smaller organizations or universities. Our hope is to bridge that gap and democratize access to state-of-the-art autonomous vehicle data. With the launch of nuScenes, we hope to do for the autonomous vehicle community what ImageNet did for computer vision.