BDD100K is one of the most diverse open video datasets collected from a driving platform. It is well suited to automotive applications and offers several strengths: large scale, data variation, temporal information, and annotated street footage.
As the name suggests, the dataset consists of 100K videos of 40 seconds each. Each video is accompanied by GPS/IMU readings that trace its approximate route trajectory.
The data was collected from more than 50K rides in the United States and shows considerable scene variation: city streets, residential areas, and highways, with diverse weather conditions recorded at different times of day. Such variation is especially helpful for imitation learning of driving policies. BDD100K can provide you with the following:
- Semantic segmentation
- Instance segmentation
- Object tracking
- Lane detection
The dataset provides annotations for a sample keyframe taken at the 10th second of every video. These keyframes are labeled at several levels: image tagging, road object bounding boxes, drivable areas, full-frame instance segmentation, and lane markings.
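As a rough sketch of how these labels are typically consumed, the snippet below tallies object categories from a frame record shaped like BDD100K's Scalabel-style detection JSON (a list of frames, each with scene `attributes` and a `labels` list holding `category` and `box2d` entries). The `SAMPLE` record is illustrative, not real data, and the exact schema should be checked against the official label files.

```python
from collections import Counter

# Hypothetical minimal record mirroring a Scalabel-style BDD100K
# detection label file: one frame with scene attributes and 2-D boxes.
SAMPLE = [
    {
        "name": "example_frame.jpg",  # illustrative file name
        "attributes": {
            "weather": "clear",
            "scene": "city street",
            "timeofday": "daytime",
        },
        "labels": [
            {"category": "car",
             "box2d": {"x1": 45.0, "y1": 254.0, "x2": 357.0, "y2": 487.0}},
            {"category": "traffic sign",
             "box2d": {"x1": 1000.0, "y1": 120.0, "x2": 1040.0, "y2": 160.0}},
        ],
    }
]

def count_categories(frames):
    """Tally object categories across frames, counting only labels
    that carry a 2-D bounding box."""
    counts = Counter()
    for frame in frames:
        for label in frame.get("labels", []):
            if "box2d" in label:
                counts[label["category"]] += 1
    return counts

print(count_categories(SAMPLE))
```

In practice the same loop can be pointed at a label file loaded with `json.load`, which is a quick way to inspect class balance before training.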
BDD100K lets you explore and exploit the diversity of its scenes to benefit perception algorithms: the data is commonly used to train object recognition models and evaluate their performance.
Annotation Type: instance segmentation, lane detection, object detection
Created By: Fisher Yu
License: BSD 3-Clause License
Dataset Size: 110,000 images