| Submission Deadline | May 31, 2019 |
The tasks are based on BDD100K, the largest driving video dataset to date. It contains 100,000 videos representing more than 1,000 hours of driving experience and more than 100 million frames. The videos come with GPS/IMU data for trajectory information and are manually tagged with weather, time of day, and scene type. We also labeled bounding boxes for all objects on the road, lane markings, drivable areas, and detailed full-frame instance segmentation. We are also providing tracking labels, part of our future tracking release based on BDD100K, so that models trained on D2-City can be evaluated for domain adaptation. To obtain the dataset, please log in and go to the download pages. More information about D2-City and its evaluation can be found at https://outreach.didichuxing.com/d2city.
You can use our evaluation code to check your results on the validation set. We will publish the leaderboard when the challenges conclude. For all tasks, pre-training your network on ImageNet is fair game, but if other datasets are used, please note this in the submission description. We will rank only the methods that use no external datasets other than ImageNet.
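Detection evaluation of this kind generally matches predicted boxes to ground-truth boxes by intersection-over-union (IoU). The official evaluation code on the download page is authoritative; the function below is only a generic sketch of the IoU computation, with boxes assumed to be in `(x1, y1, x2, y2)` pixel coordinates.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2).

    Coordinate convention is an assumption; check the official
    evaluation code for the exact matching rules and thresholds.
    """
    # Corners of the intersection rectangle.
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

A prediction is typically counted as a true positive when its IoU with an unmatched ground-truth box exceeds a threshold such as 0.5.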
All bounding box information is stored in the label files in JSON format.
Training images: bdd100k/images/100k/train
Training labels: bdd100k/labels/100k/train
Validation data: D2-City
The evaluation will be based on testing results of the images in D2-City.
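The JSON label files mentioned above can be inspected with a few lines of Python. The field names used here (`labels`, `category`, `box2d`) are assumptions based on the released BDD100K format; check the actual files from the download page before relying on them.

```python
import json
from collections import Counter

def count_categories(label_path):
    """Count bounding-box object categories in a BDD100K-style label file.

    Assumes the file holds a list of frames, each with a "labels" list
    whose entries carry a "category" and, for detection, a "box2d" box.
    """
    with open(label_path) as f:
        frames = json.load(f)
    counts = Counter()
    for frame in frames:
        for label in frame.get("labels", []):
            if "box2d" in label:  # keep only bounding-box annotations
                counts[label["category"]] += 1
    return counts
```

For example, `count_categories("bdd100k/labels/100k/train/<file>.json")` would return a mapping such as `{"car": ..., "person": ...}` over the boxes in that file.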
First, download the tracking teaser from the download page after logging in to this site.
Training set: D2-City
Validation images: bdd100k/tracking_cvpr2019/images/val
Validation labels: bdd100k/tracking_cvpr2019/bdd100k_tracking_cvpr2019_val.json
The evaluation will be based on testing results of the images in bdd100k/tracking_cvpr2019/images/test.