WAD 2018 Challenges

Berkeley DeepDrive is hosting three challenge tasks for the CVPR 2018 Workshop on Autonomous Driving: Road Object Detection, Drivable Area Segmentation, and Domain Adaptation of Semantic Segmentation.

Submission Deadline June 11, 2018
Result Announcement June 16, 2018

BDD100K Dataset


The competition is based on BDD100K, the largest driving video dataset to date. It contains 100,000 videos representing more than 1,000 hours of driving experience and more than 100 million frames. The videos come with GPS/IMU data for trajectory information and are manually tagged with weather, time of day, and scene type. We also labeled bounding boxes of all objects on the road, lane markings, drivable areas, and detailed full-frame instance segmentation. To obtain the dataset, please log in and go to the download pages. The challenges comprise three tasks based on BDD100K: road object detection, drivable area segmentation, and full-frame semantic segmentation. There are 70,000 training and 10,000 validation images for the first two tasks, and 7,000 training and 1,000 validation images for the third task.

Evaluation

In our portal, you can upload your challenge results under the Submission tab. The evaluation metrics are documented in our GitHub repo.
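The official metrics live in the GitHub repo; as a rough sketch of the common building block behind both the detection and segmentation metrics, here is a plain intersection-over-union computation for two axis-aligned boxes (not the official implementation):

```python
def box_iou(a, b):
    """Intersection-over-union of two boxes given as [x1, y1, x2, y2].

    A minimal sketch of the IoU idea underlying the challenge metrics;
    the authoritative code is in the BDD100K GitHub repo.
    """
    # Overlap rectangle, clamped to zero size when the boxes are disjoint.
    inter_w = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    inter_h = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = inter_w * inter_h
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)
```

Detection benchmarks typically count a prediction as a match when its IoU with a ground-truth box exceeds a threshold such as 0.5.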

Task 1: Road Object Detection

All bounding box information is stored in label files in .json format.

Training images: bdd100k/images/100k/train
Training labels: bdd100k/labels/100k/train
Validation images: bdd100k/images/100k/val
Validation labels: bdd100k/labels/100k/val

The evaluation will be based on testing results of the images in bdd100k/images/100k/test.
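To get started with the label files, a reading sketch like the following may help. It assumes each .json record carries a top-level "labels" list whose entries have a "category" string and a "box2d" dict of pixel coordinates; verify this against the downloaded files, since the exact schema may differ.

```python
import json
from pathlib import Path

# Hypothetical local path to the label files; adjust to your copy.
LABEL_DIR = Path("bdd100k/labels/100k/train")

def boxes_from_record(record):
    """Collect (category, x1, y1, x2, y2) tuples from one parsed record.

    Assumed schema: a "labels" list whose entries carry a "category"
    string and a "box2d" dict -- check the actual files before relying
    on this.
    """
    boxes = []
    for label in record.get("labels", []):
        box = label.get("box2d")
        if box is not None:  # non-box labels (e.g. lanes) have no box2d
            boxes.append((label["category"],
                          box["x1"], box["y1"], box["x2"], box["y2"]))
    return boxes

def load_boxes(label_file):
    """Parse one .json label file and extract its boxes."""
    with open(label_file) as f:
        return boxes_from_record(json.load(f))
```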

Task 2: Drivable Area Segmentation

All drivable area annotations are also stored in the label files. In addition, we provide “drivable maps,” which encode the drivable area annotations in label map format and can be used directly for semantic segmentation training.

Training images: bdd100k/images/100k/train
Training labels: bdd100k/drivable_maps/100k/train
Validation images: bdd100k/images/100k/val
Validation labels: bdd100k/drivable_maps/100k/val

The evaluation will be based on testing results of the images in bdd100k/images/100k/test.
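Since the drivable maps are single-channel label images, inspecting them reduces to counting pixel ids. The sketch below assumes three ids, 0 (not drivable) plus two drivable classes 1 and 2; verify the exact encoding against the dataset documentation. A synthetic array stands in for a decoded PNG.

```python
import numpy as np

def class_histogram(label_map, num_classes=3):
    """Count pixels per class id in a drivable-area label map.

    Assumes ids 0 (not drivable), 1, and 2 -- confirm the encoding
    against the dataset docs before training on these maps.
    """
    return np.bincount(label_map.ravel(), minlength=num_classes)

# Example with a synthetic 2x3 map in place of a real drivable map:
demo = np.array([[0, 1, 1],
                 [2, 2, 2]], dtype=np.uint8)
print(class_histogram(demo))  # per-class pixel counts for ids 0, 1, 2
```

In practice you would load each map with an image library (e.g. PIL) into such an array before feeding it to a segmentation pipeline.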

Task 3: Domain Adaptation of Semantic Segmentation

The training annotations for semantic segmentation are provided in label map format. They currently follow the same “train_id” convention as the Cityscapes dataset: each pixel carries an id from 0 to 18 for the training categories, or 255 for ignored regions.

Training images: bdd100k/images/10k/train
Training labels: bdd100k/seg_maps/10k/train
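Given the train_id convention above (ids 0-18, 255 ignored), a mean-IoU computation must mask out the ignored pixels and skip classes absent from both maps. This is a sketch only; the official metric is the one documented in the BDD100K GitHub repo.

```python
import numpy as np

IGNORE = 255       # pixels labeled 255 are excluded from evaluation
NUM_CLASSES = 19   # train ids 0-18, following the Cityscapes convention

def mean_iou(pred, gt, num_classes=NUM_CLASSES):
    """Mean IoU over classes, skipping ignored pixels.

    A hedged sketch, not the official scoring code: classes absent
    from both prediction and ground truth are left out of the mean.
    """
    valid = gt != IGNORE
    pred, gt = pred[valid], gt[valid]
    ious = []
    for c in range(num_classes):
        p, g = pred == c, gt == c
        union = np.logical_or(p, g).sum()
        if union == 0:
            continue  # class absent in both maps
        ious.append(np.logical_and(p, g).sum() / union)
    return float(np.mean(ious)) if ious else 0.0
```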

Testing and validation: Video Segmentation Challenge.