Please visit the BDD100K documentation for details on the downloaded files. To cite the data in your paper:
@InProceedings{bdd100k,
author = {Yu, Fisher and Chen, Haofeng and Wang, Xin and Xian, Wenqi and Chen, Yingying and Liu, Fangchen and Madhavan, Vashisht and Darrell, Trevor},
title = {BDD100K: A Diverse Driving Dataset for Heterogeneous Multitask Learning},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2020}
}
| Download | Description |
| --- | --- |
| HTTP Link | An online folder containing all the training and validation videos and sensor info for imitation learning. |
| Google Drive Link | An alternative download source on Google Drive. |
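For orientation, the sketch below shows one way to pair each downloaded video with its sensor-info file and count the GPS samples. It is a minimal sketch only: the directory layout, the `.mov` extension, and the JSON field names (`gps`, `timestamp`, `latitude`, `longitude`) are assumptions, so check the downloaded files and the BDD100K documentation for the actual schema.

```python
# Minimal sketch (not an official loader): pair each downloaded video with its
# sensor-info JSON file and count the GPS samples. Paths, file extensions, and
# JSON field names are assumptions; verify them against the downloaded files.
import json
from pathlib import Path

VIDEO_DIR = Path("bdd_videos/train")  # hypothetical local paths
INFO_DIR = Path("bdd_info/train")


def load_gps_track(video_path: Path) -> list:
    """Return (timestamp, latitude, longitude) samples recorded with one video."""
    info_path = INFO_DIR / (video_path.stem + ".json")
    if not info_path.exists():
        return []
    with info_path.open() as f:
        info = json.load(f)
    # Each GPS sample is assumed to carry a timestamp and a lat/lon pair.
    return [
        (s.get("timestamp"), s.get("latitude"), s.get("longitude"))
        for s in info.get("gps", [])
    ]


for video in sorted(VIDEO_DIR.glob("*.mov")):
    track = load_gps_track(video)
    print(f"{video.name}: {len(track)} GPS samples")
```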
To cite the data in your paper:
@inproceedings{xu2017end,
title={End-to-end learning of driving models from large-scale video datasets},
author={Xu, Huazhe and Gao, Yang and Yu, Fisher and Darrell, Trevor},
booktitle={The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
year={2017}
}
| Download | Description |
| --- | --- |
| Videos | Over 1,000 driving videos accompanied by driver attention maps and GPS measurements. Size: 5 GB |
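As a rough starting point, the sketch below blends a driver attention map over the matching video frame as a heatmap overlay. The file names are hypothetical and the maps are assumed to be single-channel grayscale images aligned to sampled frames; check the downloaded archive for the actual layout and resolution.

```python
# Minimal sketch: overlay a driver attention map on its video frame as a heatmap.
# File paths below are hypothetical; the attention map is assumed to be a
# single-channel grayscale image. Verify the layout in the downloaded archive.
import cv2  # pip install opencv-python
import numpy as np


def overlay_attention(frame_bgr: np.ndarray, attn_gray: np.ndarray,
                      alpha: float = 0.5) -> np.ndarray:
    """Resize the attention map to the frame and blend it in as a JET heatmap."""
    attn = cv2.resize(attn_gray, (frame_bgr.shape[1], frame_bgr.shape[0]))
    heat = cv2.applyColorMap(attn, cv2.COLORMAP_JET)
    return cv2.addWeighted(frame_bgr, 1.0 - alpha, heat, alpha, 0.0)


# Hypothetical file names, for illustration only.
frame = cv2.imread("frames/clip_0001/000123.jpg")
attn = cv2.imread("attention_maps/clip_0001/000123.png", cv2.IMREAD_GRAYSCALE)
if frame is not None and attn is not None:
    cv2.imwrite("overlay_000123.jpg", overlay_attention(frame, attn))
```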
To cite the data in your paper:
@inproceedings{xia2018predicting,
title={Predicting driver attention in critical situations},
author={Xia, Ye and Zhang, Danqing and Kim, Jinkyu and Nakayama, Ken and Zipser, Karl and Whitney, David},
booktitle={Asian Conference on Computer Vision (ACCV)},
pages={658--674},
year={2018},
organization={Springer}
}