DG-Labeler and DGL-MOTS Dataset: Boost the Autonomous Driving Perception

Yiming Cui, Zhiwen Cao, Yixin Xie, Xingyu Jiang, Feng Tao, Yingjie Victor Chen, Lin Li, Dongfang Liu; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2022, pp. 58-67


Multi-object tracking and segmentation (MOTS) is a critical task for autonomous driving applications. Existing MOTS studies face two critical challenges: 1) the published datasets inadequately capture the real-world complexity needed to train networks for diverse driving settings; 2) annotation tools and their working pipelines are under-studied in the literature, limiting the quality of MOTS learning examples. In this work, we introduce the DG-Labeler and the DGL-MOTS dataset to facilitate training-data annotation for the MOTS task and thereby improve network training accuracy and efficiency. To the best of our knowledge, DG-Labeler is the first publicly available tool for MOTS data annotation. DG-Labeler uses a novel Depth-Granularity Module to depict instance spatial relations and produce fine-grained instance masks. Annotated with DG-Labeler, our DGL-MOTS dataset exceeds prior efforts (i.e., KITTI MOTS and BDD100K) in data diversity, annotation quality, and temporal representation. Extensive cross-dataset evaluations show significant performance improvements for several state-of-the-art methods trained on our DGL-MOTS dataset. We believe our DGL-MOTS dataset and DG-Labeler hold valuable potential to boost the visual perception of future transportation. Our dataset and code are available.

Related Material

@InProceedings{Cui_2022_WACV,
    author    = {Cui, Yiming and Cao, Zhiwen and Xie, Yixin and Jiang, Xingyu and Tao, Feng and Chen, Yingjie Victor and Li, Lin and Liu, Dongfang},
    title     = {DG-Labeler and DGL-MOTS Dataset: Boost the Autonomous Driving Perception},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2022},
    pages     = {58-67}
}