MOTSynth: How Can Synthetic Data Help Pedestrian Detection and Tracking?

Matteo Fabbri, Guillem Brasó, Gianluca Maugeri, Orcun Cetintas, Riccardo Gasparini, Aljoša Ošep, Simone Calderara, Laura Leal-Taixé, Rita Cucchiara; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 10849-10859

Abstract


Deep learning-based methods for video pedestrian detection and tracking require large volumes of training data to achieve good performance. However, data acquisition in crowded public environments raises data privacy concerns -- we are not allowed to simply record and store data without the explicit consent of all participants. Furthermore, the annotation of such data for computer vision applications usually requires a substantial amount of manual effort, especially in the video domain. Labeling instances of pedestrians in highly crowded scenarios can be challenging even for human annotators and may introduce errors in the training data. In this paper, we study how we can advance different aspects of multi-person tracking using solely synthetic data. To this end, we generate MOTSynth, a large, highly diverse synthetic dataset for object detection and tracking using a rendering game engine. Our experiments show that MOTSynth can be used as a replacement for real data on tasks such as pedestrian detection, re-identification, segmentation, and tracking.

Related Material


@InProceedings{Fabbri_2021_ICCV,
    author    = {Fabbri, Matteo and Bras\'o, Guillem and Maugeri, Gianluca and Cetintas, Orcun and Gasparini, Riccardo and O\v{s}ep, Aljo\v{s}a and Calderara, Simone and Leal-Taix\'e, Laura and Cucchiara, Rita},
    title     = {MOTSynth: How Can Synthetic Data Help Pedestrian Detection and Tracking?},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {10849-10859}
}