Multi-Camera Tracking by Candidate Intersection Ratio Tracklet Matching

Yun-Lun Li, Zhi-Yi Chin, Ming-Ching Chang, Chen-Kuo Chiang; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2021, pp. 4103-4111

Abstract


Multi-camera vehicle tracking at the city scale is an essential task in traffic management for smart cities. Large-scale video analytics is challenging due to vehicle variability, view variations, frequent occlusions, degraded pixel quality, and appearance differences across cameras. In this work, we develop a multi-target multi-camera (MTMC) vehicle tracking system based on a newly proposed Candidate Intersection Ratio (CIR) metric that effectively evaluates vehicle tracklets for matching across views. Our system consists of four modules: (1) Faster R-CNN vehicle detection, (2) detection association based on re-identification (re-ID) feature matching, (3) single-camera tracking (SCT) to produce initial tracklets, and (4) multi-camera vehicle tracklet matching and re-identification that creates longer, consistent tracklets across the city scale. Building on popular DNN object detection and SCT modules, we focus on tracklet creation, association, and linking in SCT and MTMC. Specifically, we propose SCT filters that effectively eliminate unreliable tracklets, and the CIR metric enables robust vehicle tracklet linking across visually distinct views. Our system obtains an IDF1 score of 0.1343 on the AI City 2021 Challenge Track 3 public leaderboard.
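
To make the cross-camera tracklet-matching idea concrete, below is a minimal Python sketch of a CIR-style score between two tracklets seen from different cameras. It is an illustration under stated assumptions, not the paper's exact definition: it assumes each tracklet's candidate set is the set of gallery tracklets whose mean re-ID feature passes a cosine-similarity threshold, and that the intersection ratio is the number of shared candidates normalized by the smaller set. The helper names, the 0.6 threshold, and the normalization choice are all assumptions introduced here for illustration.

# Illustrative sketch of a Candidate-Intersection-Ratio style score for
# matching tracklets across two cameras. The exact formulation in the paper
# may differ; the candidate-set construction, threshold, and normalization
# below are assumptions made for this example.

import numpy as np


def candidate_set(query_feat, gallery_feats, threshold=0.6):
    """Indices of gallery tracklets whose cosine similarity to the query
    tracklet feature exceeds `threshold` (threshold value is illustrative)."""
    q = query_feat / np.linalg.norm(query_feat)
    g = gallery_feats / np.linalg.norm(gallery_feats, axis=1, keepdims=True)
    sims = g @ q
    return set(np.flatnonzero(sims > threshold))


def candidate_intersection_ratio(set_a, set_b):
    """Assumed CIR: number of shared candidates over the smaller candidate set."""
    if not set_a or not set_b:
        return 0.0
    return len(set_a & set_b) / min(len(set_a), len(set_b))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Mean re-ID features of tracklets: a shared gallery and two query tracklets.
    gallery = rng.normal(size=(50, 128))
    track_a = gallery[3] + 0.05 * rng.normal(size=128)  # a vehicle in camera A
    track_b = gallery[3] + 0.05 * rng.normal(size=128)  # same vehicle in camera B

    ca = candidate_set(track_a, gallery)
    cb = candidate_set(track_b, gallery)
    print("CIR(track_a, track_b) =", candidate_intersection_ratio(ca, cb))

On this synthetic example both candidate sets reduce to the same single gallery tracklet, so the printed score is 1.0; tracklet pairs with a high score would be linked into one cross-camera trajectory.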

Related Material


[pdf]
[bibtex]
@InProceedings{Li_2021_CVPR,
    author    = {Li, Yun-Lun and Chin, Zhi-Yi and Chang, Ming-Ching and Chiang, Chen-Kuo},
    title     = {Multi-Camera Tracking by Candidate Intersection Ratio Tracklet Matching},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2021},
    pages     = {4103-4111}
}