Cluster Self-Refinement for Enhanced Online Multi-Camera People Tracking

Jeongho Kim, Wooksu Shin, Hancheol Park, Donghyuk Choi; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024, pp. 7190-7197

Abstract


Recently, there has been a significant amount of research on Multi-Camera People Tracking (MCPT). MCPT presents more challenges than Multi-Object Single-Camera Tracking, leading many existing studies to address it with offline methods. However, offline methods can only analyze pre-recorded videos, which makes them less practical for real industrial applications than online methods. We therefore focus on resolving the major problems that arise with the online approach. Specifically, to address issues that can critically affect the performance of online MCPT, such as storing inaccurate or low-quality appearance features and assigning multiple IDs to a single person, we propose a Cluster Self-Refinement module. We achieved third place in the 2024 AI City Challenge Track 1 with a HOTA score of 60.9261%, and our code is available at https://github.com/nota-github/AIC2024_Track1_Nota.
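The abstract describes two failure modes the Cluster Self-Refinement module targets: low-quality appearance features accumulating in a track's feature gallery, and one person being split across multiple IDs. The sketch below is a hypothetical illustration of those two steps (quality filtering, then merging highly similar clusters); the function names, thresholds, and data layout are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def refine_clusters(clusters, quality, q_thresh=0.5, merge_thresh=0.85):
    """Hypothetical sketch of a cluster self-refinement step.

    clusters: dict mapping track ID -> (N, D) array of appearance features
    quality:  dict mapping track ID -> (N,) array of per-feature quality scores
    """
    # 1) Drop inaccurate / low-quality appearance features from each cluster.
    refined = {}
    for tid, feats in clusters.items():
        keep = quality[tid] >= q_thresh
        if keep.any():
            refined[tid] = feats[keep]

    def centroid(f):
        # L2-normalized mean feature, so the dot product is cosine similarity.
        c = f.mean(axis=0)
        return c / (np.linalg.norm(c) + 1e-12)

    # 2) Merge clusters whose centroids are highly similar, i.e. one person
    #    that was mistakenly assigned multiple IDs.
    ids = sorted(refined)
    merged_into = {}
    for i, a in enumerate(ids):
        if a in merged_into:
            continue
        for b in ids[i + 1:]:
            if b in merged_into:
                continue
            sim = float(centroid(refined[a]) @ centroid(refined[b]))
            if sim >= merge_thresh:
                refined[a] = np.vstack([refined[a], refined[b]])
                merged_into[b] = a
    for b in merged_into:
        del refined[b]
    return refined
```

For example, if IDs 1 and 2 hold near-identical features, their clusters are merged under ID 1, while a dissimilar ID 3 is left untouched.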

Related Material


[pdf]
[bibtex]
@InProceedings{Kim_2024_CVPR,
    author    = {Kim, Jeongho and Shin, Wooksu and Park, Hancheol and Choi, Donghyuk},
    title     = {Cluster Self-Refinement for Enhanced Online Multi-Camera People Tracking},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2024},
    pages     = {7190-7197}
}