Learning to Track: Online Multi-Object Tracking by Decision Making

Yu Xiang, Alexandre Alahi, Silvio Savarese; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2015, pp. 4705-4713

Abstract


Online Multi-Object Tracking (MOT) has wide applications in time-critical video analysis scenarios, such as robot navigation and autonomous driving. In tracking-by-detection, a major challenge of online MOT is how to robustly associate noisy object detections on a new video frame with previously tracked objects. In this work, we formulate the online MOT problem as decision making in Markov Decision Processes (MDPs), where the lifetime of an object is modeled with an MDP. Learning a similarity function for data association is equivalent to learning a policy for the MDP, and policy learning is approached in a reinforcement learning fashion that combines the advantages of offline and online learning for data association. Moreover, our framework naturally handles the birth/death and appearance/disappearance of targets by treating them as state transitions in the MDP, while leveraging existing online single object tracking methods. We conduct experiments on the MOT Benchmark to verify the effectiveness of our method.
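
To make the formulation concrete, the sketch below illustrates in Python how the lifetime of a single target can be modeled as a state machine over the four subspaces used in the paper (Active, Tracked, Lost, Inactive), with birth/death and appearance/disappearance handled as state transitions. The transition conditions, the `max_lost_frames` threshold, and the method names are illustrative assumptions rather than the authors' implementation; in the paper, the decisions driving these transitions are made by the learned policy.

```python
# Minimal sketch (not the authors' code) of the target-lifetime MDP.
# State names follow the paper; transition logic and thresholds are
# illustrative assumptions.

from enum import Enum, auto


class State(Enum):
    ACTIVE = auto()    # new detection, not yet confirmed as a target (birth)
    TRACKED = auto()   # target is visible and being tracked
    LOST = auto()      # target temporarily disappeared (e.g., occlusion)
    INACTIVE = auto()  # target has left the scene for good (death)


class TargetMDP:
    """Lifetime of a single target as a simple state machine."""

    def __init__(self, max_lost_frames=50):
        self.state = State.ACTIVE
        self.lost_frames = 0
        self.max_lost_frames = max_lost_frames  # hypothetical termination rule

    def step(self, detection_is_valid, association_found):
        """Advance one frame given the policy's decisions.

        detection_is_valid: Active -> Tracked if the detection is not a false alarm.
        association_found:  whether the target was matched to a detection this frame.
        """
        if self.state == State.ACTIVE:
            self.state = State.TRACKED if detection_is_valid else State.INACTIVE
        elif self.state == State.TRACKED:
            if not association_found:
                self.state = State.LOST
                self.lost_frames = 1
        elif self.state == State.LOST:
            if association_found:
                self.state = State.TRACKED
                self.lost_frames = 0
            else:
                self.lost_frames += 1
                if self.lost_frames > self.max_lost_frames:
                    self.state = State.INACTIVE  # terminate the target
        return self.state


if __name__ == "__main__":
    target = TargetMDP()
    # Toy sequence: confirmed birth, tracked, occluded for two frames, re-found.
    decisions = [(True, True), (True, True), (True, False), (True, False), (True, True)]
    for valid, assoc in decisions:
        print(target.step(valid, assoc))
```

In the actual method, the inputs to `step` would come from the learned policy (e.g., a similarity function deciding whether a lost target matches a new detection), which is exactly the data-association decision the reinforcement learning procedure is trained to make.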

Related Material


[bibtex]
@InProceedings{Xiang_2015_ICCV,
author = {Xiang, Yu and Alahi, Alexandre and Savarese, Silvio},
title = {Learning to Track: Online Multi-Object Tracking by Decision Making},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2015}
}