Detecting Dynamic Objects with Multi-view Background Subtraction

Raul Diaz, Sam Hallman, Charless C. Fowlkes; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2013, pp. 273-280

Abstract
The confluence of robust algorithms for structure from motion along with high-coverage mapping and imaging of the world around us suggests that it will soon be feasible to accurately estimate camera pose for a large class of photographs taken in outdoor, urban environments. In this paper, we investigate how such information can be used to improve the detection of dynamic objects such as pedestrians and cars. First, we show that when rough camera location is known, we can utilize detectors that have been trained with a scene-specific background model in order to improve detection accuracy. Second, when precise camera pose is available, dense matching to a database of existing images using multi-view stereo provides a way to eliminate static backgrounds such as building facades, akin to the background subtraction often used in video analysis. We evaluate these ideas using a dataset of tourist photos with estimated camera pose. For template-based pedestrian detection, we achieve a 50 percent boost in average precision over baseline.
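
To make the multi-view background-subtraction idea concrete, the following minimal Python sketch (not the authors' pipeline) assumes a static background image has already been reprojected into the current camera view, e.g. via multi-view stereo matching against a photo collection. A per-pixel difference mask then flags likely dynamic regions, and detections lying mostly on static background are down-weighted. The function names, the threshold values, and the simple score-halving rule are illustrative assumptions.

import numpy as np

def background_mask(image, background, threshold=0.15):
    """Flag pixels that differ significantly from the predicted static background.

    image and background are float arrays in [0, 1] with shape (H, W, 3);
    background is assumed to be reprojected into the current camera view.
    Returns a boolean (H, W) mask that is True where the pixel is likely a
    dynamic object rather than static scene structure.
    """
    diff = np.abs(image - background).mean(axis=2)  # mean absolute color difference per pixel
    return diff > threshold

def reweight_detections(detections, mask, min_foreground=0.3):
    """Down-weight detector scores for boxes that lie mostly on static background.

    detections is a list of (x0, y0, x1, y1, score) tuples with integer pixel
    coordinates. Boxes whose fraction of foreground (mask == True) pixels falls
    below min_foreground keep only half their score (an illustrative rule).
    """
    reweighted = []
    for x0, y0, x1, y1, score in detections:
        window = mask[y0:y1, x0:x1]                      # mask region under the box
        frac = window.mean() if window.size else 0.0     # fraction of dynamic pixels
        new_score = score if frac >= min_foreground else 0.5 * score
        reweighted.append((x0, y0, x1, y1, new_score))
    return reweighted

In practice, the paper's approach reasons about static structure using dense correspondence to existing imagery rather than a single warped background frame, but the same intuition applies: regions well explained by the static model should contribute less evidence for a dynamic-object detection.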

Related Material
[bibtex]
@InProceedings{Diaz_2013_ICCV,
author = {Diaz, Raul and Hallman, Sam and Fowlkes, Charless C.},
title = {Detecting Dynamic Objects with Multi-view Background Subtraction},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2013}
}