Modeling Self-Occlusions in Dynamic Shape and Appearance Tracking

Yanchao Yang, Ganesh Sundaramoorthi; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2013, pp. 201-208

Abstract


We present a method to track the precise shape of a dynamic object in video. Joint dynamic shape and appearance models, in which a template of the object is propagated to match the object's shape and radiance in the next frame, are advantageous over methods employing global image statistics in cases of complex object radiance and cluttered background. In cases of complex 3D object motion and relative viewpoint change, self-occlusions and disocclusions of the object are prominent, and current methods employing joint shape and appearance models are unable to accurately adapt to new shape and appearance information, leading to inaccurate shape detection. In this work, we model self-occlusions and disocclusions in a joint shape and appearance tracking framework. Experiments on videos exhibiting occlusion/disocclusion, complex radiance, and cluttered background show that occlusion/disocclusion modeling leads to superior shape accuracy compared to recent methods that employ joint shape/appearance models or global statistics.
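To make the idea of propagating a joint shape and appearance template concrete, the following minimal sketch (not the authors' variational formulation) tracks the appearance inside a binary object mask from one frame to the next and flags pixels whose propagated appearance no longer matches as occluded. The function name propagate_template, the brute-force translation search, and the residual threshold are all illustrative assumptions.

import numpy as np

def propagate_template(prev_frame, next_frame, mask, search=5, occ_thresh=0.15):
    # Illustrative only: stands in for the paper's deformation model with a
    # brute-force integer translation search over a small window.
    ys, xs = np.nonzero(mask)
    best_cost, best_shift = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y2, x2 = ys + dy, xs + dx
            valid = (y2 >= 0) & (y2 < next_frame.shape[0]) & \
                    (x2 >= 0) & (x2 < next_frame.shape[1])
            if not valid.any():
                continue
            # appearance residual between the template and the candidate location
            diff = next_frame[y2[valid], x2[valid]] - prev_frame[ys[valid], xs[valid]]
            cost = np.mean(np.abs(diff))
            if cost < best_cost:
                best_cost, best_shift = cost, (dy, dx)
    dy, dx = best_shift
    y2, x2 = ys + dy, xs + dx
    valid = (y2 >= 0) & (y2 < next_frame.shape[0]) & (x2 >= 0) & (x2 < next_frame.shape[1])
    new_mask = np.zeros_like(mask)
    occluded = np.zeros_like(mask)
    new_mask[y2[valid], x2[valid]] = True
    # pixels whose propagated appearance disagrees with the new frame are
    # marked occluded; newly visible background would be handled analogously
    resid = np.abs(next_frame[y2[valid], x2[valid]] - prev_frame[ys[valid], xs[valid]])
    occ = resid > occ_thresh
    occluded[y2[valid][occ], x2[valid][occ]] = True
    return new_mask, occluded, best_shift

# Example: a bright 10x10 square moves 2 pixels to the right between frames.
f0 = np.zeros((64, 64)); f0[20:30, 20:30] = 1.0
f1 = np.zeros((64, 64)); f1[20:30, 22:32] = 1.0
m1, occ, shift = propagate_template(f0, f1, f0 > 0.5)
print(shift)  # expected (0, 2)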

Related Material


[bibtex]
@InProceedings{Yang_2013_ICCV,
author = {Yang, Yanchao and Sundaramoorthi, Ganesh},
title = {Modeling Self-Occlusions in Dynamic Shape and Appearance Tracking},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {December},
year = {2013}
}