Exploiting the Complementarity of Audio and Visual Data in Multi-Speaker Tracking

Yutong Ban, Laurent Girin, Xavier Alameda-Pineda, Radu Horaud; Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops, 2017, pp. 446-454

Abstract


Multi-speaker tracking is a central problem in human-robot interaction. In this context, exploiting auditory and visual information is gratifying and challenging at the same time. Gratifying because the complementary nature of auditory and visual information allows us to be more robust against noise and outliers than unimodal approaches. Challenging because how to properly fuse auditory and visual information for multi-speaker tracking is far from being a solved problem. In this paper we propose a probabilistic generative model that tracks multiple speakers by jointly exploiting auditory and visual features in their own representation spaces. Importantly, the method is robust to missing data and is therefore able to track even when observations from one of the modalities are absent. Quantitative and qualitative results on the AVDIAR dataset are reported.
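The paper's generative model and its inference procedure are not reproduced here. As a loose illustration of the core idea the abstract states, namely that fusing audio and visual observations with modality-specific noise lets tracking continue when one modality is missing, the following minimal Python sketch runs a single-speaker constant-velocity Kalman filter that applies an update for whichever observations are present at each frame. All matrices, noise levels, and observations are illustrative assumptions, not values or equations from the paper.

    import numpy as np

    def predict(mean, cov, A, Q):
        # Constant-velocity prediction step.
        return A @ mean, A @ cov @ A.T + Q

    def update(mean, cov, z, H, R):
        # Standard Kalman update with one observation z.
        S = H @ cov @ H.T + R
        K = cov @ H.T @ np.linalg.inv(S)
        mean = mean + K @ (z - H @ mean)
        cov = (np.eye(len(mean)) - K @ H) @ cov
        return mean, cov

    # 4-D state (x, y, vx, vy); both modalities observe (x, y) here,
    # with audio assumed noisier than vision (illustrative values).
    A = np.block([[np.eye(2), np.eye(2)],
                  [np.zeros((2, 2)), np.eye(2)]])
    Q = 0.01 * np.eye(4)
    H = np.hstack([np.eye(2), np.zeros((2, 2))])
    R_vis, R_aud = 0.05 * np.eye(2), 0.5 * np.eye(2)

    mean, cov = np.zeros(4), np.eye(4)
    frames = [(np.array([1.0, 0.5]), None),                   # vision only
              (None, np.array([1.1, 0.6])),                   # audio only
              (np.array([1.2, 0.7]), np.array([1.3, 0.6]))]   # both
    for z_vis, z_aud in frames:
        mean, cov = predict(mean, cov, A, Q)
        if z_vis is not None:   # fuse whichever observations exist
            mean, cov = update(mean, cov, z_vis, H, R_vis)
        if z_aud is not None:
            mean, cov = update(mean, cov, z_aud, H, R_aud)
        print(mean[:2])         # current position estimate

Because each modality is updated independently with its own noise covariance, a missing observation simply skips one update rather than breaking the filter; this mirrors, in toy form, the robustness-to-missing-data property the abstract claims, whereas the paper itself handles multiple speakers and keeps each modality in its own representation space.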

Related Material


[bibtex]
@InProceedings{Ban_2017_ICCV,
    author    = {Ban, Yutong and Girin, Laurent and Alameda-Pineda, Xavier and Horaud, Radu},
    title     = {Exploiting the Complementarity of Audio and Visual Data in Multi-Speaker Tracking},
    booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV) Workshops},
    month     = {Oct},
    year      = {2017},
    pages     = {446-454}
}