Weakly Supervised Representation Learning for Unsynchronized Audio-Visual Events

Sanjeel Parekh, Slim Essid, Alexey Ozerov, Ngoc Q. K. Duong, Patrick Pérez, Gaël Richard; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2018, pp. 2518-2519

Abstract

Audio-visual representation learning is an important task from the perspective of designing machines with the ability to understand complex events. To this end, we propose a novel multimodal framework that instantiates multiple instance learning. We show that the learnt representations are useful for classifying events and localizing their characteristic audio-visual elements. The system is trained using only video-level event labels without any timing information. An important feature of our method is its capacity to learn from unsynchronized audio-visual events. We achieve state-of-the-art results on a large-scale dataset of weakly-labeled audio event videos. Visualizations of localized visual regions and audio segments substantiate our system's efficacy, especially when dealing with noisy situations where modality-specific cues appear asynchronously.
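The abstract leaves the architecture unspecified, so the following is only a minimal sketch of the multiple instance learning (MIL) idea it invokes: per-segment "instance" scores are pooled into a single video-level prediction, so training needs nothing beyond the video-level label. The two linear scorers, the feature dimensions, max pooling, and score averaging below are illustrative assumptions, not the paper's actual model; pooling each modality independently also hints at how such a scheme can tolerate audio and visual cues that appear at different times.

import torch
import torch.nn as nn

class MILAudioVisual(nn.Module):
    """Toy MIL model: each modality contributes a bag of temporal
    instances; instance scores are pooled into one video-level score
    per class, so only video-level labels are needed for training."""
    def __init__(self, audio_dim=128, visual_dim=512, num_classes=10):
        super().__init__()
        self.audio_scorer = nn.Linear(audio_dim, num_classes)    # scores per audio segment
        self.visual_scorer = nn.Linear(visual_dim, num_classes)  # scores per visual frame/region

    def forward(self, audio_feats, visual_feats):
        # audio_feats: (batch, T_a, audio_dim); visual_feats: (batch, T_v, visual_dim)
        a_scores = self.audio_scorer(audio_feats)    # (batch, T_a, C)
        v_scores = self.visual_scorer(visual_feats)  # (batch, T_v, C)
        # Max-pool over instances independently per modality: each stream
        # may localize the event at a different time (unsynchronized cues).
        a_video = a_scores.max(dim=1).values
        v_video = v_scores.max(dim=1).values
        return (a_video + v_video) / 2  # fused video-level logits

model = MILAudioVisual()
audio = torch.randn(4, 20, 128)   # e.g. 20 one-second audio segments
visual = torch.randn(4, 30, 512)  # e.g. 30 sampled video frames
logits = model(audio, visual)
labels = torch.randint(0, 10, (4,))           # video-level event labels only
loss = nn.CrossEntropyLoss()(logits, labels)  # no timing supervision
loss.backward()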

Related Material

[pdf] [arXiv]
[bibtex]
@InProceedings{Parekh_2018_CVPR_Workshops,
  author    = {Parekh, Sanjeel and Essid, Slim and Ozerov, Alexey and Duong, Ngoc Q. K. and P{\'e}rez, Patrick and Richard, Ga{\"e}l},
  title     = {Weakly Supervised Representation Learning for Unsynchronized Audio-Visual Events},
  booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2018},
  pages     = {2518-2519}
}