SoundingActions: Learning How Actions Sound from Narrated Egocentric Videos

Changan Chen, Kumar Ashutosh, Rohit Girdhar, David Harwath, Kristen Grauman; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 27252-27262

Abstract


We propose a novel self-supervised embedding to learn how actions sound from narrated in-the-wild egocentric videos. Whereas existing methods rely on curated data with known audio-visual correspondence, our multimodal contrastive-consensus coding (MC3) embedding reinforces the associations between audio, language, and vision when all modality pairs agree, while diminishing those associations when any one pair does not. We show our approach can successfully discover how the long tail of human actions sound from egocentric video, outperforming an array of recent multimodal embedding techniques on two datasets (Ego4D and EPIC-Sounds) and multiple cross-modal tasks.
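
To make the core idea concrete, below is a minimal, hypothetical PyTorch sketch of a consensus-weighted multimodal contrastive objective in the spirit of MC3. It is not the paper's actual formulation: the per-sample consensus weight, the min-over-pairs notion of "all pairs agree", the temperature value, and the tensor shapes are all illustrative assumptions.

```python
# Illustrative sketch (not the authors' implementation) of a contrastive-consensus
# objective over audio, video, and language embeddings.
import torch
import torch.nn.functional as F


def pairwise_infonce(x, y, temperature=0.07):
    """Per-sample symmetric InfoNCE loss between two batches of L2-normalized embeddings."""
    logits = x @ y.t() / temperature                      # (B, B) similarity matrix
    targets = torch.arange(x.size(0), device=x.device)    # matching indices are positives
    return 0.5 * (F.cross_entropy(logits, targets, reduction="none") +
                  F.cross_entropy(logits.t(), targets, reduction="none"))


def consensus_weight(a, v, t):
    """Per-sample agreement across the three modality pairs (assumed form).

    Taking the minimum pairwise similarity means an association is reinforced
    only when all pairs agree; if any one pair disagrees, the weight shrinks.
    """
    sims = torch.stack([(a * v).sum(-1), (a * t).sum(-1), (v * t).sum(-1)], dim=-1)
    return sims.min(dim=-1).values.clamp(min=0.0)


def mc3_style_loss(audio, video, text):
    """Consensus-weighted average of the three pairwise contrastive losses."""
    a, v, t = (F.normalize(z, dim=-1) for z in (audio, video, text))
    w = consensus_weight(a, v, t)                          # (B,) per-sample consensus
    pair_loss = (pairwise_infonce(a, v) +
                 pairwise_infonce(a, t) +
                 pairwise_infonce(v, t)) / 3.0
    return (w.detach() * pair_loss).mean()


# Example usage with random features standing in for encoder outputs.
B, D = 8, 256
a, v, t = (torch.randn(B, D, requires_grad=True) for _ in range(3))
loss = mc3_style_loss(a, v, t)
loss.backward()
```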

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Chen_2024_CVPR,
    author    = {Chen, Changan and Ashutosh, Kumar and Girdhar, Rohit and Harwath, David and Grauman, Kristen},
    title     = {SoundingActions: Learning How Actions Sound from Narrated Egocentric Videos},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {27252-27262}
}