Distilling Audio-Visual Knowledge by Compositional Contrastive Learning
Abstract
Having access to multi-modal cues (e.g. vision and audio) empowers some cognitive tasks to be done faster compared to learning from a single modality. In this work, we propose to transfer knowledge across heterogeneous modalities, even though these data modalities may not be semantically correlated. Rather than directly aligning the representations of different modalities, we compose audio, image, and video representations across modalities to uncover richer multi-modal knowledge. Our main idea is to learn a compositional embedding that closes the cross-modal semantic gap and captures the task-relevant semantics, which facilitates pulling together representations across modalities by compositional contrastive learning. We establish a new, comprehensive multi-modal distillation benchmark on three video datasets: UCF101, ActivityNet, and VGGSound. Moreover, we demonstrate that our model significantly outperforms a variety of existing knowledge distillation methods in transferring audio-visual knowledge to improve video representation learning.
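To make the idea of compositional contrastive learning concrete, the snippet below is a minimal PyTorch-style sketch under stated assumptions: it fuses a teacher-modality embedding (e.g. audio) with the student's video embedding into a compositional embedding and applies an InfoNCE-style contrastive loss with in-batch negatives. The `Composer` module, the `contrastive_loss` helper, and the temperature value are illustrative assumptions, not the authors' exact formulation.

# Minimal sketch of a compositional contrastive objective (assumption:
# PyTorch, in-batch negatives, a simple MLP fusion). Illustrative only;
# this is not the paper's exact loss or architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Composer(nn.Module):
    """Fuses a teacher-modality embedding (e.g. audio) with a student
    video embedding into a single compositional embedding."""

    def __init__(self, dim):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim)
        )

    def forward(self, teacher_feat, student_feat):
        z = self.fuse(torch.cat([teacher_feat, student_feat], dim=-1))
        return F.normalize(z, dim=-1)


def contrastive_loss(anchor, positive, temperature=0.07):
    """InfoNCE-style loss: each anchor is pulled toward its own positive
    and pushed away from the other samples in the batch."""
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    logits = anchor @ positive.t() / temperature  # (B, B) similarity matrix
    targets = torch.arange(anchor.size(0), device=anchor.device)
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    B, D = 8, 128
    audio_feat = torch.randn(B, D)  # from a (frozen) audio teacher
    video_feat = torch.randn(B, D)  # from the video student
    composer = Composer(D)

    composed = composer(audio_feat, video_feat)
    # Pull the student's video embedding toward the compositional
    # embedding of the same clip, using other clips as negatives.
    loss = contrastive_loss(video_feat, composed)
    loss.backward()
    print(float(loss))

In this sketch the composed embedding acts as the bridge between modalities: the student is pulled toward a representation that already mixes in the teacher's audio cues, rather than being aligned to the teacher features directly.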
Related Material
[pdf]
[supp]
[arXiv]
[bibtex]
@InProceedings{Chen_2021_CVPR,
    author    = {Chen, Yanbei and Xian, Yongqin and Koepke, A. Sophia and Shan, Ying and Akata, Zeynep},
    title     = {Distilling Audio-Visual Knowledge by Compositional Contrastive Learning},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {7016-7025}
}