Audio-Visual Event Localization via Recursive Fusion by Joint Co-Attention

Bin Duan, Hao Tang, Wei Wang, Ziliang Zong, Guowei Yang, Yan Yan; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2021, pp. 4013-4022

Abstract


The major challenge in the audio-visual event localization task lies in how to fuse information from multiple modalities effectively. Recent works have shown that attention mechanisms are beneficial to the fusion process. In this paper, we propose a novel joint attention mechanism with multimodal fusion methods for audio-visual event localization. In particular, we present a concise yet effective architecture that learns representations from multiple modalities jointly. First, visual features are combined with auditory features and transformed into joint representations. Next, the joint representations are used to attend to the visual features and the auditory features, respectively. Through this joint co-attention, new visual and auditory features are produced, so each modality benefits from the other. Notably, the joint co-attention unit is recursive: it can be applied multiple times to progressively refine the joint representations. Extensive experiments on the public AVE dataset show that the proposed method achieves significantly better results than state-of-the-art methods.
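
To make the mechanism concrete, below is a minimal sketch of the recursive joint co-attention step described above. It assumes PyTorch, uses multi-head attention as the attention operator, and invents the shapes and hyperparameters (feature dimension dim, recursion count steps); it illustrates the idea in the abstract and is not the authors' implementation.

import torch
import torch.nn as nn

class JointCoAttention(nn.Module):
    """Illustrative recursive joint co-attention unit (hypothetical design)."""
    def __init__(self, dim, num_heads=4):
        super().__init__()
        # Fuse concatenated visual + auditory features into a joint representation.
        self.joint_proj = nn.Linear(2 * dim, dim)
        # The joint representation attends back to each modality separately.
        self.attn_v = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.attn_a = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, v, a, steps=2):
        # v, a: (batch, T, dim) visual and auditory features over T segments.
        for _ in range(steps):  # recursion: repeat the unit to refine the features
            joint = self.joint_proj(torch.cat([v, a], dim=-1))
            v, _ = self.attn_v(query=joint, key=v, value=v)  # new visual features
            a, _ = self.attn_a(query=joint, key=a, value=a)  # new auditory features
        return v, a

# Example usage with made-up sizes: batches of 10 one-second segments.
model = JointCoAttention(dim=128)
v = torch.randn(2, 10, 128)
a = torch.randn(2, 10, 128)
v_new, a_new = model(v, a, steps=2)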

Related Material


[bibtex]
@InProceedings{Duan_2021_WACV,
  author    = {Duan, Bin and Tang, Hao and Wang, Wei and Zong, Ziliang and Yang, Guowei and Yan, Yan},
  title     = {Audio-Visual Event Localization via Recursive Fusion by Joint Co-Attention},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  month     = {January},
  year      = {2021},
  pages     = {4013-4022}
}