Audiovisual Transformer with Instance Attention for Audio-Visual Event Localization

Yan-Bo Lin, Yu-Chiang Frank Wang; Proceedings of the Asian Conference on Computer Vision (ACCV), 2020

Abstract


Audio-visual event localization requires one to identify the event label across video frames by jointly observing visual and audio information. To address this task, we propose a deep learning framework of cross-modality co-attention for video event localization. Our proposed audiovisual transformer (AV-transformer) is able to exploit intra- and inter-frame visual information, with audio features jointly observed to perform co-attention over the above three modalities. With visual, temporal, and audio information observed across consecutive video frames, our model achieves promising capability in extracting informative spatial/temporal features for improved event localization. Moreover, our model produces instance-level attention, identifying image regions at the instance level that are associated with the sound/event of interest. Experiments on a benchmark dataset confirm the effectiveness of our proposed framework, with ablation studies performed to verify the design of our proposed network model.
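To illustrate the general idea of cross-modality co-attention described above, the following is a minimal sketch (not the authors' implementation); it assumes 128-d audio embeddings and a flattened 7x7x512 visual feature map per one-second segment, and the class name `CoAttention` and all dimensions are illustrative choices. Audio-guided attention over visual regions yields region weights, which is one way instance-level localization of the sounding object can be read out.

```python
# Minimal cross-modality co-attention sketch (illustrative, not the paper's exact model).
import torch
import torch.nn as nn

class CoAttention(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        self.audio_proj = nn.Linear(128, dim)    # audio embedding -> common space
        self.visual_proj = nn.Linear(512, dim)   # visual feature  -> common space
        # audio queries attend over visual regions (instance-level attention)
        self.a2v = nn.MultiheadAttention(dim, heads, batch_first=True)
        # visual queries attend over the audio token of the same segment
        self.v2a = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, audio, visual):
        # audio:  (B, 1, 128)  one embedding per 1-second segment
        # visual: (B, 49, 512) flattened 7x7 spatial grid per segment
        a = self.audio_proj(audio)
        v = self.visual_proj(visual)
        # audio-guided visual attention; weights indicate sounding regions
        v_att, region_weights = self.a2v(query=a, key=v, value=v)
        # visually guided audio attention (global visual query)
        a_att, _ = self.v2a(query=v.mean(dim=1, keepdim=True), key=a, value=a)
        # fused segment-level representation for event classification
        fused = torch.cat([v_att, a_att], dim=-1).squeeze(1)
        return fused, region_weights

# Usage: a batch of 2 one-second segments
audio = torch.randn(2, 1, 128)
visual = torch.randn(2, 49, 512)
fused, weights = CoAttention()(audio, visual)
print(fused.shape, weights.shape)  # torch.Size([2, 512]) torch.Size([2, 1, 49])
```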

Related Material


[pdf] [supp]
[bibtex]
@InProceedings{Lin_2020_ACCV,
  author    = {Lin, Yan-Bo and Wang, Yu-Chiang Frank},
  title     = {Audiovisual Transformer with Instance Attention for Audio-Visual Event Localization},
  booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
  month     = {November},
  year      = {2020}
}