AlignNet: A Unifying Approach to Audio-Visual Alignment

Jianren Wang, Zhaoyuan Fang, Hang Zhao; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2020, pp. 3309-3317

Abstract


We present AlignNet, a model that synchronizes videos with reference audio under non-uniform and irregular misalignments. AlignNet learns, end-to-end, a dense correspondence between each frame of a video and the audio. Our method is designed according to simple and well-established principles: attention, pyramidal processing, warping, and an affinity function. Together with the model, we release Dance50, a dancing dataset for training and evaluation. Qualitative, quantitative, and subjective evaluation results on dance-music alignment and speech-lip alignment demonstrate that our method far outperforms state-of-the-art methods. Code, the dataset, and sample videos are available on our project page.
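As a rough illustration of the affinity-function idea named above (this is a hypothetical sketch, not the authors' released code, and `affinity_alignment` is an invented name): per-frame visual features and per-step audio features can be compared with a cosine-similarity affinity matrix, from which each video frame reads off its best-matching audio step.

```python
import numpy as np

def affinity_alignment(video_feats, audio_feats):
    """Toy affinity-based alignment (illustrative, not the paper's model).

    video_feats: (T_v, D) array, one feature vector per video frame.
    audio_feats: (T_a, D) array, one feature vector per audio step.
    Returns, for each video frame, the index of the most similar audio step.
    """
    # L2-normalize so the dot product becomes cosine similarity.
    v = video_feats / np.linalg.norm(video_feats, axis=1, keepdims=True)
    a = audio_feats / np.linalg.norm(audio_feats, axis=1, keepdims=True)
    affinity = v @ a.T              # (T_v, T_a) affinity matrix
    return affinity.argmax(axis=1)  # best audio index per video frame

# Toy check: a video whose frames start two steps into the audio
# should map frame i to audio step i + 2.
rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 16))
print(affinity_alignment(feats[2:], feats))  # [2 3 4 5 6 7]
```

The full model replaces this hard argmax with attention over a feature pyramid and uses the resulting correspondences to warp the video, but the affinity matrix above is the basic comparison primitive.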

Related Material


@InProceedings{Wang_2020_WACV,
author = {Wang, Jianren and Fang, Zhaoyuan and Zhao, Hang},
title = {AlignNet: A Unifying Approach to Audio-Visual Alignment},
booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
month = {March},
year = {2020},
pages = {3309-3317}
}