Cross-Attention Transformer for Video Interpolation

Hannah Halin Kim, Shuzhi Yu, Shuai Yuan, Carlo Tomasi; Proceedings of the Asian Conference on Computer Vision (ACCV) Workshops, 2022, pp. 320-337

Abstract


We propose TAIN (Transformers and Attention for video INterpolation), a residual neural network for video interpolation that predicts an intermediate frame given the two consecutive input frames around it. We first present a novel vision transformer module, named Cross-Similarity (CS), to globally aggregate input image features whose appearance is similar to that of the predicted interpolated frame. These CS features are then used to refine the interpolated prediction. To account for occlusions in the CS features, we propose an Image Attention (IA) module that lets the network favor CS features from one frame over those of the other. On the Vimeo90k, UCF101, and SNU-FILM benchmarks, TAIN outperforms existing methods that do not require flow estimation and performs comparably to flow-based methods while remaining efficient in inference time.
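The sketch below illustrates the two ideas named in the abstract: a cross-attention step in which features of the current interpolated estimate act as queries against the features of each input frame (Cross-Similarity), followed by a per-pixel gate that blends the two resulting feature maps (Image Attention). This is a minimal PyTorch sketch written from the abstract alone; the module names, single-head design, 1x1 projections, and sigmoid gate are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of cross-similarity attention and an image-attention gate.
# Shapes, layer choices, and names are assumptions for illustration only.
import torch
import torch.nn as nn


class CrossSimilarity(nn.Module):
    """Globally aggregate input-frame features that resemble the prediction.

    Queries come from features of the interpolated estimate; keys/values come
    from one input frame, so each output location is a similarity-weighted
    mix over all locations of that frame.
    """

    def __init__(self, dim):
        super().__init__()
        self.q = nn.Conv2d(dim, dim, 1)
        self.k = nn.Conv2d(dim, dim, 1)
        self.v = nn.Conv2d(dim, dim, 1)
        self.scale = dim ** -0.5

    def forward(self, pred_feat, frame_feat):
        B, C, H, W = pred_feat.shape
        q = self.q(pred_feat).flatten(2).transpose(1, 2)   # (B, HW, C)
        k = self.k(frame_feat).flatten(2)                  # (B, C, HW)
        v = self.v(frame_feat).flatten(2).transpose(1, 2)  # (B, HW, C)
        attn = torch.softmax(q @ k * self.scale, dim=-1)   # (B, HW, HW)
        return (attn @ v).transpose(1, 2).reshape(B, C, H, W)


class ImageAttention(nn.Module):
    """Per-pixel gate preferring CS features from one frame over the other,
    e.g. where a region is occluded in one of the two inputs."""

    def __init__(self, dim):
        super().__init__()
        self.gate = nn.Conv2d(2 * dim, 1, 3, padding=1)

    def forward(self, cs0, cs1):
        w = torch.sigmoid(self.gate(torch.cat([cs0, cs1], dim=1)))  # (B,1,H,W)
        return w * cs0 + (1 - w) * cs1


# Usage: refine a prediction's features with both input frames.
dim = 64
cs, ia = CrossSimilarity(dim), ImageAttention(dim)
pred, f0, f1 = (torch.randn(1, dim, 32, 32) for _ in range(3))
refined = ia(cs(pred, f0), cs(pred, f1))  # (1, 64, 32, 32)
```

Note that the attention matrix is HW x HW, so in practice such a module would be applied at a coarse feature resolution to keep memory manageable.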

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Kim_2022_ACCV,
  author    = {Kim, Hannah Halin and Yu, Shuzhi and Yuan, Shuai and Tomasi, Carlo},
  title     = {Cross-Attention Transformer for Video Interpolation},
  booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV) Workshops},
  month     = {December},
  year      = {2022},
  pages     = {320-337}
}