Video Frame Interpolation With Transformer
Abstract
Video frame interpolation (VFI), which aims to synthesize intermediate frames of a video, has made remarkable progress with the development of deep convolutional networks over the past years. Existing methods built upon convolutional networks generally face challenges in handling large motion due to the locality of convolution operations. To overcome this limitation, we introduce a novel framework that takes advantage of Transformers to model long-range pixel correlation among video frames. Furthermore, our network is equipped with a novel cross-scale window-based attention mechanism, in which cross-scale windows interact with each other. This design effectively enlarges the receptive field and aggregates multi-scale information. Extensive quantitative and qualitative experiments demonstrate that our method achieves new state-of-the-art results on various benchmarks.
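The sketch below is a rough, unofficial illustration of the cross-scale window-based attention idea described in the abstract, not the paper's implementation. All specifics are assumptions made for illustration: the module name, the window size, downsampling by average pooling, and the way each fine-scale window is paired with the co-located window on a 2x-downsampled feature map so that queries can also attend to keys/values drawn from a larger spatial region.

# Illustrative sketch only; names, window size, and the coarse-window pairing
# scheme are assumptions, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CrossScaleWindowAttention(nn.Module):
    """Queries from a fine-scale window attend to keys/values from both that
    window and the co-located window on a 2x-downsampled feature map, which
    spans a larger region of the frame and thus enlarges the receptive field."""

    def __init__(self, dim, window_size=8, num_heads=4):
        super().__init__()
        self.ws = window_size
        self.num_heads = num_heads
        self.scale = (dim // num_heads) ** -0.5
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, dim * 2)
        self.proj = nn.Linear(dim, dim)

    def _partition(self, x):
        # (B, C, H, W) -> (B, nH, nW, ws*ws, C) window tokens
        B, C, H, W = x.shape
        ws = self.ws
        x = x.view(B, C, H // ws, ws, W // ws, ws)
        return x.permute(0, 2, 4, 3, 5, 1).reshape(B, H // ws, W // ws, ws * ws, C)

    def forward(self, x):
        # x: (B, C, H, W); H and W assumed divisible by 2 * window_size.
        B, C, H, W = x.shape
        ws, nh = self.ws, self.num_heads

        fine = self._partition(x)                     # (B, nH, nW, ws*ws, C)
        coarse = self._partition(F.avg_pool2d(x, 2))  # (B, nH/2, nW/2, ws*ws, C)
        # Each coarse window covers a 2x2 block of fine windows; replicate it so
        # every fine window is paired with the coarse window that contains it.
        coarse = coarse.repeat_interleave(2, dim=1).repeat_interleave(2, dim=2)

        nH, nW = fine.shape[1], fine.shape[2]
        fine = fine.reshape(B * nH * nW, ws * ws, C)
        coarse = coarse.reshape(B * nH * nW, ws * ws, C)

        q = self.q(fine)  # queries come from the fine scale only
        k, v = self.kv(torch.cat([fine, coarse], dim=1)).chunk(2, dim=-1)

        def split_heads(t):
            n, L, _ = t.shape
            return t.view(n, L, nh, C // nh).transpose(1, 2)  # (n, heads, L, d)

        q, k, v = split_heads(q), split_heads(k), split_heads(v)
        attn = ((q * self.scale) @ k.transpose(-2, -1)).softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B * nH * nW, ws * ws, C)
        out = self.proj(out)

        # Merge windows back to (B, C, H, W).
        out = out.view(B, nH, nW, ws, ws, C)
        return out.permute(0, 5, 1, 3, 2, 4).reshape(B, C, H, W)

if __name__ == "__main__":
    layer = CrossScaleWindowAttention(dim=32, window_size=8, num_heads=4)
    feat = torch.randn(1, 32, 64, 64)  # dummy feature map
    print(layer(feat).shape)           # torch.Size([1, 32, 64, 64])

In this toy version the concatenated fine and coarse keys/values give each query twice as many tokens to attend to while only doubling the per-window attention cost, which is the general motivation behind mixing window scales.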
Related Material
[pdf] [supp] [arXiv] [bibtex]
@InProceedings{Lu_2022_CVPR,
  author    = {Lu, Liying and Wu, Ruizheng and Lin, Huaijia and Lu, Jiangbo and Jia, Jiaya},
  title     = {Video Frame Interpolation With Transformer},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2022},
  pages     = {3532-3542}
}