@InProceedings{Isobe_2022_CVPR,
  author    = {Isobe, Takashi and Jia, Xu and Tao, Xin and Li, Changlin and Li, Ruihuang and Shi, Yongjie and Mu, Jing and Lu, Huchuan and Tai, Yu-Wing},
  title     = {Look Back and Forth: Video Super-Resolution With Explicit Temporal Difference Modeling},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2022},
  pages     = {17411-17420}
}
Look Back and Forth: Video Super-Resolution With Explicit Temporal Difference Modeling
Abstract
Temporal modeling is crucial for video super-resolution. Most video super-resolution methods adopt optical flow or deformable convolution for explicit motion compensation. However, such temporal modeling techniques increase model complexity and may fail under occlusion or complex motion, resulting in serious distortion and artifacts. In this paper, we propose to explore the role of explicit temporal difference modeling in both LR and HR space. Instead of directly feeding consecutive frames into a VSR model, we compute the temporal difference between frames and divide the pixels into two subsets according to the level of difference. The two subsets are processed separately by two branches with different receptive fields in order to better extract complementary information. To further enhance the super-resolution result, we extract not only spatial residual features but also the difference between consecutive frames in the high-frequency domain. This allows the model to exploit intermediate SR results from both the future and the past to refine the current SR output. The differences at different time steps can be cached, so that information from more distant time steps can be propagated to the current frame for refinement. Experiments on several video super-resolution benchmark datasets demonstrate the effectiveness of the proposed method and its favorable performance against state-of-the-art methods.
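The pixel partition described in the abstract can be illustrated with a minimal sketch: compute the absolute inter-frame difference and threshold it into two masks, one per branch. The function name, the threshold value, and the channel-averaging step are assumptions for illustration only, not the paper's exact configuration.

```python
import numpy as np

def split_by_temporal_difference(prev_frame, curr_frame, threshold=0.1):
    """Divide pixels into low- and high-difference subsets between frames.

    Hypothetical sketch: the actual branch architectures and threshold
    used in the paper are not specified in the abstract.
    """
    # Absolute temporal difference between consecutive frames.
    diff = np.abs(curr_frame.astype(np.float32) - prev_frame.astype(np.float32))
    if diff.ndim == 3:
        # Average over color channels to get a per-pixel scalar difference.
        diff = diff.mean(axis=-1)
    high_mask = diff > threshold   # fast-changing regions (e.g. motion)
    low_mask = ~high_mask          # slowly varying regions
    return low_mask, high_mask
```

Each subset would then be routed to a branch with a different receptive field; this sketch only demonstrates the mask computation itself.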