Robust Video Super-Resolution With Learned Temporal Dynamics
Ding Liu, Zhaowen Wang, Yuchen Fan, Xianming Liu, Zhangyang Wang, Shiyu Chang, Thomas Huang; Proceedings of the IEEE International Conference on Computer Vision (ICCV), 2017, pp. 2507-2515
Abstract
Video super-resolution (SR) aims to generate a high-resolution (HR) frame from multiple low-resolution (LR) frames. The inter-frame temporal relation is as crucial as the intra-frame spatial relation for tackling this problem. However, utilizing temporal information efficiently and effectively remains challenging, since complex motion is difficult to model and can introduce adverse effects if not handled properly. We address this problem from two aspects. First, we propose a temporal adaptive neural network that can adaptively determine the optimal scale of temporal dependency: filters spanning various temporal scales are applied to the input LR sequence, and their responses are adaptively aggregated. Second, we reduce the complexity of motion between neighboring frames with a spatial alignment network that is much more robust and efficient than competing alignment methods and can be jointly trained with the temporal adaptive network in an end-to-end manner. Our proposed models with learned temporal dynamics are systematically evaluated on public video datasets and achieve state-of-the-art SR results compared with other recent video SR approaches. Both the temporal adaptation and the spatial alignment modules are shown to considerably improve SR quality over their plain counterparts.
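To make the temporal adaptive aggregation described in the abstract concrete, the following PyTorch sketch builds several SR branches, each operating on a different number of input frames (temporal scales), and combines their outputs with pixel-wise weights predicted by a modulation branch. This is a minimal illustration under stated assumptions, not the authors' implementation; all module and parameter names (SRBranch, TemporalAdaptiveSR, temporal_scales, n_feats) are hypothetical.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SRBranch(nn.Module):
    """One inference branch operating on a fixed temporal scale (number of frames)."""
    def __init__(self, n_frames, n_feats=64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(n_frames, n_feats, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(n_feats, n_feats, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(n_feats, 1, 3, padding=1),  # HR estimate for the center frame
        )

    def forward(self, frames):           # frames: (B, n_frames, H, W)
        return self.body(frames)          # -> (B, 1, H, W)

class TemporalAdaptiveSR(nn.Module):
    """Aggregates branch outputs with pixel-wise weights from a modulation branch."""
    def __init__(self, temporal_scales=(1, 3, 5), n_feats=64):
        super().__init__()
        self.scales = temporal_scales
        self.branches = nn.ModuleList([SRBranch(t, n_feats) for t in temporal_scales])
        # The modulation branch sees the largest temporal window and predicts one
        # weight map per branch; a softmax makes the aggregation adaptive per pixel.
        self.modulation = nn.Sequential(
            nn.Conv2d(max(temporal_scales), n_feats, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(n_feats, len(temporal_scales), 3, padding=1),
        )

    def forward(self, frames):            # frames: (B, T, H, W), T = max temporal scale
        center = frames.shape[1] // 2
        outs = []
        for t, branch in zip(self.scales, self.branches):
            half = t // 2                  # take t frames centered on the target frame
            outs.append(branch(frames[:, center - half: center + half + 1]))
        weights = F.softmax(self.modulation(frames), dim=1)   # (B, n_branches, H, W)
        return (torch.stack(outs, dim=1).squeeze(2) * weights).sum(dim=1, keepdim=True)

# Usage: five bicubically upsampled LR frames centered on the target frame.
sr = TemporalAdaptiveSR()
hr = sr(torch.randn(2, 5, 64, 64))        # -> (2, 1, 64, 64)

In the paper's terms, the softmax weights play the role of the learned temporal dynamics: regions with reliable motion can lean on wide temporal windows, while regions with complex motion can fall back to narrower ones.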
Related Material
[pdf]
[bibtex]
@InProceedings{Liu_2017_ICCV,
author = {Liu, Ding and Wang, Zhaowen and Fan, Yuchen and Liu, Xianming and Wang, Zhangyang and Chang, Shiyu and Huang, Thomas},
title = {Robust Video Super-Resolution With Learned Temporal Dynamics},
booktitle = {Proceedings of the IEEE International Conference on Computer Vision (ICCV)},
month = {Oct},
year = {2017}
}