Non-Local ConvLSTM for Video Compression Artifact Reduction

Yi Xu, Longwen Gao, Kai Tian, Shuigeng Zhou, Huyang Sun; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2019, pp. 7043-7052


Video compression artifact reduction aims to recover high-quality videos from low-quality compressed videos. Most existing approaches use a single neighboring frame or a pair of neighboring frames (preceding and/or following the target frame) for this task. Furthermore, because frames of high overall quality may contain low-quality patches, and high-quality patches may exist in frames of low overall quality, current methods that focus on nearby peak-quality frames (PQFs) may miss high-quality details in low-quality frames. To remedy these shortcomings, in this paper we propose a novel end-to-end deep neural network called non-local ConvLSTM (NL-ConvLSTM for short) that exploits multiple consecutive frames. An approximate non-local strategy is introduced in NL-ConvLSTM to capture global motion patterns and trace the spatiotemporal dependency in a video sequence. This approximation lets the non-local module run quickly with a low memory footprint. Our method uses the preceding and following frames of the target frame to generate a residual, from which a higher-quality frame is reconstructed. Experiments on two datasets show that NL-ConvLSTM outperforms existing methods.
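The core idea of the approximate non-local strategy is to estimate pixel-level dependencies between frames without materializing the full pixel-to-pixel affinity matrix, whose size grows quadratically with the number of pixels. The sketch below illustrates one common way to achieve this, computing the non-local affinity between block-averaged (pooled) features instead of individual pixels; the function name, block size, and pooling scheme here are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def nonlocal_aggregate(target, reference, block=4):
    """Block-wise approximate non-local aggregation (illustrative sketch).

    target, reference: (H, W, C) feature maps. Similarity is computed
    between block-averaged descriptors rather than individual pixels,
    shrinking the HW x HW affinity matrix by block**2 along each axis.
    """
    H, W, C = target.shape
    h, w = H // block, W // block

    def pool(x):
        # Average-pool each map into (h*w, C) block descriptors.
        x = x[:h * block, :w * block]
        return x.reshape(h, block, w, block, C).mean(axis=(1, 3)).reshape(h * w, C)

    q = pool(target)      # queries from the target frame
    k = pool(reference)   # keys/values from the reference frame

    # Affinity between every target block and every reference block.
    sim = q @ k.T / np.sqrt(C)
    sim -= sim.max(axis=1, keepdims=True)   # numerical stability
    wts = np.exp(sim)
    wts /= wts.sum(axis=1, keepdims=True)   # row-wise softmax

    out = wts @ k                           # aggregated block features
    # Nearest-neighbor upsample back to pixel resolution.
    return out.reshape(h, w, C).repeat(block, axis=0).repeat(block, axis=1)
```

With `block=4` on an `H x W` map, the affinity matrix has `(HW/16)**2` entries instead of `(HW)**2`, which is what makes the non-local step tractable over multiple consecutive frames.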

Related Material

@InProceedings{Xu_2019_ICCV,
    author = {Xu, Yi and Gao, Longwen and Tian, Kai and Zhou, Shuigeng and Sun, Huyang},
    title = {Non-Local ConvLSTM for Video Compression Artifact Reduction},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month = {October},
    year = {2019}
}