Spatially-Adaptive Feature Modulation for Efficient Image Super-Resolution

Long Sun, Jiangxin Dong, Jinhui Tang, Jinshan Pan; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 13190-13199

Abstract


Although deep learning-based solutions have achieved impressive reconstruction performance in image super-resolution (SR), these models are generally large and architecturally complex, making them difficult to deploy on low-power devices with tight computational and memory constraints. To overcome these challenges, we propose a spatially-adaptive feature modulation (SAFM) mechanism for efficient SR design. Specifically, the SAFM layer uses independent computations to learn multi-scale feature representations and aggregates these features for dynamic spatial modulation. Since the SAFM prioritizes exploiting non-local feature dependencies, we further introduce a convolutional channel mixer (CCM) to encode local contextual information and mix channels simultaneously. Extensive experimental results show that the proposed method is 3x smaller than state-of-the-art efficient SR methods, e.g., IMDN, while yielding comparable performance with much lower memory usage. Our source code and pre-trained models are available at: https://github.com/sunny2109/SAFMN.
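
To make the abstract's description of the two building blocks concrete, below is a minimal PyTorch sketch of a SAFM-style layer and a convolutional channel mixer. The four-way channel split, the depthwise 3x3 convolutions, the pooled grid sizes, and the GELU gating are illustrative assumptions consistent with the description above, not the authors' exact configuration; the official implementation is available in the linked repository.

# Minimal sketch (assumptions noted above), not the authors' exact settings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SAFM(nn.Module):
    # Spatially-adaptive feature modulation (sketch): each channel group is
    # processed independently at a different spatial scale, the results are
    # aggregated, and the aggregate gates the input feature map.
    def __init__(self, dim, n_levels=4):
        super().__init__()
        self.n_levels = n_levels
        g = dim // n_levels
        # one independent depthwise conv per scale level
        self.dw_convs = nn.ModuleList(
            [nn.Conv2d(g, g, 3, padding=1, groups=g) for _ in range(n_levels)]
        )
        self.aggregate = nn.Conv2d(dim, dim, 1)  # fuses the multi-scale features
        self.act = nn.GELU()

    def forward(self, x):
        h, w = x.shape[-2:]
        outs = []
        for i, (part, conv) in enumerate(zip(x.chunk(self.n_levels, dim=1), self.dw_convs)):
            if i > 0:
                # progressively coarser grids give each level a larger receptive field
                size = (max(h // 2 ** i, 1), max(w // 2 ** i, 1))
                feat = F.adaptive_max_pool2d(part, size)
                feat = conv(feat)
                feat = F.interpolate(feat, size=(h, w), mode='nearest')
            else:
                feat = conv(part)
            outs.append(feat)
        # aggregated multi-scale features act as a spatial modulation (gating) map
        return x * self.act(self.aggregate(torch.cat(outs, dim=1)))

class CCM(nn.Module):
    # Convolutional channel mixer (sketch): a 3x3 conv encodes local context
    # while expanding channels, a 1x1 conv mixes them back down.
    def __init__(self, dim, expansion=2.0):
        super().__init__()
        hidden = int(dim * expansion)
        self.mix = nn.Sequential(
            nn.Conv2d(dim, hidden, 3, padding=1),
            nn.GELU(),
            nn.Conv2d(hidden, dim, 1),
        )

    def forward(self, x):
        return self.mix(x)

# quick shape check on a dummy feature map
x = torch.randn(1, 36, 48, 48)
print(CCM(36)(SAFM(36)(x)).shape)  # torch.Size([1, 36, 48, 48])

The element-wise multiplication of the input with the aggregated multi-scale map is what makes the modulation spatially adaptive: each position is re-weighted by context gathered at several resolutions, while the CCM complements this with purely local 3x3/1x1 channel mixing.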

Related Material


@InProceedings{Sun_2023_ICCV,
    author    = {Sun, Long and Dong, Jiangxin and Tang, Jinhui and Pan, Jinshan},
    title     = {Spatially-Adaptive Feature Modulation for Efficient Image Super-Resolution},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {13190-13199}
}