Reparameterized Residual Feature Network for Lightweight Image Super-Resolution
To ease the deployment of super-resolution models on resource-limited devices, this paper examines the performance and efficiency trade-offs between the information distillation mechanism and the residual learning mechanism used in lightweight super-resolution, and proposes a reparameterization-based lightweight super-resolution network, named RepRFN, which effectively reduces GPU memory consumption and improves inference speed. A multi-scale feature fusion structure is designed so that the network can learn and integrate features at multiple scales together with high-frequency edge information. We also reexamine redundancy in the overall network framework and remove redundant modules, further reducing model complexity with minimal impact on performance. In addition, we introduce a loss function based on the Fourier transform, which maps images from the spatial domain to the frequency domain so that the network can be supervised on the frequency components of the image. Experimental results show that RepRFN achieves competitive performance at relatively low complexity, which facilitates deployment on edge devices. Code is available at https://github.com/laonafahaodange/RepRFN.
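The frequency-domain supervision described above can be illustrated with a minimal sketch: compare the 2-D Fourier spectra of the reconstructed and ground-truth images and penalize their difference. The exact formulation used in RepRFN (choice of norm, weighting, combination with spatial losses) is not specified here, so the L1 distance on complex spectra below is an assumption for illustration.

```python
import numpy as np

def frequency_loss(sr, hr):
    """Fourier-domain loss between a super-resolved image `sr` and its
    ground truth `hr` (arrays of shape (..., H, W)).

    Sketch only: the L1 norm on complex spectra is an assumed choice,
    not necessarily the formulation used in the paper.
    """
    # Transform both images from the spatial domain to the frequency domain.
    sr_freq = np.fft.fft2(sr, axes=(-2, -1))
    hr_freq = np.fft.fft2(hr, axes=(-2, -1))
    # Penalize the magnitude of the gap between the complex spectra,
    # which supervises the frequency content of the reconstruction.
    return float(np.mean(np.abs(sr_freq - hr_freq)))

# Hypothetical usage with random images standing in for SR output / ground truth.
rng = np.random.default_rng(0)
hr = rng.random((3, 32, 32))          # "ground-truth" image, 3 channels
sr = hr + 0.01 * rng.standard_normal(hr.shape)  # "reconstruction" with noise
loss = frequency_loss(sr, hr)
```

A loss of this form complements pixel-wise losses such as L1, which are dominated by low-frequency content, by explicitly penalizing errors in high-frequency components such as edges and textures.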