@InProceedings{Mehta_2025_CVPR,
  author    = {Mehta, Nancy and Dudhane, Akshay and Murala, Subrahmanyam and Timofte, Radu},
  title     = {KernFusNet: Implicit Kernel Modulation and Fusion for Blind Super-resolution},
  booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR) Workshops},
  month     = {June},
  year      = {2025},
  pages     = {832-842}
}
KernFusNet: Implicit Kernel Modulation and Fusion for Blind Super-resolution
Abstract
Convolutional neural network-based super-resolution (SR) methods have achieved significant success on ideal, predefined downsampling (bicubic) kernels. However, these algorithms struggle with the unknown degradations in real-world data, which often follow a spatially variant distribution. Recently proposed blind SR studies address this issue by estimating degradation kernels, but their results often exhibit artifacts and deformed details because redundant information is incorporated during kernel estimation. Additionally, effectively merging the estimated kernels into the feature space of the SR network is challenging. To overcome these issues, we introduce a novel network, KernFusNet, which simultaneously learns the degradation kernel and the relevant content information to adapt to the blur characteristics of real-world images. Specifically, KernFusNet comprises two components: an Implicit Kernel Estimation (IKE) module and a Kernel-Prior Oriented Detail Fusion (KPDF) module. The IKE module estimates the degradation kernel from low-resolution contexts, while the KPDF module effectively merges the relevant information based on the learned degradations in both high-resolution and low-resolution spaces. Comprehensive experiments on real-world and synthetic datasets demonstrate that our network achieves state-of-the-art performance on the task of blind SR.
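To make the two-module design concrete, below is a minimal PyTorch sketch of a kernel-estimation-plus-fusion blind-SR pipeline. The class names, layer widths, embedding size, and the channel-wise affine fusion are illustrative assumptions, not the paper's actual IKE/KPDF implementation (which, per the abstract, also fuses information in both high- and low-resolution spaces).

```python
# Hypothetical sketch of a two-module blind-SR pipeline (not the official code).
import torch
import torch.nn as nn
import torch.nn.functional as F


class ImplicitKernelEstimator(nn.Module):
    """Stand-in for the IKE module: predicts a compact degradation-kernel
    embedding from low-resolution context (assumed design)."""
    def __init__(self, in_ch=3, embed_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, 64, 3, padding=1), nn.LeakyReLU(0.1, inplace=True),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.1, inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(64, embed_dim)

    def forward(self, lr):
        feat = self.encoder(lr).flatten(1)   # (B, 64) global LR context
        return self.fc(feat)                 # (B, embed_dim) kernel embedding


class KernelPriorFusion(nn.Module):
    """Stand-in for the KPDF module: merges the kernel prior into SR features,
    here via simple channel-wise affine modulation (an assumption)."""
    def __init__(self, feat_ch=64, embed_dim=64):
        super().__init__()
        self.to_scale = nn.Linear(embed_dim, feat_ch)
        self.to_shift = nn.Linear(embed_dim, feat_ch)

    def forward(self, feat, kernel_embed):
        scale = self.to_scale(kernel_embed)[:, :, None, None]
        shift = self.to_shift(kernel_embed)[:, :, None, None]
        return feat * (1 + scale) + shift


class BlindSRSketch(nn.Module):
    """Toy blind-SR network combining kernel estimation and fusion with a
    small SR head; widths and depth are placeholders."""
    def __init__(self, scale=4, feat_ch=64, embed_dim=64):
        super().__init__()
        self.scale = scale
        self.ike = ImplicitKernelEstimator(embed_dim=embed_dim)
        self.head = nn.Conv2d(3, feat_ch, 3, padding=1)
        self.fusion = KernelPriorFusion(feat_ch, embed_dim)
        self.body = nn.Sequential(
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.LeakyReLU(0.1, inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1),
        )
        self.tail = nn.Conv2d(feat_ch, 3, 3, padding=1)

    def forward(self, lr):
        k = self.ike(lr)             # estimate the degradation embedding from LR input
        feat = self.head(lr)
        feat = self.fusion(feat, k)  # fuse the kernel prior into the feature space
        feat = feat + self.body(feat)
        up = F.interpolate(feat, scale_factor=self.scale, mode="nearest")
        return self.tail(up)


if __name__ == "__main__":
    lr = torch.randn(1, 3, 32, 32)
    sr = BlindSRSketch(scale=4)(lr)
    print(sr.shape)  # torch.Size([1, 3, 128, 128])
```

The key design point illustrated here is that the kernel estimate is injected as a conditioning signal into the restoration features rather than used to explicitly deconvolve the input; the paper's KPDF module performs this merging at both low- and high-resolution stages.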