Multi-Scale Contrastive-Adversarial Distillation for Super-Resolution
Abstract
Knowledge distillation (KD) is a powerful technique for model compression, enabling the creation of compact and efficient "student" models by transferring knowledge from large-scale, pre-trained "teacher" models. However, applying traditional KD methods to single image super-resolution (SISR) is considerably more challenging than in high-level tasks such as classification, because SISR is a regression problem that reconstructs image pixels rather than predicting discrete labels. Hence, to effectively distill the knowledge of a teacher model for SR, we propose MCAD-KD, Multi-Scale Contrastive-Adversarial Distillation for Super-Resolution. We utilize a novel hybrid contrastive learning framework that operates on both global (image-level) and local (patch-level) scales. Furthermore, we integrate adversarial guidance, which pushes the student's output towards the manifold of realistic images, allowing it to potentially surpass the perceptual quality of the teacher by learning directly from the ground-truth data distribution. Our comprehensive framework synergistically combines these components to train a lightweight student model that achieves a superior trade-off between perceptual quality and computational efficiency.
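The abstract does not give the exact losses or architectures, so the PyTorch sketch below only illustrates how such a hybrid objective could be wired together. The embedding function `embed`, the helpers `hybrid_contrastive_loss`, `adversarial_guidance`, and `total_student_loss`, the patch size, and the loss weights are hypothetical placeholders for illustration, not the authors' implementation.

```python
# Minimal sketch, not the paper's released code: `embed`, the patch size,
# the discriminator interface, and the loss weights are assumptions.
import torch
import torch.nn.functional as F


def embed(x: torch.Tensor) -> torch.Tensor:
    # Placeholder embedding: pool to 8x8 and flatten. A real setup would
    # likely use a learned or pre-trained feature extractor instead.
    return F.adaptive_avg_pool2d(x, 8).flatten(1)


def info_nce(anchor, positive, negatives, temperature=0.1):
    # InfoNCE-style loss: pull each anchor toward its (teacher) positive
    # and push it away from the negative embeddings.
    anchor = F.normalize(anchor, dim=-1)
    positive = F.normalize(positive, dim=-1)
    negatives = F.normalize(negatives, dim=-1)
    pos = (anchor * positive).sum(dim=-1, keepdim=True) / temperature  # (B, 1)
    neg = anchor @ negatives.t() / temperature                          # (B, K)
    logits = torch.cat([pos, neg], dim=1)
    labels = torch.zeros(anchor.size(0), dtype=torch.long, device=anchor.device)
    return F.cross_entropy(logits, labels)


def hybrid_contrastive_loss(student_sr, teacher_sr, lr_input, patch_size=48):
    _, _, h, w = student_sr.shape
    patch_size = min(patch_size, h, w)

    # Global (image-level) term: bicubically upsampled LR inputs serve as
    # easy negatives that the student output should move away from.
    upsampled_lr = F.interpolate(lr_input, size=(h, w),
                                 mode="bicubic", align_corners=False)
    loss_global = info_nce(embed(student_sr), embed(teacher_sr),
                           embed(upsampled_lr))

    # Local (patch-level) term: the same random crop is taken from student,
    # teacher, and negative images so the comparison is spatially aligned.
    top = torch.randint(0, h - patch_size + 1, (1,)).item()
    left = torch.randint(0, w - patch_size + 1, (1,)).item()

    def crop(img):
        return img[:, :, top:top + patch_size, left:left + patch_size]

    loss_local = info_nce(embed(crop(student_sr)), embed(crop(teacher_sr)),
                          embed(crop(upsampled_lr)))
    return loss_global + loss_local


def adversarial_guidance(discriminator, student_sr):
    # Non-saturating generator loss: push the student output toward the
    # discriminator's "real" decision, i.e. the natural-image manifold.
    pred = discriminator(student_sr)
    return F.binary_cross_entropy_with_logits(pred, torch.ones_like(pred))


def total_student_loss(student_sr, teacher_sr, hr_target, lr_input,
                       discriminator, w_contrast=0.1, w_adv=0.005):
    # Pixel-wise reconstruction plus the two distillation signals; the
    # weights here are placeholders, not values reported by the paper.
    loss_rec = F.l1_loss(student_sr, hr_target)
    loss_con = hybrid_contrastive_loss(student_sr, teacher_sr, lr_input)
    loss_adv = adversarial_guidance(discriminator, student_sr)
    return loss_rec + w_contrast * loss_con + w_adv * loss_adv
```

The key design point this sketch tries to capture is that the contrastive signal is applied at two scales: a global term aligning whole-image embeddings of student and teacher, and a local term on spatially aligned patches, with degraded (bicubic) images acting as negatives in both cases.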
Related Material

[pdf]
[bibtex]
@InProceedings{Ko_2025_ICCV,
    author    = {Ko, Donggeun and Kwak, Youngsang and Kim, San and Kwak, Jaehwa and Kim, Jaekwang},
    title     = {Multi-Scale Contrastive-Adversarial Distillation for Super-Resolution},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
    month     = {October},
    year      = {2025},
    pages     = {5069-5078}
}