@InProceedings{Mi_2025_CVPR,
  author    = {Mi, Ze-Yu and Yang, Yu-Bin},
  title     = {ADD: Attribution-Driven Data Augmentation Framework for Boosting Image Super-Resolution},
  booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
  month     = {June},
  year      = {2025},
  pages     = {23101-23110}
}
ADD: Attribution-Driven Data Augmentation Framework for Boosting Image Super-Resolution
Abstract
Data augmentation (DA) is a powerful technique for enhancing the generalization of deep neural networks across diverse tasks. In low-level vision, however, DA remains rudimentary (i.e., vanilla DA) and faces a critical bottleneck due to information loss. In this paper, we introduce novel Calibrated Attribution Maps (CAM) to generate saliency masks, followed by two saliency-based DA methods, Attribution-Driven Data augmentation (ADD) and ADD+, designed to address this issue. CAM leverages integrated gradients and incorporates two key innovations: a global feature detector and calibrated integrated gradients. Based on CAM and the proposed methods, we derive two new insights for low-level vision tasks: (1) increasing pixel diversity, as in vanilla DA, can improve performance, and (2) focusing on salient features while minimizing the impact of irrelevant pixels, as in saliency-based DA, enhances model performance more effectively. We also identify and highlight the key guiding principle for designing saliency-based DA: a wider spectrum of degradation patterns. Extensive experiments demonstrate the compatibility and consistency of our method, as well as significant performance improvements across various SR tasks and networks.
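The abstract builds on integrated gradients to derive saliency masks. A minimal sketch of that building block, assuming a plain integrated-gradients formulation and a simple top-fraction thresholding rule for the mask (the paper's calibrated variant and global feature detector are not reproduced here; the toy linear "model" and all function names are illustrative assumptions):

```python
import numpy as np

def integrated_gradients(x, baseline, grad_fn, steps=50):
    """Approximate IG: (x - baseline) times the mean gradient
    of the model output along the straight path baseline -> x."""
    alphas = (np.arange(steps) + 0.5) / steps  # midpoint rule
    path_grads = np.stack([grad_fn(baseline + a * (x - baseline))
                           for a in alphas])
    return (x - baseline) * path_grads.mean(axis=0)

def saliency_mask(attr, keep=0.25):
    """Binary mask keeping the top `keep` fraction of |attribution|."""
    mag = np.abs(attr).ravel()
    thresh = np.quantile(mag, 1.0 - keep)
    return (np.abs(attr) >= thresh).astype(np.float32)

# Toy example: for f(x) = sum(w * x) the gradient is w everywhere,
# so IG reduces exactly to w * x (the completeness axiom holds trivially).
w = np.array([[0.1, 2.0], [0.0, -1.5]])
x = np.ones((2, 2))
attrs = integrated_gradients(x, np.zeros_like(x), lambda z: w)
mask = saliency_mask(attrs, keep=0.5)
```

In a saliency-based DA pipeline, such a mask would distinguish salient pixels (augmented carefully) from irrelevant ones, in the spirit of the insight (2) above.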