DiffCAM: Data-Driven Saliency Maps by Capturing Feature Differences
Abstract
In recent years, the interpretability of Deep Neural Networks (DNNs) has garnered significant attention, particularly due to their widespread deployment in critical domains like healthcare, finance, and autonomous systems. To address the challenge of understanding how DNNs make decisions, Explainable AI (XAI) methods, such as saliency maps, have been developed to provide insights into the inner workings of these models. This paper introduces DiffCAM, a novel XAI method designed to overcome limitations in existing Class Activation Map (CAM)-based techniques, which often rely on decision boundary gradients to estimate feature importance. DiffCAM differentiates itself by considering the actual data distribution of the reference class, identifying feature importance based on how a target example differs from reference examples. This approach captures the most discriminative features without relying on decision boundaries or prediction results, making DiffCAM applicable to a broader range of models, including foundation models. Through extensive experiments, we demonstrate the superior performance and flexibility of DiffCAM in providing meaningful explanations across diverse datasets and scenarios.
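To make the feature-difference idea above concrete, the following minimal sketch illustrates one possible reading of it: per-channel weights are taken from how a target example's pooled features deviate from the mean of pooled reference-class features, and those weights are combined with the target's feature map as in CAM, with no decision-boundary gradients or prediction scores involved. The global-average pooling, the mean-difference weighting, and the function name difference_saliency are illustrative assumptions, not the paper's actual formulation.

# Minimal sketch of the "feature differences" idea described in the abstract.
# NOT the paper's exact method: the pooling step and the way differences become
# channel weights are assumptions made only for illustration.
import numpy as np

def difference_saliency(target_feat: np.ndarray,
                        reference_feats: np.ndarray) -> np.ndarray:
    """target_feat: (C, H, W) feature map of the example to explain.
    reference_feats: (N, C, H, W) feature maps of reference-class examples.
    Returns an (H, W) saliency map scaled to [0, 1]."""
    # Pool each channel to a single descriptor (assumed global-average pooling).
    target_vec = target_feat.mean(axis=(1, 2))             # (C,)
    reference_vec = reference_feats.mean(axis=(0, 2, 3))   # (C,)

    # Channel weight = deviation of the target from the reference-class mean,
    # so no gradients with respect to a decision boundary are needed.
    weights = target_vec - reference_vec                    # (C,)

    # CAM-style weighted sum over channels, then ReLU and max-normalization.
    saliency = np.einsum("c,chw->hw", weights, target_feat)
    saliency = np.maximum(saliency, 0.0)
    if saliency.max() > 0:
        saliency /= saliency.max()
    return saliency

# Toy usage with random arrays standing in for real backbone features.
rng = np.random.default_rng(0)
heatmap = difference_saliency(rng.normal(size=(64, 7, 7)),
                              rng.normal(size=(16, 64, 7, 7)))
print(heatmap.shape, float(heatmap.min()), float(heatmap.max()))

In practice the features would come from a fixed layer of the model being explained, and the resulting heatmap would be upsampled to the input resolution before being overlaid on the image.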
Related Material
[pdf] [supp]

@InProceedings{Li_2025_CVPR,
  author    = {Li, Xingjian and Zhao, Qiming and Bisht, Neelesh and Uddin, Mostofa Rafid and Kim, Jin Yu and Zhang, Bryan and Xu, Min},
  title     = {DiffCAM: Data-Driven Saliency Maps by Capturing Feature Differences},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2025},
  pages     = {10327-10337}
}