On the Complexity-Faithfulness Trade-off of Gradient-Based Explanations

Amir Mehrpanah, Matteo Gamba, Kevin Smith, Hossein Azizpour; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2025, pp. 3531-3541

Abstract


ReLU networks, while prevalent for visual data, have sharp transitions and sometimes rely on individual pixels for their predictions, making vanilla gradient-based explanations noisy and difficult to interpret. Existing methods, such as GradCAM, smooth these explanations by producing surrogate models, at the cost of faithfulness. We introduce a unifying spectral framework to systematically analyze and quantify smoothness, faithfulness, and their trade-off in explanations. Using this framework, we quantify and regularize the contribution of ReLU networks to high-frequency information, providing a principled approach to identifying this trade-off. Our analysis characterizes how surrogate-based smoothing distorts explanations, leading to an "explanation gap" that we formally define and measure for different post-hoc methods. Finally, we validate our theoretical findings across different design choices, datasets, and ablations.
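
The abstract contrasts sharp vanilla-gradient explanations with smoothed surrogates in spectral terms. The sketch below illustrates that idea only; it is not the paper's actual framework. It computes a vanilla-gradient saliency map for a toy ReLU network and measures the fraction of the map's spectral energy above a radial frequency cutoff using a 2-D FFT. The toy architecture, the random input, and the cutoff value are illustrative assumptions.

# Illustrative sketch (not the authors' framework): vanilla-gradient
# saliency for a small ReLU network, plus a simple spectral measure of
# how much of the explanation's energy sits at high frequencies.
import torch
import torch.nn as nn

# Toy ReLU classifier; stands in for any ReLU network under study.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 8, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(8 * 32 * 32, 10),
)

x = torch.randn(1, 3, 32, 32, requires_grad=True)  # stand-in input image
score = model(x)[0].max()                          # top-class logit
score.backward()
saliency = x.grad.abs().sum(dim=1)[0]              # vanilla gradient map, HxW

# Spectral view: fraction of the map's power above a radial cutoff.
# Sharp, pixel-level attributions concentrate energy in this band.
spec = torch.fft.fftshift(torch.fft.fft2(saliency))
power = spec.abs() ** 2
h, w = power.shape
yy, xx = torch.meshgrid(
    torch.arange(h) - h // 2, torch.arange(w) - w // 2, indexing="ij"
)
radius = (yy ** 2 + xx ** 2).float().sqrt()
cutoff = 0.25 * min(h, w)  # hypothetical low/high-frequency split
hf_ratio = power[radius > cutoff].sum() / power.sum()
print(f"high-frequency energy fraction: {hf_ratio.item():.3f}")

Running the same measurement on a smoothed explanation (e.g., a GradCAM-style surrogate) would typically yield a much lower high-frequency fraction, which is the smoothness side of the trade-off the paper formalizes.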

Related Material


@InProceedings{Mehrpanah_2025_ICCV,
    author    = {Mehrpanah, Amir and Gamba, Matteo and Smith, Kevin and Azizpour, Hossein},
    title     = {On the Complexity-Faithfulness Trade-off of Gradient-Based Explanations},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2025},
    pages     = {3531-3541}
}