CTC: Contribution to Classification of Complex Features

Sophia Kalanovska, Michael Luck, Christopher Hampson; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2025, pp. 4361-4370

Abstract


Deep convolutional neural networks have achieved remarkable performance, yet their internal decision-making processes often remain opaque. A key challenge in post-hoc explainability is balancing interpretability and fidelity: overly granular explanations (e.g., at the pixel level) can overwhelm users, while approaches that determine the relevance of aggregated input regions often oversimplify explanations, resulting in a loss of faithfulness to the model's true behaviour. In this paper, we propose a novel framework that (i) modifies the Segment Anything Model (SAM) to identify meaningful complex input features, (ii) introduces a technique, Contribution To Classification (CTC), which employs a modified forward pass to assess the relevance of these features rather than relying solely on pixel-level relevance, and incorporates a scaling mechanism to preserve the contribution signal despite propagating only a subset of activations, (iii) demonstrates improved input invariance and sensitivity to meaningful perturbations through extensive evaluations on architectures including VGG, ResNet, Inception, and DenseNet, and (iv) releases the CTC open-source codebase at https://github.com/SophiaKalanovska/Contribution-To-Classification to facilitate further research.
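To make the core idea concrete, the following is a minimal sketch of a CTC-style scoring loop. It is not the authors' implementation: the paper describes a modified forward pass that propagates and rescales a subset of activations inside the network, whereas this sketch approximates that behaviour at the input by masking each SAM-derived feature and rescaling so the total input magnitude is preserved. The names `model`, `image`, `masks`, and `target_class` are illustrative assumptions, not API from the released codebase.

```python
import torch

def ctc_contribution_sketch(model, image, masks, target_class):
    """Approximate per-feature contribution scores, CTC-style.

    Assumptions (hypothetical, for illustration only):
      model        -- a torch.nn.Module image classifier
      image        -- input tensor of shape (1, 3, H, W)
      masks        -- iterable of binary masks of shape (1, 1, H, W),
                      e.g. segments produced by a SAM-like model
      target_class -- integer index of the class being explained
    """
    model.eval()
    contributions = []
    with torch.no_grad():
        # Unmodified forward pass gives the reference score.
        full_score = model(image)[0, target_class]

        for mask in masks:
            masked = image * mask
            # Rescale so the retained subset carries the same total
            # magnitude as the full input; the paper applies an analogous
            # scaling to activations inside the network to preserve the
            # contribution signal when only a subset is propagated.
            scale = image.abs().sum() / masked.abs().sum().clamp_min(1e-8)
            masked_score = model(masked * scale)[0, target_class]
            # A feature's contribution is the drop in the target-class
            # score when everything outside the feature is removed.
            contributions.append((full_score - masked_score).item())
    return contributions
```

Under these assumptions, a larger score drop indicates that the masked-out context mattered more than the retained feature; ranking the segments by contribution yields a region-level explanation whose granularity matches the SAM features rather than individual pixels.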

Related Material


[pdf]
[bibtex]
@InProceedings{Kalanovska_2025_CVPR,
    author    = {Kalanovska, Sophia and Luck, Michael and Hampson, Christopher},
    title     = {CTC: Contribution to Classification of Complex Features},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2025},
    pages     = {4361-4370}
}