Ante-Hoc Generation of Task-Agnostic Interpretation Maps
Akash Guna R. T., Raul Benitez, Sikha O. K.
Abstract
Existing explainability approaches for convolutional neural networks (CNNs) are mainly applied after training (post hoc), which is generally unreliable. Ante-hoc explainers, trained simultaneously with the CNN, are more reliable. However, current ante-hoc methods mainly generate concept-based explanations that are tailored to specific tasks and are not explicit. To address these limitations, we propose a task-agnostic ante-hoc framework that generates interpretation maps to visually explain any CNN. Our framework simultaneously trains the CNN and a weighting network, an explanation-generation module. The generated maps are self-explanatory, eliminating the need to manually identify concepts. We demonstrate that our method can interpret tasks such as classification, facial landmark detection, and image captioning. Through experiments, we show that our framework is explicit, faithful, and stable. To the best of our knowledge, this is the first ante-hoc CNN explanation strategy that produces visual explanations generic across CNNs for different tasks.
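The abstract gives no implementation details, so the following is only a minimal PyTorch-style sketch of the general idea it describes: training a task CNN jointly with a weighting network whose output serves as the interpretation map. The names (WeightingNetwork, AnteHocModel, train_step), the single-channel sigmoid map, and the feature re-weighting scheme are illustrative assumptions, not the authors' published design.

```python
import torch
import torch.nn as nn

class WeightingNetwork(nn.Module):
    """Hypothetical explanation module: maps backbone features to a
    single-channel interpretation map in [0, 1]."""
    def __init__(self, in_channels):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, 1, kernel_size=1)

    def forward(self, features):
        return torch.sigmoid(self.conv(features))

class AnteHocModel(nn.Module):
    """Any CNN backbone plus the weighting network, trained together."""
    def __init__(self, backbone, feat_channels, head):
        super().__init__()
        self.backbone = backbone        # task-agnostic CNN feature extractor
        self.weighting = WeightingNetwork(feat_channels)
        self.head = head                # task head: classifier, regressor, ...

    def forward(self, x):
        feats = self.backbone(x)
        interp_map = self.weighting(feats)   # ante-hoc interpretation map
        weighted = feats * interp_map        # features re-weighted by the map
        return self.head(weighted), interp_map

def train_step(model, optimizer, task_loss_fn, x, y):
    """One joint update: the task loss drives both the CNN and the
    explainer, so explanations are learned alongside the task rather
    than extracted post hoc."""
    optimizer.zero_grad()
    y_hat, interp_map = model(x)
    loss = task_loss_fn(y_hat, y)
    loss.backward()
    optimizer.step()
    return loss.item(), interp_map

# Example usage with a toy classifier (shapes chosen arbitrarily):
backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU())
head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10))
model = AnteHocModel(backbone, feat_channels=16, head=head)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
loss, interp_map = train_step(model, optimizer, nn.CrossEntropyLoss(), x, y)
```

Because a single task loss updates the backbone and the weighting network at once, the same scaffold applies unchanged to other tasks (e.g., landmark regression or captioning) by swapping the head and the loss function, which is what makes the approach task-agnostic in spirit.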
Related Material
[pdf] [bibtex]

@InProceedings{T._2023_CVPR,
    author    = {T., Akash Guna R. and Benitez, Raul and K., Sikha O.},
    title     = {Ante-Hoc Generation of Task-Agnostic Interpretation Maps},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2023},
    pages     = {3764-3769}
}