CAVLI - Using Image Associations To Produce Local Concept-Based Explanations
While explainability is becoming increasingly important in computer vision and machine learning, producing explanations that link the decisions of deep neural networks to concepts humans readily understand remains a challenge. To address this challenge, we propose a framework that produces local concept-based explanations for the classification decisions of a deep neural network. Our framework is based on the intuition that if the regions of an image that the model most associates with a concept overlap strongly with the regions that drive its decision, then the decision depends heavily on that concept. The proposed CAVLI framework combines a global approach (TCAV) with a local approach (LIME). To test its effectiveness, we conducted experiments on both the ImageNet and CelebA datasets. These experiments demonstrated that our framework produces explanations that are easy for humans to understand. By providing local concept-based explanations, our framework has the potential to improve the transparency and interpretability of deep neural networks in a variety of applications.
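The core intuition above can be sketched as an IoU-style overlap score between the image regions associated with a concept and the regions important for the model's decision. The sketch below is illustrative only; the function name, the toy binary masks, and the use of plain intersection-over-union are assumptions, not the paper's exact formulation:

```python
import numpy as np

def concept_dependence_score(concept_mask, decision_mask):
    """Hypothetical overlap score between the regions the model
    associates with a concept and the regions deemed important for
    the decision (e.g., by LIME). Both inputs are boolean arrays
    of the same shape; returns intersection-over-union in [0, 1]."""
    intersection = np.logical_and(concept_mask, decision_mask).sum()
    union = np.logical_or(concept_mask, decision_mask).sum()
    return intersection / union if union > 0 else 0.0

# Toy 4x4 masks: the concept region covers the top half,
# the decision-relevant region covers the left half.
concept_mask = np.zeros((4, 4), dtype=bool)
concept_mask[:2, :] = True
decision_mask = np.zeros((4, 4), dtype=bool)
decision_mask[:, :2] = True

score = concept_dependence_score(concept_mask, decision_mask)
print(round(score, 3))  # 4 overlapping pixels / 12 in the union → 0.333
```

A high score would indicate that the decision-relevant regions largely coincide with the concept's regions, i.e., the decision is highly dependent on that concept.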