ZEBRA: Explaining Rare Cases Through Outlying Interpretable Concepts
Anomaly detection methods can detect outliers, but what are the properties of an outlier? In this paper, we propose ZEBRA, a novel framework for generating explanations of an outlier based on the analysis of feature rarity in an interpretable feature space. The contributions of our work are: (a) a modular, model-agnostic framework for explaining outliers; (b) a statistical explanation method based on a rarity score and weighted aggregation functions; (c) multimodal explanations that combine visual, textual, and numeric modalities. ZEBRA simplifies the mapping of low-level features to high-level concepts to generate multimodal, human-readable explanations of outliers.
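To make the idea of rarity-based explanation concrete, the following is a minimal toy sketch, not the paper's actual method: it assumes a simple two-sided empirical tail probability as the per-feature rarity score and a weighted mean as the aggregation function. The names `feature_rarity` and `explain_outlier` are hypothetical.

```python
import numpy as np

def feature_rarity(samples, value):
    """Toy rarity score (an assumption, not ZEBRA's definition):
    1 minus the two-sided empirical tail mass at `value`.
    A typical value scores near 0; an extreme value scores near 1."""
    samples = np.asarray(samples, dtype=float)
    lo = np.mean(samples <= value)   # mass at or below the value
    hi = np.mean(samples >= value)   # mass at or above the value
    tail = min(lo, hi)               # smaller tail = more extreme
    return 1.0 - 2.0 * min(tail, 0.5)

def explain_outlier(data, point, weights=None):
    """Score each interpretable feature of `point` by rarity and
    aggregate with per-feature weights (uniform by default)."""
    names = sorted(point)
    w = weights or {n: 1.0 for n in names}
    scores = {n: feature_rarity(data[n], point[n]) for n in names}
    total = sum(w[n] * scores[n] for n in names) / sum(w[n] for n in names)
    return scores, total

# Hypothetical interpretable features for a reference population.
rng = np.random.default_rng(0)
data = {"brightness": rng.normal(0.5, 0.10, 1000),
        "symmetry":   rng.normal(0.8, 0.05, 1000)}
point = {"brightness": 0.95, "symmetry": 0.79}  # brightness is unusual
scores, total = explain_outlier(data, point)
```

The per-feature `scores` dictionary supports a textual explanation ("brightness is unusually high"), while `total` gives a single numeric outlyingness value; a real aggregation function could instead emphasize the single rarest concept.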