@InProceedings{Cho_2023_WACV,
  author    = {Cho, Wonwoo and Park, Jeonghoon and Choo, Jaegul},
  title     = {Training Auxiliary Prototypical Classifiers for Explainable Anomaly Detection in Medical Image Segmentation},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  month     = {January},
  year      = {2023},
  pages     = {2624-2633}
}
Training Auxiliary Prototypical Classifiers for Explainable Anomaly Detection in Medical Image Segmentation
Abstract
Machine learning-based algorithms using fully convolutional networks (FCNs) have been a promising option for medical image segmentation. However, such deep networks silently fail if input samples are drawn far from the training data distribution, thus causing critical problems in automatic data processing pipelines. To overcome such out-of-distribution (OoD) problems, we propose a novel OoD score formulation and its regularization strategy by applying an auxiliary add-on classifier to an intermediate layer of an FCN, where the auxiliary module is helpful for analyzing the encoder output features by taking their class information into account. Our regularization strategy trains the module along with the FCN via the principle of outlier exposure, so that our model can learn to distinguish OoD samples from normal ones without modifying the original network architecture. Our extensive experimental results demonstrate that the proposed approach successfully conducts effective OoD detection without loss of segmentation performance. In addition, our module can provide reasonable explanation maps along with OoD scores, which enables users to analyze the reliability of predictions.
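The abstract's core idea, scoring how far an encoder feature lies from learned class prototypes, can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the prototype values, feature vectors, and the Euclidean distance metric are all assumptions made for the example.

```python
import numpy as np

def prototype_ood_score(feature, prototypes):
    """Return per-class distances and a simple OoD score: the distance
    from the feature to its nearest class prototype. Hypothetical helper
    sketching the prototypical-classifier idea, not the paper's method."""
    dists = np.linalg.norm(prototypes - feature, axis=1)  # shape: (num_classes,)
    return dists, float(dists.min())

# toy prototypes for 3 classes in a 2-D feature space (illustrative values)
prototypes = np.array([[0.0, 0.0],
                       [5.0, 0.0],
                       [0.0, 5.0]])

feat_in  = np.array([0.2, 0.1])    # lies near the class-0 prototype
feat_ood = np.array([10.0, 10.0])  # far from every prototype

_, score_in  = prototype_ood_score(feat_in,  prototypes)
_, score_ood = prototype_ood_score(feat_ood, prototypes)
# a larger score suggests the sample is out-of-distribution
```

In this toy setup `score_ood` exceeds `score_in`, so thresholding the nearest-prototype distance flags the far-away sample as anomalous; the per-class distance vector is what could be visualized as a class-aware explanation map.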