@InProceedings{Anglada-Rotger_2024_CVPR,
  author    = {Anglada-Rotger, David and Sala, Julia and Marques, Ferran and Salembier, Philippe and Pard\`as, Montse},
  title     = {Enhancing Ki-67 Cell Segmentation with Dual U-Net Models: A Step Towards Uncertainty-Informed Active Learning},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2024},
  pages     = {5026-5035}
}
Enhancing Ki-67 Cell Segmentation with Dual U-Net Models: A Step Towards Uncertainty-Informed Active Learning
Abstract
The diagnosis and prognosis of breast cancer rely on histopathology image analysis, where markers such as Ki-67 are increasingly important. Diagnosis with this marker is based on the quantification of proliferation, which implies counting Ki-67-positive and Ki-67-negative tumoral cells while excluding stromal cells. A common problem for the automatic quantification of these images arises from the overlapping and clustering of cells. In this paper we propose an automatic segmentation and classification system that overcomes this problem using two convolutional neural networks (Dual U-Net) whose outputs are combined with a watershed algorithm. Taking into account that a major issue for the development of reliable neural networks is the availability of labeled databases, we also introduce an approach for epistemic uncertainty estimation that can be used for active learning in instance segmentation applications. We use Monte Carlo Dropout within our networks to quantify the model's confidence across its predictions, offering insight into areas of high uncertainty. Our results show how the post-processed uncertainty maps can be used to refine ground-truth annotations and to generate new labeled data with reduced annotation effort. To initialize the labeling and further reduce this effort, we propose a tool for ground-truth generation based on candidate generation with a max-tree. Candidates are filtered on extracted features, which can be adjusted for the specific image typology, thereby facilitating precise model training and evaluation.
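As a rough illustration of the dual-network-plus-watershed idea, the sketch below assumes one U-Net predicts a per-pixel cell-foreground probability and the other predicts cell-center markers; the function name, thresholds, and output conventions are assumptions for illustration, not the authors' code.

from scipy import ndimage as ndi
from skimage.segmentation import watershed

def split_touching_cells(cell_prob, marker_prob, cell_thr=0.5, marker_thr=0.7):
    # Binary foreground from the first U-Net's cell-probability map.
    foreground = cell_prob > cell_thr
    # Connected components of the marker map seed one label per cell.
    markers, _ = ndi.label(marker_prob > marker_thr)
    # Flood the inverted probabilities from the seeds; basins stop where
    # neighbouring cells meet, so clustered cells are split apart.
    return watershed(-cell_prob, markers, mask=foreground)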
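The epistemic-uncertainty estimate follows the standard Monte Carlo Dropout recipe: keep dropout active at test time, run several stochastic forward passes, and read the per-pixel spread as uncertainty. Below is a minimal PyTorch sketch assuming a binary-segmentation U-Net with dropout layers; `model`, `T`, and the sigmoid head are assumptions, not details from the paper.

import torch

@torch.no_grad()
def mc_dropout_uncertainty(model, image, T=20):
    # Evaluate in eval mode, then re-enable only the dropout layers so
    # batch norm keeps using its running statistics.
    model.eval()
    for m in model.modules():
        if isinstance(m, (torch.nn.Dropout, torch.nn.Dropout2d)):
            m.train()
    # T stochastic passes; the variance across passes approximates
    # epistemic uncertainty per pixel.
    probs = torch.stack([torch.sigmoid(model(image)) for _ in range(T)])
    return probs.mean(dim=0), probs.var(dim=0)  # prediction, uncertainty map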
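The abstract does not spell out which features the max-tree candidate filter uses, so the sketch below only computes the classic area attribute over scikit-image's max_tree and keeps components in a plausible cell-size range; the function name and thresholds are hypothetical stand-ins for the paper's adjustable features.

import numpy as np
from skimage.morphology import max_tree

def maxtree_candidates(image, min_area=50, max_area=2000):
    # Build the max-tree: `parent` maps each pixel (raveled index) to its
    # parent node, `order` lists nodes with parents before children.
    parent, order = max_tree(image)
    parent = parent.ravel()
    area = np.ones(image.size, dtype=np.int64)
    # Accumulate areas from leaves towards the root (reverse order visits
    # children before their parents).
    for node in order[::-1]:
        if parent[node] != node:
            area[parent[node]] += area[node]
    # Keep nodes whose area falls in a plausible cell-size range; real
    # candidate filtering would add shape/contrast features here.
    keep = (area >= min_area) & (area <= max_area)
    return keep.reshape(image.shape)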