Analyzing U-Net Robustness for Single Cell Nucleus Segmentation From Phase Contrast Images

Chenyi Ling, Michael Majurski, Michael Halter, Jeffrey Stinson, Anne Plant, Joe Chalfoun; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2020, pp. 966-967

Abstract


We quantify the robustness of the semantic segmentation model U-Net, applied to single cell nucleus detection, with respect to the following factors: (1) automated vs. manual training annotations, (2) quantity of training data, and (3) microscope image focus. The difficulty of obtaining sufficient volumes of accurate, manually annotated training data to build an accurate Convolutional Neural Network (CNN) model is overcome by the temporary use of fluorescent labels, which allows training datasets to be generated automatically with traditional image processing algorithms. Accuracy is computed with respect to manually annotated masks, which were also created to evaluate the effectiveness of generating training sets automatically from the fluorescent images. The accuracy metric is the false positive/negative rate of cell nucleus detection; the goal is to maximize the true positive rate while minimizing the false positive rate. We found that automated segmentation of fluorescently labeled nuclei provides viable training data without the need for manual segmentation. A training dataset of four large stitched images with medium cell density was sufficient to reach a true positive rate above 88% and a false positive rate below 20%.
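The accuracy is reported at the level of detected nuclei (true and false positive rates) rather than pixel-wise overlap. The sketch below illustrates one way such a detection-level comparison could be computed, assuming connected-component labeling of the predicted and reference masks and a simple overlap criterion for matching nuclei; the function name, the 50% overlap threshold, and the matching rule are illustrative assumptions, not the paper's exact protocol.

```python
import numpy as np
from scipy import ndimage


def detection_rates(pred_mask, ref_mask, min_overlap=0.5):
    """Compare a predicted binary nucleus mask against a reference
    (manually annotated) binary mask at the object level.

    Returns (true_positive_rate, false_positive_rate), where
      TPR = fraction of reference nuclei matched by a prediction,
      FPR = fraction of predicted nuclei matching no reference nucleus.
    The 50% overlap matching rule is an illustrative assumption.
    """
    # Label connected components: each nucleus gets an integer id.
    pred_lbl, n_pred = ndimage.label(pred_mask > 0)
    ref_lbl, n_ref = ndimage.label(ref_mask > 0)

    matched_ref = set()
    false_pos = 0
    for p in range(1, n_pred + 1):
        obj = pred_lbl == p
        # Reference labels overlapped by this predicted nucleus.
        overlaps = ref_lbl[obj]
        overlaps = overlaps[overlaps > 0]
        if overlaps.size == 0:
            false_pos += 1
            continue
        # Match against the reference nucleus with the largest overlap.
        ids, counts = np.unique(overlaps, return_counts=True)
        best = ids[np.argmax(counts)]
        ref_area = np.count_nonzero(ref_lbl == best)
        if counts.max() / ref_area >= min_overlap:
            matched_ref.add(int(best))
        else:
            false_pos += 1

    tpr = len(matched_ref) / n_ref if n_ref else 0.0
    fpr = false_pos / n_pred if n_pred else 0.0
    return tpr, fpr
```

Applied to predicted and manually annotated masks of a stitched phase contrast field, the two rates can then be compared across training annotation sources, training-set sizes, and focus levels, as in the paper's robustness analysis.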

Related Material


[bibtex]
@InProceedings{Ling_2020_CVPR_Workshops,
author = {Ling, Chenyi and Majurski, Michael and Halter, Michael and Stinson, Jeffrey and Plant, Anne and Chalfoun, Joe},
title = {Analyzing U-Net Robustness for Single Cell Nucleus Segmentation From Phase Contrast Images},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2020}
}