Virtual Perturbations to Assess Explainability of Deep-Learning Based Cell Fate Predictors

Christopher J. Soelistyo, Guillaume Charras, Alan R. Lowe; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2023, pp. 3971-3980

Abstract


Explainable deep learning holds significant promise for extracting scientific insights from experimental observations. This is especially so in the field of bio-imaging, where the raw data is often voluminous, yet extremely variable and difficult to study. However, one persistent challenge in deep-learning-assisted scientific discovery is that the workings of artificial neural networks are often difficult to interpret. Here we present a simple technique for investigating the behavior of trained neural networks: virtual perturbation. By making precise and systematic alterations to input data, or to internal representations thereof, we are able to discover causal relationships in the outputs of a deep learning model, and by extension, in the underlying phenomenon itself. As an exemplar, we use a recently described deep-learning-based cell fate prediction model. We trained the network to predict the fate of less fit cells in an experimental model of mechanical cell competition. By applying virtual perturbation to the trained network, we discover causal relationships between a cell's environment and its eventual fate. We compare these with known properties of the biological system under investigation to demonstrate that the model faithfully captures these insights.
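The core idea, perturbing an input feature systematically and observing the model's response, can be illustrated with a minimal sketch. The `toy_predictor` below is a hypothetical stand-in for a trained fate predictor (the paper's actual model and feature names are not reproduced here); the helper simply sweeps one feature over a range of offsets and records the output:

```python
import numpy as np

def virtual_perturbation(model, x, feature_idx, deltas):
    """Apply a range of offsets to one input feature and record the
    model's output for each perturbed input."""
    responses = []
    for delta in deltas:
        x_pert = x.copy()
        x_pert[feature_idx] += delta  # precise, systematic alteration
        responses.append(model(x_pert))
    return np.array(responses)

# Hypothetical stand-in for a trained fate predictor: a fixed logistic unit.
weights = np.array([1.5, -0.8, 0.3])

def toy_predictor(x):
    return 1.0 / (1.0 + np.exp(-weights @ x))

x0 = np.array([0.2, 0.5, -0.1])          # an illustrative input cell state
deltas = np.linspace(-1.0, 1.0, 5)       # perturbation magnitudes to sweep
probs = virtual_perturbation(toy_predictor, x0, feature_idx=0, deltas=deltas)
# A monotonic trend in `probs` would indicate that feature 0 has a
# consistent directional influence on the predicted fate probability.
```

The same sweep can be applied to internal representations rather than raw inputs by treating an intermediate layer's activations as `x` and the remaining layers as `model`.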

Related Material


[bibtex]
@InProceedings{Soelistyo_2023_ICCV,
  author    = {Soelistyo, Christopher J. and Charras, Guillaume and Lowe, Alan R.},
  title     = {Virtual Perturbations to Assess Explainability of Deep-Learning Based Cell Fate Predictors},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
  month     = {October},
  year      = {2023},
  pages     = {3971-3980}
}