Discovering Interpretable Models of Scientific Image Data with Deep Learning

Christopher J. Soelistyo, Alan R. Lowe; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024, pp. 6884-6893

Abstract


In this study, we demonstrate the possibility of finding interpretable, domain-appropriate models of biological images, and propose that such a strategy can be used to derive scientific insight in domains involving raw data. This is achieved by the novel, concerted application of existing methods, namely disentangled representation learning, sparse deep neural network training and symbolic regression. We demonstrate their relevance to the field of bioimaging using a well-studied test problem: classifying cell states in microscopy data. We find that such methods can produce highly parsimonious models that achieve 98% of the accuracy of black-box benchmark models, with a tiny fraction of the complexity and greater domain-appropriateness, as tested by adversarial attacks. As such, we provide proof of concept that interpretable, high-performing models can be used to produce scientific explanations of some underlying biological phenomenon.
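
To make the pipeline named in the abstract concrete, the following is a minimal sketch (not the authors' code) of one stage: fitting a sparse classifier on disentangled latent features so that only a handful of features drive the cell-state prediction. The latent features `z`, the labels, and the sparsity weight `l1_strength` are placeholders introduced here for illustration; in the paper the features would come from a pretrained disentangled encoder and the surviving features would then be passed to symbolic regression.

```python
# Sketch: sparse training of a linear cell-state classifier on latent features.
import torch
import torch.nn as nn

n_cells, n_latents, n_states = 512, 32, 2
z = torch.randn(n_cells, n_latents)               # placeholder latent features
labels = torch.randint(0, n_states, (n_cells,))   # placeholder cell-state labels

classifier = nn.Linear(n_latents, n_states)
optimiser = torch.optim.Adam(classifier.parameters(), lr=1e-2)
l1_strength = 1e-2  # hypothetical sparsity weight

for epoch in range(200):
    optimiser.zero_grad()
    logits = classifier(z)
    loss = nn.functional.cross_entropy(logits, labels)
    # L1 penalty drives most weights toward zero, yielding a parsimonious model.
    loss = loss + l1_strength * classifier.weight.abs().sum()
    loss.backward()
    optimiser.step()

# Keep only latent features with non-negligible weight; in the full pipeline
# these few features would be the candidates handed to symbolic regression.
important = (classifier.weight.abs().max(dim=0).values > 1e-3).nonzero().squeeze(-1)
print("retained latent features:", important.tolist())
```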

Related Material


@InProceedings{Soelistyo_2024_CVPR,
    author    = {Soelistyo, Christopher J. and Lowe, Alan R.},
    title     = {Discovering Interpretable Models of Scientific Image Data with Deep Learning},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2024},
    pages     = {6884-6893}
}