ContIG: Self-Supervised Multimodal Contrastive Learning for Medical Imaging With Genetics

Aiham Taleb, Matthias Kirchler, Remo Monti, Christoph Lippert; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 20908-20921

Abstract
High annotation costs are a substantial bottleneck in applying modern deep learning architectures to clinically relevant medical use cases, substantiating the need for novel algorithms to learn from unlabeled data. In this work, we propose ContIG, a self-supervised method that can learn from large datasets of unlabeled medical images and genetic data. Our approach aligns images and several genetic modalities in the feature space using a contrastive loss. We design our method to integrate multiple modalities of each individual in the same model end-to-end, even when the available modalities vary across individuals. Our procedure outperforms state-of-the-art self-supervised methods on all evaluated downstream benchmark tasks. We also adapt gradient-based explainability algorithms to better understand the learned cross-modal associations between the images and genetic modalities. Finally, we perform genome-wide association studies on the features learned by our models, uncovering interesting relationships between images and genetic data.
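To make the cross-modal alignment described above concrete, the sketch below shows a symmetric InfoNCE-style contrastive loss between image embeddings and several genetic embeddings in PyTorch. It is a minimal illustration, not the authors' implementation: the function names, dimensions, and the per-modality masking scheme (one plausible way to handle modalities that vary across individuals, as the abstract describes) are assumptions.

```python
# Illustrative sketch of image-genetics contrastive alignment (not the
# authors' code). Assumes precomputed, same-dimensional embeddings.
import torch
import torch.nn.functional as F


def info_nce(z_img: torch.Tensor, z_gen: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Symmetric InfoNCE: (image, genetics) pairs from the same individual
    are pulled together; all other pairs in the batch act as negatives."""
    z_img = F.normalize(z_img, dim=-1)
    z_gen = F.normalize(z_gen, dim=-1)
    logits = z_img @ z_gen.t() / tau  # (B, B) scaled cosine similarities
    targets = torch.arange(z_img.size(0), device=z_img.device)
    # Average the image->genetics and genetics->image directions.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


def multimodal_loss(z_img, genetic_views, masks, tau=0.1):
    """Aggregate the contrastive loss over several genetic modalities,
    skipping individuals for whom a modality is missing. This masking is
    one plausible reading of "integrate multiple modalities ... even when
    the available modalities vary across individuals"."""
    total, n_terms = 0.0, 0
    for z_gen, mask in zip(genetic_views, masks):
        if mask.sum() < 2:  # need at least two samples to form negatives
            continue
        total = total + info_nce(z_img[mask], z_gen[mask], tau)
        n_terms += 1
    return total / max(n_terms, 1)


if __name__ == "__main__":
    B, D = 8, 128
    z_img = torch.randn(B, D)                              # image-encoder outputs
    genetic_views = [torch.randn(B, D) for _ in range(3)]  # 3 genetic modalities
    masks = [torch.rand(B) > 0.2 for _ in range(3)]        # modality availability
    print(multimodal_loss(z_img, genetic_views, masks))
```

In this sketch, each genetic modality contributes its own contrastive term against the shared image embedding, so a single image encoder can be trained end-to-end against however many genetic views a given individual has.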

Related Material

BibTeX
@InProceedings{Taleb_2022_CVPR,
    author    = {Taleb, Aiham and Kirchler, Matthias and Monti, Remo and Lippert, Christoph},
    title     = {ContIG: Self-Supervised Multimodal Contrastive Learning for Medical Imaging With Genetics},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {20908-20921}
}