BERTHop: An Effective Vision-and-Language Model for Chest X-Ray Disease Diagnosis

Masoud Monajatipoor, Mozhdeh Rouhsedaghat, Liunian Harold Li, Aichi Chien, C.-C. Jay Kuo, Fabien Scalzo, Kai-Wei Chang; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2021, pp. 3334-3343

Abstract


Vision-and-language (V&L) models take an image and text as input and learn to capture the associations between them. Prior studies show that pre-trained V&L models can significantly improve model performance on downstream tasks such as Visual Question Answering (VQA). However, V&L models are less effective when applied in the medical domain (e.g., to X-ray images and clinical notes) due to the domain gap. In this paper, we investigate the challenges of applying pre-trained V&L models to medical applications. In particular, we identify that the visual representation in general V&L models is not suitable for processing medical data. To overcome this limitation, we propose BERTHop, a transformer-based model built on PixelHop++ and VisualBERT, for better capturing the associations between the two modalities. Experiments on the OpenI dataset, a commonly used thoracic disease diagnosis benchmark, show that BERTHop achieves an average Area Under the Curve (AUC) of 98.12%, which is 1.62% higher than the state of the art (SOTA), while being trained on a 9x smaller dataset.
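To make the setup described above concrete, the sketch below is a minimal, hypothetical illustration of the two ingredients named in the abstract: fusing visual feature vectors with clinical-note tokens in a single transformer encoder (VisualBERT-style early fusion), and scoring multi-label disease predictions by the macro-averaged AUC. It is not the authors' BERTHop implementation; the PixelHop++ features are replaced by placeholder tensors, and the ToyVLClassifier class, layer counts, and all dimensions are illustrative assumptions.

# A hypothetical sketch only; not the BERTHop code released by the authors.
import torch
import torch.nn as nn
from sklearn.metrics import roc_auc_score

NUM_DISEASES = 14      # assumption: number of thoracic disease labels
HIDDEN = 768           # assumption: transformer hidden size

class ToyVLClassifier(nn.Module):
    def __init__(self, vocab_size=30522, visual_dim=256):
        super().__init__()
        self.text_embed = nn.Embedding(vocab_size, HIDDEN)
        # Project precomputed visual features (stand-ins for PixelHop++ output)
        # into the same embedding space as the text tokens.
        self.visual_proj = nn.Linear(visual_dim, HIDDEN)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=HIDDEN, nhead=12, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)
        self.classifier = nn.Linear(HIDDEN, NUM_DISEASES)

    def forward(self, input_ids, visual_feats):
        text = self.text_embed(input_ids)          # (B, T, HIDDEN)
        vis = self.visual_proj(visual_feats)       # (B, V, HIDDEN)
        # Early fusion: text and visual tokens attend to each other jointly.
        fused = self.encoder(torch.cat([text, vis], dim=1))
        # Pool the first token as a summary representation and classify.
        return self.classifier(fused[:, 0])

model = ToyVLClassifier()
input_ids = torch.randint(0, 30522, (4, 64))   # toy clinical-note token ids
visual_feats = torch.randn(4, 32, 256)         # toy visual feature vectors
logits = model(input_ids, visual_feats)

# Macro-averaged AUC over the disease labels, the metric reported in the paper.
y_true = torch.randint(0, 2, (4, NUM_DISEASES)).numpy()
y_score = torch.sigmoid(logits).detach().numpy()
try:
    print(roc_auc_score(y_true, y_score, average="macro"))
except ValueError:
    # A tiny random batch may contain labels with only one class present;
    # real evaluation would run over the full test set.
    pass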

Related Material


@InProceedings{Monajatipoor_2021_ICCV,
  author    = {Monajatipoor, Masoud and Rouhsedaghat, Mozhdeh and Li, Liunian Harold and Chien, Aichi and Kuo, C.-C. Jay and Scalzo, Fabien and Chang, Kai-Wei},
  title     = {BERTHop: An Effective Vision-and-Language Model for Chest X-Ray Disease Diagnosis},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
  month     = {October},
  year      = {2021},
  pages     = {3334-3343}
}