Interpreting Mechanisms of Prediction for Skin Cancer Diagnosis Using Multi-Task Learning

Davide Coppola, Hwee Kuan Lee, Cuntai Guan; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2020, pp. 734-735

Abstract


One of the key issues in deep learning is the difficulty of interpreting the mechanisms behind its final predictions. As a result, the real-world application of deep learning to skin cancer diagnosis remains limited, despite the strong performance achieved. We present a way to better interpret predictions on a skin lesion dataset through a multi-task learning framework and a set of learnable gates. The model detects a set of clinically significant attributes in addition to the final diagnosis and learns the associations between tasks by selecting which features to share among them. Conventional multi-task learning algorithms generally share all features among tasks and lack a way to determine how much is shared between them. In contrast, this method provides a simple way to inspect which features are shared between tasks, in the form of gates that can be learned in an end-to-end fashion. Experiments were carried out on the publicly available Derm7pt dataset, which provides diagnosis information as well as the attributes needed for the well-known 7-point checklist method.
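
The page does not include code, but the core idea described above (per-task learnable gates that decide which shared features each task uses) can be illustrated with a minimal PyTorch sketch. All names here (GatedMultiTaskNet, feat_dim, task_num_classes) are illustrative assumptions rather than the authors' implementation, and the toy backbone stands in for whatever CNN the paper actually employs.

import torch
import torch.nn as nn

class GatedMultiTaskNet(nn.Module):
    """Illustrative multi-task model: a shared backbone feeds several task
    heads (diagnosis plus clinical attributes), and each head sees the
    backbone features modulated by its own learnable gate vector."""

    def __init__(self, feat_dim=256, task_num_classes=(5, 3, 3, 2)):
        super().__init__()
        # Toy backbone standing in for a CNN feature extractor.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # One learnable gate vector per task; sigmoid(gate) lies in [0, 1]
        # and decides how much of each shared feature that task uses.
        self.gates = nn.ParameterList(
            [nn.Parameter(torch.zeros(feat_dim)) for _ in task_num_classes]
        )
        self.heads = nn.ModuleList(
            [nn.Linear(feat_dim, n) for n in task_num_classes]
        )

    def forward(self, x):
        feats = self.backbone(x)
        outputs, open_gates = [], []
        for gate, head in zip(self.gates, self.heads):
            g = torch.sigmoid(gate)          # soft feature-selection mask
            outputs.append(head(feats * g))  # task sees only gated features
            open_gates.append(g)
        return outputs, open_gates

# Inspecting the learned gates after training shows which shared features
# each task (e.g. the diagnosis vs. a 7-point-checklist attribute) relies on.
model = GatedMultiTaskNet()
logits, gates = model(torch.randn(2, 3, 64, 64))
overlap = (gates[0] * gates[1]).mean().item()
print(f"mean gate overlap between task 0 and task 1: {overlap:.3f}")

In this sketch, comparing the gate vectors of two tasks gives a simple, end-to-end-learned measure of how much they share, which is the kind of inspection the abstract refers to.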

Related Material


[bibtex]
@InProceedings{Coppola_2020_CVPR_Workshops,
author = {Coppola, Davide and Lee, Hwee Kuan and Guan, Cuntai},
title = {Interpreting Mechanisms of Prediction for Skin Cancer Diagnosis Using Multi-Task Learning},
booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {June},
year = {2020}
}