Studying the Effects of Self-Attention for Medical Image Analysis

Adrit Rao, Jongchan Park, Sanghyun Woo, Joon-Young Lee, Oliver Aalami; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2021, pp. 3416-3425

Abstract


When trained physicians interpret medical images, they understand the clinical importance of visual features. Through cognitive attention, they focus on clinically relevant regions while disregarding unnecessary features. The use of computer vision to automate the classification of medical images is widely studied. However, a standard convolutional neural network (CNN) does not necessarily weigh feature relevance the way a trained medical specialist does and instead evaluates features more uniformly. Self-attention mechanisms enable CNNs to focus on semantically important regions or to aggregate relevant context through long-range dependencies. With attention, medical image analysis systems can potentially become more robust by focusing on clinically important regions. In this paper, we provide a comprehensive comparison of various state-of-the-art self-attention mechanisms across multiple medical image analysis tasks. Through both quantitative and qualitative evaluations, along with a clinical user-centric survey study, we aim to provide a deeper understanding of the effects of self-attention in medical computer vision tasks.
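As a concrete illustration of the kind of module such a comparison covers, the sketch below shows a minimal channel-attention block in the Squeeze-and-Excitation style, written in PyTorch. This is a generic example under our own assumptions, not the paper's implementation; the class name SEBlock and the reduction ratio are illustrative only.

import torch
import torch.nn as nn

class SEBlock(nn.Module):
    """Channel attention in the Squeeze-and-Excitation style (illustrative sketch).

    Globally pooled channel statistics pass through a small bottleneck MLP
    to produce per-channel weights that rescale the input feature map.
    """
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)          # squeeze: B x C x H x W -> B x C x 1 x 1
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),                            # per-channel attention weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w                                 # excitation: reweight channels

if __name__ == "__main__":
    feat = torch.randn(2, 64, 56, 56)                # dummy CNN feature map
    print(SEBlock(64)(feat).shape)                   # torch.Size([2, 64, 56, 56])

Blocks of this kind can be dropped into a CNN backbone after a convolutional stage; spatial or non-local variants follow the same pattern but attend over positions rather than channels.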

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Rao_2021_ICCV,
  author    = {Rao, Adrit and Park, Jongchan and Woo, Sanghyun and Lee, Joon-Young and Aalami, Oliver},
  title     = {Studying the Effects of Self-Attention for Medical Image Analysis},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
  month     = {October},
  year      = {2021},
  pages     = {3416-3425}
}