@InProceedings{V_2025_CVPR,
  author    = {V, Madhumitha and Padhye, Sunayna and Madarkar, Shanawaj S and Agrawal, Susmit and Mopuri, Konda Reddy},
  title     = {Rel-SA: Alzheimer's Disease Detection using Relevance-augmented Self Attention by Inducing Domain Priors in Vision Transformers},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2025},
  pages     = {2801-2810}
}
Rel-SA: Alzheimer's Disease Detection using Relevance-augmented Self Attention by Inducing Domain Priors in Vision Transformers
Abstract
Neurodegenerative diseases like Alzheimer's Disease (AD) present unique clinical challenges due to complex, progressive brain atrophy patterns. Structural Magnetic Resonance Imaging (sMRI) is a critical tool for diagnosing such diseases. However, current methods often lack explainability and fail to highlight regions that are clinically meaningful from a clinician's perspective. Identifying key biomarkers that effectively distinguish patients with AD from healthy individuals using 3D sMRI scans thus remains a central challenge. To address this, we propose Relevance-augmented Self-Attention (Rel-SA), a neuroclinical knowledge-informed attention mechanism for Vision Transformers (ViTs). Rel-SA introduces a Relevance Bias (Rel-Bias), integrating insights from the AAL3 and JHU WM brain atlases to guide the model toward regions implicated in AD progression. Through qualitative and quantitative evaluations, we demonstrate that Rel-SA not only boosts diagnostic accuracy over ViT-base by 4% but also enhances model interpretability while adding only 24 parameters. Our work highlights the importance of incorporating clinical priors into model design and provides an effective approach to embedding domain knowledge into existing architectures, resulting in more robust and interpretable deep learning solutions for neuroimaging.
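The abstract describes Rel-SA as standard ViT self-attention augmented with an atlas-derived relevance bias. As a minimal sketch of what such a mechanism could look like, the snippet below adds a precomputed relevance matrix to the attention logits, scaled by one learned scalar per head. The class name, the per-head-scalar parameterization, and the shape of `rel_bias` are illustrative assumptions, not the paper's actual implementation (the abstract does not give the exact formulation behind the 24-parameter figure).

```python
import torch
import torch.nn as nn

class RelevanceAugmentedSelfAttention(nn.Module):
    """Hypothetical sketch: multi-head self-attention whose logits are
    shifted by an atlas-derived relevance bias, scaled per head by a
    single learned scalar (an assumption; the paper's parameterization
    may differ)."""

    def __init__(self, dim: int, num_heads: int):
        super().__init__()
        self.num_heads = num_heads
        self.head_dim = dim // num_heads
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)
        # One learnable scale per head for the relevance bias; initialized
        # to zero so the module starts as plain self-attention.
        self.rel_scale = nn.Parameter(torch.zeros(num_heads))

    def forward(self, x: torch.Tensor, rel_bias: torch.Tensor) -> torch.Tensor:
        # x: (B, N, dim); rel_bias: (N, N), precomputed token-to-token
        # relevance from the brain atlases (e.g. AAL3 / JHU WM regions).
        B, N, C = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, self.head_dim)
        q, k, v = qkv.permute(2, 0, 3, 1, 4)      # each: (B, heads, N, head_dim)
        logits = (q @ k.transpose(-2, -1)) / self.head_dim ** 0.5
        # Additive relevance bias, broadcast over batch and scaled per head.
        logits = logits + self.rel_scale.view(1, -1, 1, 1) * rel_bias
        attn = logits.softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)
```

Because the scale is zero-initialized, the bias is inactive at the start of training and the network can learn how strongly each head should attend to atlas-relevant regions.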