Adaptive Dual Attention into Diffusion for 3D Medical Image Segmentation

Nhu-Tai Do, Van-Hung Bui, Quoc-Huy Nguyen; Proceedings of the Asian Conference on Computer Vision (ACCV) Workshops, 2024, pp. 351-364

Abstract

Denoising diffusion models have recently demonstrated great success in generating detailed, pixel-wise representations for image synthesis. Applications such as DALL-E, Stable Diffusion, and Midjourney have showcased impressive image-generation capabilities, sparking significant discussion within the community. Recent studies have also highlighted the utility of these models in other vision tasks, including image deblurring, super-resolution, and image segmentation. However, applying diffusion models to 3D medical image segmentation presents significant challenges: the semantic features used to condition the diffusion process are often poorly aligned with the noise embedding, and the traditional U-Net backbones of diffusion models are not sufficiently sensitive to the contextual information required for accurate pixel-level segmentation during reverse diffusion. This work introduces a novel Adaptive Dual Attention into Diffusion model for 3D medical image segmentation. By integrating adaptive dual attention into the diffusion process, our method captures both local and global contextual information, enhancing the precision and robustness of 3D segmentation. Our approach surpasses current state-of-the-art methods on the BraTS2020 dataset, achieving higher segmentation accuracy. Such improvement can significantly aid diagnosis and treatment by enabling highly accurate segmentation of anatomical structures in 3D medical images.
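The paper's code is not reproduced here, but the mechanism the abstract describes, a local attention branch and a global attention branch fused by a gate conditioned on the diffusion timestep embedding, can be sketched in a few lines. The PyTorch module below is a minimal, hypothetical illustration under those assumptions; DualAttentionBlock3D, the window size, and the gating scheme are inventions for exposition, not the authors' implementation:

import torch
import torch.nn as nn

class DualAttentionBlock3D(nn.Module):
    """Hypothetical sketch of an adaptive dual-attention block for a
    diffusion U-Net on 3D volumes: a local (windowed) self-attention
    branch plus a global (channel) attention branch, fused by a gate
    conditioned on the diffusion timestep embedding. This illustrates
    the idea in the abstract, not the authors' architecture."""

    def __init__(self, channels: int, time_dim: int, window: int = 4):
        super().__init__()
        self.window = window
        self.norm = nn.GroupNorm(8, channels)
        # Local branch: multi-head self-attention inside small 3D windows.
        self.local_attn = nn.MultiheadAttention(channels, num_heads=4,
                                                batch_first=True)
        # Global branch: squeeze-and-excitation style channel attention.
        self.global_fc = nn.Sequential(
            nn.Linear(channels, channels // 4), nn.SiLU(),
            nn.Linear(channels // 4, channels), nn.Sigmoid())
        # Adaptive gate: the timestep embedding sets the local/global mix.
        self.gate = nn.Sequential(nn.SiLU(), nn.Linear(time_dim, channels))

    def forward(self, x: torch.Tensor, t_emb: torch.Tensor) -> torch.Tensor:
        # x: (B, C, D, H, W); t_emb: (B, time_dim)
        b, c, d, h, w = x.shape
        s = self.window
        assert d % s == 0 and h % s == 0 and w % s == 0, "pad to window size"
        xn = self.norm(x)

        # Local branch: attention within non-overlapping s^3 windows.
        win = xn.reshape(b, c, d // s, s, h // s, s, w // s, s)
        win = win.permute(0, 2, 4, 6, 3, 5, 7, 1).reshape(-1, s ** 3, c)
        local, _ = self.local_attn(win, win, win)
        local = local.reshape(b, d // s, h // s, w // s, s, s, s, c)
        local = local.permute(0, 7, 1, 4, 2, 5, 3, 6).reshape(b, c, d, h, w)

        # Global branch: channel reweighting from pooled volume context.
        pooled = xn.mean(dim=(2, 3, 4))  # (B, C)
        global_out = xn * self.global_fc(pooled)[:, :, None, None, None]

        # Adaptive fusion conditioned on the timestep embedding.
        alpha = torch.sigmoid(self.gate(t_emb))[:, :, None, None, None]
        return x + alpha * local + (1 - alpha) * global_out

if __name__ == "__main__":
    block = DualAttentionBlock3D(channels=32, time_dim=128)
    x = torch.randn(2, 32, 16, 16, 16)  # e.g. a 3D feature map from BraTS
    t = torch.randn(2, 128)             # sinusoidal timestep embedding
    print(block(x, t).shape)            # torch.Size([2, 32, 16, 16, 16])

One plausible motivation for making the fusion weight a function of the timestep, under this sketch's assumptions, is that early reverse-diffusion steps operate on mostly noise and benefit from global context, while later steps refine local boundary detail.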

Related Material


@InProceedings{Do_2024_ACCV,
  author    = {Do, Nhu-Tai and Bui, Van-Hung and Nguyen, Quoc-Huy},
  title     = {Adaptive Dual Attention into Diffusion for 3D Medical Image Segmentation},
  booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV) Workshops},
  month     = {December},
  year      = {2024},
  pages     = {351-364}
}