PARNet: Aortic Reconstruction from Orthogonal X-rays Using Pre-Trained Generative Adversarial Networks

[bibtex]
@InProceedings{Cao_2024_ACCV,
  author    = {Cao, Chengwei and Zhang, Jinhui and Gao, Yueyang and Li, Zheng},
  title     = {PARNet: Aortic Reconstruction from Orthogonal X-rays Using Pre-Trained Generative Adversarial Networks},
  booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
  month     = {December},
  year      = {2024},
  pages     = {852-869}
}
Abstract
Three-dimensional reconstruction of the aorta plays a crucial role in minimally invasive vascular interventions for coronary artery disease, helping surgeons find the optimal procedural angles for locating and delivering intervention devices. However, existing reconstruction methods face challenges such as the weak X-ray imaging capability for low-density tissues, which limits accurate capture and reconstruction of the aorta and other blood vessels. To address these challenges, we propose PARNet, a deep-learning approach for 3D aortic reconstruction from orthogonal X-rays. PARNet leverages pre-training information to extract global and local features using the Aortic Reconstruction with Background X-rays (ARB) module and the Aortic Reconstruction with Mask X-rays (ARMask) module, respectively, thereby improving reconstruction performance and capturing more aortic detail. Additionally, customized loss functions are introduced to adapt to the low-density characteristics of the aorta. The results demonstrate that our method outperforms existing approaches on mainstream datasets, producing reconstructions that are visually closest to the ground truth.
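The abstract describes a design that fuses information from two orthogonal X-ray views into a 3D volume. Below is a minimal, hypothetical PyTorch sketch of that general idea: two 2D view encoders whose fused features are decoded into a coarse 3D occupancy volume, with a soft Dice loss as one plausible choice for thin, low-density structures. All module names, tensor shapes, and the loss choice are illustrative assumptions; this is not the published PARNet architecture, its ARB/ARMask modules, or its pre-trained GAN components.

```python
# Hypothetical sketch (not the published PARNet code): two 2D encoders for orthogonal
# X-ray views, feature fusion, and a 3D decoder producing a coarse occupancy volume.
import torch
import torch.nn as nn


class ViewEncoder(nn.Module):
    """Encodes one 2D X-ray view (B, 1, 64, 64) into a compact feature map."""

    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),           # 64 -> 32
            nn.Conv2d(ch, ch * 2, 4, stride=2, padding=1), nn.ReLU(inplace=True),      # 32 -> 16
            nn.Conv2d(ch * 2, ch * 4, 4, stride=2, padding=1), nn.ReLU(inplace=True),  # 16 -> 8
            nn.AdaptiveAvgPool2d(4),                                                   # 8 -> 4
        )

    def forward(self, x):
        return self.net(x)  # (B, 4*ch, 4, 4)


class OrthogonalReconstructor(nn.Module):
    """Fuses frontal and lateral view features and decodes a 32^3 volume."""

    def __init__(self, ch=32):
        super().__init__()
        self.ch = ch
        self.enc_frontal = ViewEncoder(ch)
        self.enc_lateral = ViewEncoder(ch)
        self.fuse = nn.Linear(2 * 4 * ch * 4 * 4, 4 * ch * 4 * 4 * 4)
        self.decode = nn.Sequential(
            nn.ConvTranspose3d(4 * ch, 2 * ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),  # 4 -> 8
            nn.ConvTranspose3d(2 * ch, ch, 4, stride=2, padding=1), nn.ReLU(inplace=True),      # 8 -> 16
            nn.ConvTranspose3d(ch, 1, 4, stride=2, padding=1), nn.Sigmoid(),                    # 16 -> 32
        )

    def forward(self, frontal, lateral):
        feats = torch.cat(
            [self.enc_frontal(frontal).flatten(1), self.enc_lateral(lateral).flatten(1)], dim=1
        )
        vol = self.fuse(feats).view(-1, 4 * self.ch, 4, 4, 4)
        return self.decode(vol)  # (B, 1, 32, 32, 32)


def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss, a common choice for thin, sparse structures such as vessels."""
    inter = (pred * target).sum()
    return 1.0 - 2.0 * inter / (pred.sum() + target.sum() + eps)


if __name__ == "__main__":
    model = OrthogonalReconstructor()
    frontal = torch.rand(2, 1, 64, 64)                          # stand-in frontal X-ray
    lateral = torch.rand(2, 1, 64, 64)                          # stand-in lateral (orthogonal) X-ray
    gt_volume = (torch.rand(2, 1, 32, 32, 32) > 0.9).float()    # stand-in aortic mask volume
    pred = model(frontal, lateral)
    print(pred.shape, dice_loss(pred, gt_volume).item())
```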