Amodal3R: Amodal 3D Reconstruction from Occluded 2D Images

Tianhao Wu, Chuanxia Zheng, Frank Guan, Andrea Vedaldi, Tat-Jen Cham; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2025, pp. 9181-9193

Abstract
Most existing image-to-3D models assume that objects are fully visible, ignoring occlusions that commonly occur in real-world scenarios. In this paper, we introduce Amodal3R, a conditional image-to-3D model designed to reconstruct plausible 3D geometry and appearance from partial observations. We extend a "foundation" 3D generator by introducing a visible mask-weighted attention mechanism and an occlusion-aware attention layer that explicitly leverages visible and occlusion priors to guide the reconstruction process. We demonstrate that, by training solely on synthetic data, Amodal3R learns to recover full 3D objects even in the presence of occlusions in real scenes. It substantially outperforms state-of-the-art methods that independently perform 2D amodal completion followed by 3D reconstruction, thereby establishing a new benchmark for occlusion-aware 3D reconstruction.
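The abstract describes weighting attention by a visibility mask so that occluded regions do not drive the reconstruction. The paper's exact formulation is not given here, but the core idea can be sketched as follows; all names (`mask_weighted_attention`, `visible_mask`) are illustrative assumptions, not the authors' API:

```python
# Hypothetical sketch of visibility-mask-weighted attention.
# Not the paper's implementation: function and variable names are invented.
import math

def softmax(xs):
    """Numerically stable softmax over a list of logits."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def mask_weighted_attention(scores, visible_mask):
    """Down-weight attention toward occluded tokens, then renormalize.

    scores:       raw attention logits for one query over N key tokens
    visible_mask: per-token visibility in [0, 1] (1 = fully visible)
    """
    weights = softmax(scores)
    masked = [w * v for w, v in zip(weights, visible_mask)]
    total = sum(masked) or 1.0  # guard against all tokens being occluded
    return [m / total for m in masked]

# Example: the third token is fully occluded, so it receives zero weight
# and the remaining weight is redistributed over the visible tokens.
attn = mask_weighted_attention([1.0, 0.5, 2.0], [1.0, 1.0, 0.0])
```

The renormalization step keeps the weights a valid distribution over the visible tokens, which is one plausible way to let visibility priors guide the generator without changing the attention interface.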

Related Material
[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Wu_2025_ICCV,
    author    = {Wu, Tianhao and Zheng, Chuanxia and Guan, Frank and Vedaldi, Andrea and Cham, Tat-Jen},
    title     = {Amodal3R: Amodal 3D Reconstruction from Occluded 2D Images},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2025},
    pages     = {9181-9193}
}