@InProceedings{Kim_2023_ICCV,
  author    = {Kim, Hyunsu and Lee, Gayoung and Choi, Yunjey and Kim, Jin-Hwa and Zhu, Jun-Yan},
  title     = {3D-aware Blending with Generative NeRFs},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2023},
  pages     = {22906-22918}
}
3D-aware Blending with Generative NeRFs
Abstract
Image blending aims to combine multiple images seamlessly. It remains challenging for existing 2D-based methods, especially when the input images are misaligned due to differences in 3D camera poses and object shapes. To tackle these issues, we propose a 3D-aware blending method using generative Neural Radiance Fields (NeRFs), with two key components: 3D-aware alignment and 3D-aware blending. For 3D-aware alignment, we first estimate the camera pose of the reference image with respect to the generative NeRF and then perform pose alignment for objects. To further leverage the 3D information of the generative NeRF, we propose 3D-aware blending, which utilizes volume density and blends in the NeRF's latent space rather than in raw pixel space. Collectively, our method outperforms existing 2D baselines, as validated by extensive quantitative and qualitative evaluations on FFHQ and AFHQ-Cat.
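The blending idea in the abstract can be sketched in a few lines: derive an opacity mask from the NeRF's volume density (via the standard volume-rendering relation alpha = 1 - exp(-sigma * delta)) and take a convex combination of latent feature grids instead of raw pixels. This is a minimal illustrative sketch, not the authors' implementation; all function names, shapes, and the `delta` step size are assumptions.

```python
import numpy as np

def density_to_alpha(sigma, delta=0.01):
    # Convert per-sample volume density to opacity, following the
    # standard NeRF volume-rendering relation: alpha = 1 - exp(-sigma * delta).
    # `delta` (sample spacing) is a hypothetical constant here.
    return 1.0 - np.exp(-sigma * delta)

def blend_latents(z_orig, z_ref, mask):
    # Latent-space blending sketch: convex combination of two latent
    # feature grids, weighted by a density-derived foreground mask.
    # z_orig, z_ref: (H, W, C) latent features; mask: (H, W) in [0, 1].
    m = mask[..., None]
    return m * z_ref + (1.0 - m) * z_orig

# Toy usage with random tensors (hypothetical shapes).
rng = np.random.default_rng(0)
z_a = rng.normal(size=(4, 4, 8))        # latent grid of the original image
z_b = rng.normal(size=(4, 4, 8))        # latent grid of the reference image
sigma = rng.uniform(0.0, 100.0, size=(4, 4))  # per-pixel volume density
alpha = density_to_alpha(sigma)
z_blend = blend_latents(z_a, z_b, alpha)
```

The decoded result of `z_blend` would then be rendered by the generator; blending in latent space lets the network resolve seams and misalignments that per-pixel compositing cannot.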
Related Material