CodeNeRF: Disentangled Neural Radiance Fields for Object Categories

Wonbong Jang, Lourdes Agapito; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 12949-12958

Abstract


CodeNeRF is an implicit 3D neural representation that learns the variation of object shapes and textures across a category and can be trained, from a set of posed images, to synthesize novel views of unseen objects. Unlike the original NeRF, which is scene-specific, CodeNeRF learns to disentangle shape and texture by learning separate embeddings. At test time, given a single unposed image of an unseen object, CodeNeRF jointly estimates the camera viewpoint and the shape and appearance codes via optimization. Unseen objects can be reconstructed from a single image and then rendered from new viewpoints, or their shape and texture edited by varying the latent codes. Experiments on the SRN benchmark show that CodeNeRF generalises well to unseen objects and achieves on-par performance with methods that require known camera poses at test time. Our results on real-world images demonstrate that CodeNeRF can bridge the sim-to-real gap. Project page: https://github.com/wayne1123/code-nerf
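The disentanglement the abstract describes, where a shape code conditions geometry and a texture code conditions appearance, can be illustrated with a minimal NumPy sketch. All dimensions, weight shapes, and the two-branch split below are illustrative assumptions for exposition, not the paper's actual architecture: density is predicted from position and the shape code only, while colour additionally depends on the view direction and the texture code.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, weights):
    # tiny ReLU MLP: matrix multiply each layer, ReLU between layers
    for W in weights[:-1]:
        x = np.maximum(x @ W, 0.0)
    return x @ weights[-1]

# hypothetical dimensions (not from the paper)
D_POS, D_DIR, D_SHAPE, D_TEX, H = 3, 3, 8, 8, 32

# shape branch: (position, shape code) -> (density, feature vector)
W_shape = [rng.normal(size=(D_POS + D_SHAPE, H)) * 0.1,
           rng.normal(size=(H, H + 1)) * 0.1]   # first output column = density
# texture branch: (feature, view direction, texture code) -> RGB
W_tex = [rng.normal(size=(H + D_DIR + D_TEX, H)) * 0.1,
         rng.normal(size=(H, 3)) * 0.1]

def codenerf_field(xyz, viewdir, z_shape, z_tex):
    """Disentangled radiance field: density depends only on the shape
    code; colour additionally on the texture code and view direction."""
    h = mlp(np.concatenate([xyz, z_shape], axis=-1), W_shape)
    sigma = np.maximum(h[..., :1], 0.0)               # non-negative density
    feat = h[..., 1:]
    rgb_in = np.concatenate([feat, viewdir, z_tex], axis=-1)
    rgb = 1.0 / (1.0 + np.exp(-mlp(rgb_in, W_tex)))   # sigmoid -> [0, 1]
    return sigma, rgb

# query a batch of 4 sample points for one object instance:
# the same latent codes are shared across all points of the object
xyz = rng.normal(size=(4, D_POS))
viewdir = rng.normal(size=(4, D_DIR))
z_shape = np.tile(rng.normal(size=(1, D_SHAPE)), (4, 1))
z_tex = np.tile(rng.normal(size=(1, D_TEX)), (4, 1))
sigma, rgb = codenerf_field(xyz, viewdir, z_shape, z_tex)
print(sigma.shape, rgb.shape)  # (4, 1) (4, 3)
```

At test time, the paper's single-image inversion would correspond to holding the network weights fixed and optimizing `z_shape`, `z_tex`, and the camera pose against a photometric loss; editing shape or texture amounts to varying one code while keeping the other fixed.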

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Jang_2021_ICCV,
    author    = {Jang, Wonbong and Agapito, Lourdes},
    title     = {CodeNeRF: Disentangled Neural Radiance Fields for Object Categories},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {12949-12958}
}