SymmNeRF: Learning to Explore Symmetry Prior for Single-View View Synthesis

Xingyi Li, Chaoyi Hong, Yiran Wang, Zhiguo Cao, Ke Xian, Guosheng Lin; Proceedings of the Asian Conference on Computer Vision (ACCV), 2022, pp. 1726-1742

Abstract


We study the problem of novel view synthesis of objects from a single image. Existing methods have demonstrated their potential for single-view view synthesis. However, they still fail to recover fine appearance details, especially in self-occluded areas, because a single view provides only limited information. We observe that man-made objects usually exhibit symmetric appearances, which provide additional prior knowledge. Motivated by this, we investigate the potential performance gains of explicitly embedding symmetry into the scene representation. In this paper, we propose SymmNeRF, a neural radiance field (NeRF)-based framework that combines local and global conditioning with symmetry priors. In particular, SymmNeRF takes pixel-aligned image features and the corresponding symmetric features as extra inputs to the NeRF, whose parameters are generated by a hypernetwork. Because the parameters are conditioned on image-encoded latent codes, SymmNeRF is scene-independent and can generalize to new scenes. Experiments on synthetic and real-world datasets show that SymmNeRF synthesizes novel views with more detail regardless of the pose transformation, and demonstrates good generalization when applied to unseen objects. Code is available at: https://github.com/xingyi-li/SymmNeRF.
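The conditioning scheme described above can be illustrated with a minimal sketch. This is not the authors' released implementation: the module names, feature dimensions, intrinsics, and the assumption of a symmetry plane at x = 0 are all illustrative choices. It only shows the idea of a hypernetwork predicting NeRF-MLP weights from an image-encoded latent code, with pixel-aligned features sampled at a 3D point and at its mirrored counterpart fed in as extra inputs.

```python
# Minimal sketch (assumptions, not the paper's code) of symmetry-aware
# conditioning: a hypernetwork maps a latent code to the weights of a small
# NeRF-style MLP; the MLP also receives pixel-aligned features sampled at a
# 3D point and at its reflection about an assumed symmetry plane (x = 0).
import torch
import torch.nn as nn
import torch.nn.functional as F


class HyperLinear(nn.Module):
    """Linear layer whose weight and bias are predicted from a latent code."""

    def __init__(self, latent_dim: int, in_dim: int, out_dim: int):
        super().__init__()
        self.in_dim, self.out_dim = in_dim, out_dim
        self.to_weight = nn.Linear(latent_dim, in_dim * out_dim)
        self.to_bias = nn.Linear(latent_dim, out_dim)

    def forward(self, x: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        w = self.to_weight(z).view(self.out_dim, self.in_dim)
        b = self.to_bias(z)
        return F.linear(x, w, b)


class SymmetryConditionedNeRF(nn.Module):
    """Tiny NeRF head conditioned on pixel-aligned + symmetric features."""

    def __init__(self, latent_dim=256, feat_dim=64, hidden=128):
        super().__init__()
        in_dim = 3 + 2 * feat_dim                      # point + (feature, mirrored feature)
        self.fc1 = HyperLinear(latent_dim, in_dim, hidden)
        self.fc2 = HyperLinear(latent_dim, hidden, 4)  # RGB + density

    def forward(self, pts, feats, sym_feats, z):
        h = F.relu(self.fc1(torch.cat([pts, feats, sym_feats], dim=-1), z))
        return self.fc2(h, z)


def sample_pixel_aligned(feat_map, pts, K):
    """Project 3D points with intrinsics K and bilinearly sample features."""
    uv = (K @ pts.T).T                       # (N, 3) camera coordinates
    uv = uv[:, :2] / uv[:, 2:3]              # perspective divide -> pixel coords
    h, w = feat_map.shape[-2:]
    grid = torch.stack([uv[:, 0] / (w - 1), uv[:, 1] / (h - 1)], dim=-1) * 2 - 1
    out = F.grid_sample(feat_map[None], grid.view(1, 1, -1, 2), align_corners=True)
    return out[0, :, 0].T                    # (N, C)


if __name__ == "__main__":
    feat_map = torch.randn(64, 128, 128)     # encoder feature map (C, H, W), assumed
    z = torch.randn(256)                     # image-encoded latent code, assumed size
    K = torch.tensor([[100.0, 0, 64], [0, 100.0, 64], [0, 0, 1]])
    pts = torch.rand(1024, 3) + torch.tensor([0.0, 0.0, 2.0])   # points in front of camera
    mirror = pts * torch.tensor([-1.0, 1.0, 1.0])               # reflect about x = 0 plane

    feats = sample_pixel_aligned(feat_map, pts, K)
    sym_feats = sample_pixel_aligned(feat_map, mirror, K)
    rgb_sigma = SymmetryConditionedNeRF()(pts, feats, sym_feats, z)
    print(rgb_sigma.shape)  # torch.Size([1024, 4])
```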

Related Material


[pdf] [supp] [arXiv] [code]
[bibtex]
@InProceedings{Li_2022_ACCV,
    author    = {Li, Xingyi and Hong, Chaoyi and Wang, Yiran and Cao, Zhiguo and Xian, Ke and Lin, Guosheng},
    title     = {SymmNeRF: Learning to Explore Symmetry Prior for Single-View View Synthesis},
    booktitle = {Proceedings of the Asian Conference on Computer Vision (ACCV)},
    month     = {December},
    year      = {2022},
    pages     = {1726-1742}
}