VQ3D: Learning a 3D-Aware Generative Model on ImageNet

Kyle Sargent, Jing Yu Koh, Han Zhang, Huiwen Chang, Charles Herrmann, Pratul Srinivasan, Jiajun Wu, Deqing Sun; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 4240-4250

Abstract

Recent work has shown the possibility of training generative models of 3D content from 2D image collections, but only on small datasets corresponding to a single object class, such as human faces, animal faces, or cars. These models struggle on larger, more complex datasets. To model diverse and unconstrained image collections such as ImageNet, we present VQ3D, which introduces a NeRF-based decoder into a two-stage vector-quantized autoencoder. Stage 1 reconstructs an input image and lets the camera be moved around the depicted scene; Stage 2 generates entirely new 3D scenes. VQ3D is capable of generating and reconstructing 3D-aware images from the 1000-class ImageNet dataset of 1.2 million training images, and achieves a competitive ImageNet generation FID score of 16.8.
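The vector-quantization step at the core of any two-stage VQ autoencoder (including, per the abstract, VQ3D's Stage 1) maps each continuous encoder output to its nearest entry in a learned codebook, yielding discrete tokens for the Stage 2 generator. A minimal sketch of that lookup, with illustrative shapes and names that are not taken from the paper's actual implementation:

```python
import numpy as np

def quantize(latents: np.ndarray, codebook: np.ndarray):
    """Map each latent vector to its nearest codebook entry.

    latents:  (N, D) continuous encoder outputs
    codebook: (K, D) learned code vectors
    Returns (indices, quantized), where quantized[i] = codebook[indices[i]].
    (Hypothetical helper for illustration, not the paper's API.)
    """
    # Squared Euclidean distance between every latent and every code: (N, K).
    d2 = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    indices = d2.argmin(axis=1)           # (N,) discrete token ids
    return indices, codebook[indices]     # tokens and their embeddings

# Toy example: 3 latents quantized against a codebook of 4 codes in 2-D.
codebook = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
latents = np.array([[0.1, -0.1], [0.9, 0.95], [0.05, 0.8]])
idx, q = quantize(latents, codebook)
# idx -> [0, 3, 2]: each latent snaps to its nearest code
```

In the two-stage setup described above, Stage 1 trains the encoder, codebook, and (here, NeRF-based) decoder end to end, and Stage 2 then learns a generative prior over the discrete `idx` sequences.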

Related Material

[bibtex]
@InProceedings{Sargent_2023_ICCV,
  author    = {Sargent, Kyle and Koh, Jing Yu and Zhang, Han and Chang, Huiwen and Herrmann, Charles and Srinivasan, Pratul and Wu, Jiajun and Sun, Deqing},
  title     = {VQ3D: Learning a 3D-Aware Generative Model on ImageNet},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2023},
  pages     = {4240-4250}
}