DreamBooth3D: Subject-Driven Text-to-3D Generation
Abstract
We present DreamBooth3D, an approach to personalize text-to-3D generative models from as few as 3-6 casually captured images of a subject. Our approach combines recent advances in personalizing text-to-image models (DreamBooth) with text-to-3D generation (DreamFusion). We find that naively combining these methods fails to yield satisfactory subject-specific 3D assets, because the personalized text-to-image model overfits to the input viewpoints of the subject. We overcome this with a 3-stage optimization strategy that jointly leverages the 3D consistency of neural radiance fields and the personalization capability of text-to-image models. Our method can produce high-quality, subject-specific 3D assets with text-driven modifications such as novel poses, colors, and attributes that are not seen in any of the input images of the subject.
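The abstract compresses the method into a 3-stage optimization strategy. As a rough, non-authoritative orientation, the sketch below lays out that pipeline's control flow in Python. Every function here is a hypothetical placeholder stub (this page exposes no API), and the per-stage details beyond the abstract follow the paper's described pipeline: a partially fine-tuned DreamBooth model for an initial SDS-optimized NeRF, Img2Img translation of its renders into pseudo multi-view subject images, and a final fine-tune plus NeRF optimization with a multi-view reconstruction loss.

def dreambooth_finetune(images, partial=False):
    """Placeholder (hypothetical): DreamBooth fine-tuning of a text-to-image
    model on subject images. partial=True stops early, before the model
    overfits to the input viewpoints."""
    raise NotImplementedError

def sds_optimize_nerf(t2i_model, prompt, recon_targets=None):
    """Placeholder (hypothetical): optimize a NeRF with DreamFusion-style
    score distillation sampling (SDS), optionally adding a reconstruction
    loss against target views."""
    raise NotImplementedError

def render_views(nerf, num_views):
    """Placeholder (hypothetical): render the NeRF from num_views camera poses."""
    raise NotImplementedError

def img2img_translate(t2i_model, image, prompt):
    """Placeholder (hypothetical): Img2Img translation of a render toward
    the subject's identity."""
    raise NotImplementedError

def dreambooth3d(subject_images, prompt):
    # Stage 1: a *partially* fine-tuned model has not yet overfit to the
    # input viewpoints, so the SDS-optimized NeRF stays 3D-consistent
    # (at the cost of weaker subject fidelity).
    t2i_partial = dreambooth_finetune(subject_images, partial=True)
    nerf_initial = sds_optimize_nerf(t2i_partial, prompt)

    # Stage 2: a *fully* fine-tuned model translates multi-view renders of
    # that initial NeRF into pseudo multi-view images of the subject.
    t2i_full = dreambooth_finetune(subject_images)
    renders = render_views(nerf_initial, num_views=8)
    pseudo_views = [img2img_translate(t2i_full, r, prompt) for r in renders]

    # Stage 3: fine-tune on the real + pseudo multi-view set, then optimize
    # the final NeRF with SDS plus a reconstruction loss on the pseudo views.
    t2i_final = dreambooth_finetune(subject_images + pseudo_views)
    return sds_optimize_nerf(t2i_final, prompt, recon_targets=pseudo_views)

The stub names (dreambooth_finetune, sds_optimize_nerf, render_views, img2img_translate) and the choice of 8 rendered views are illustrative assumptions, not the authors' released code.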
Related Material

[pdf] [supp] [arXiv] [bibtex]

@InProceedings{Raj_2023_ICCV,
    author    = {Raj, Amit and Kaza, Srinivas and Poole, Ben and Niemeyer, Michael and Ruiz, Nataniel and Mildenhall, Ben and Zada, Shiran and Aberman, Kfir and Rubinstein, Michael and Barron, Jonathan and Li, Yuanzhen and Jampani, Varun},
    title     = {DreamBooth3D: Subject-Driven Text-to-3D Generation},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {2349-2359}
}