@InProceedings{Zhu_2024_WACV,
  author    = {Zhu, Zhen and Li, Yijun and Lyu, Weijie and Singh, Krishna Kumar and Shu, Zhixin and Pirk, S\"oren and Hoiem, Derek},
  title     = {Consistent Multimodal Generation via a Unified GAN Framework},
  booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
  month     = {January},
  year      = {2024},
  pages     = {5048-5057}
}
Consistent Multimodal Generation via a Unified GAN Framework
Abstract
We investigate how to generate multimodal image outputs, such as RGB, depth, and surface normals, with a single generative model. The challenge is to produce outputs that are both realistic and mutually consistent. Our solution builds on the StyleGAN3 architecture, with a shared backbone and modality-specific branches in the last layers of the synthesis network, and we propose per-modality fidelity discriminators and a cross-modality consistency discriminator. In experiments on the Stanford2D3D dataset, we demonstrate realistic and consistent generation of RGB, depth, and normal images. We also present a training recipe for easily extending our pretrained model to a new domain, even with only a small amount of paired data. We further evaluate the use of synthetically generated RGB and depth pairs for training or fine-tuning depth estimators. Code will be available at https://github.com/jessemelpolio/MultimodalGAN.
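The shared-backbone/per-modality-branch design described in the abstract can be illustrated with a minimal structural sketch. This is not the authors' code; it is a hypothetical NumPy stand-in (real layers replaced by random linear maps, StyleGAN3 synthesis reduced to a single function) meant only to show how one latent vector yields aligned RGB, depth, and normal outputs that a joint consistency discriminator can score together.

```python
# Hypothetical sketch of the unified multimodal generator layout.
# All weights are random placeholders, not a trained model.
import numpy as np

rng = np.random.default_rng(0)

def shared_backbone(z, feat_dim=64):
    # Stand-in for the shared StyleGAN3 synthesis layers:
    # one latent vector -> one shared feature vector.
    W = rng.standard_normal((feat_dim, z.shape[-1]))
    return np.tanh(W @ z)

def modality_head(feats, out_ch):
    # Modality-specific branch in the last layers of the network
    # (RGB: 3 channels, depth: 1 channel, surface normals: 3 channels).
    W = rng.standard_normal((out_ch, feats.shape[-1]))
    return W @ feats

def generate(z):
    # All modalities are decoded from the SAME shared features,
    # which is what encourages cross-modal alignment.
    feats = shared_backbone(z)
    return {
        "rgb": modality_head(feats, 3),
        "depth": modality_head(feats, 1),
        "normal": modality_head(feats, 3),
    }

def consistency_discriminator(outputs):
    # A cross-modality critic sees all modalities jointly
    # (a real model would be a conv net over channel-concatenated images).
    joint = np.concatenate([outputs["rgb"], outputs["depth"], outputs["normal"]])
    return float(joint.mean())

z = rng.standard_normal(16)
outs = generate(z)
score = consistency_discriminator(outs)
```

Per-modality fidelity discriminators would, by contrast, each receive only one of `outs["rgb"]`, `outs["depth"]`, or `outs["normal"]`.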