Geometry-Free View Synthesis: Transformers and No 3D Priors

Robin Rombach, Patrick Esser, Björn Ommer; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 14356-14366

Abstract


Is a geometric model required to synthesize novel views from a single image? Being bound to local convolutions, CNNs need explicit 3D biases to model geometric transformations. In contrast, we demonstrate that a transformer-based model can synthesize entirely novel views without any hand-engineered 3D biases. This is achieved by (i) a global attention mechanism for implicitly learning long-range 3D correspondences between source and target views, and (ii) a probabilistic formulation necessary to capture the ambiguity inherent in predicting novel views from a single image, thereby overcoming the limitations of previous approaches that are restricted to relatively small viewpoint changes. We evaluate various ways to integrate 3D priors into a transformer architecture. However, our experiments show that no such geometric priors are required and that the transformer is capable of implicitly learning 3D relationships between images. Furthermore, this approach outperforms the state of the art in terms of visual quality while covering the full distribution of possible realizations.
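The abstract describes conditioning a transformer on a source view and a camera transformation, and modeling the target view probabilistically with global attention instead of explicit 3D warping. Below is a minimal, hypothetical sketch of that idea: an autoregressive transformer over discrete image tokens (e.g., codes from a learned image tokenizer) conditioned on source-view tokens and a flattened camera transformation. All class names, token lengths, dimensions, and the exact conditioning scheme are illustrative assumptions, not the authors' released implementation.

# Hypothetical sketch (assumptions, not the paper's released code): target-view
# tokens are predicted autoregressively, conditioned on source-view tokens and a
# camera-transformation embedding. Global attention lets every target position
# attend to all source positions, so no hand-engineered 3D bias is built in.
import torch
import torch.nn as nn

class GeometryFreeViewTransformer(nn.Module):
    def __init__(self, vocab_size=1024, d_model=512, n_heads=8, n_layers=8,
                 src_len=256, tgt_len=256, cam_dim=12):
        super().__init__()
        self.tok_emb = nn.Embedding(vocab_size, d_model)
        self.pos_emb = nn.Parameter(torch.zeros(1, src_len + 1 + tgt_len, d_model))
        # Camera transformation (e.g., flattened rotation + translation) becomes
        # a single conditioning token.
        self.cam_proj = nn.Linear(cam_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads,
                                           dim_feedforward=4 * d_model,
                                           batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, vocab_size)
        self.src_len, self.tgt_len = src_len, tgt_len

    def forward(self, src_tokens, cam, tgt_tokens):
        # src_tokens: (B, src_len) discrete codes of the source view
        # cam:        (B, cam_dim) relative camera transformation
        # tgt_tokens: (B, tgt_len) codes of the target view (teacher forcing)
        x = torch.cat([self.tok_emb(src_tokens),
                       self.cam_proj(cam).unsqueeze(1),
                       self.tok_emb(tgt_tokens)], dim=1)
        x = x + self.pos_emb[:, :x.size(1)]
        L = x.size(1)
        cond = self.src_len + 1
        # Conditioning positions never attend to target tokens; target positions
        # attend to all conditioning tokens and causally to earlier target tokens.
        mask = torch.zeros(L, L, dtype=torch.bool, device=x.device)
        mask[:cond, cond:] = True
        mask[cond:, cond:] = torch.triu(
            torch.ones(self.tgt_len, self.tgt_len, dtype=torch.bool,
                       device=x.device), diagonal=1)
        x = self.blocks(x, mask=mask)
        # Each target token is predicted from the position preceding it.
        logits = self.head(x[:, cond - 1:-1])
        return logits

if __name__ == "__main__":
    # Toy usage with random tokens, illustrating the training signal
    # (cross-entropy over target-view codes).
    model = GeometryFreeViewTransformer()
    src = torch.randint(0, 1024, (2, 256))   # source-view codes
    cam = torch.randn(2, 12)                 # flattened relative pose
    tgt = torch.randint(0, 1024, (2, 256))   # target-view codes
    logits = model(src, cam, tgt)            # (2, 256, 1024)
    loss = nn.functional.cross_entropy(logits.reshape(-1, 1024), tgt.reshape(-1))
    print(logits.shape, loss.item())

Because the model is a likelihood over target tokens, sampling several completions for one source view yields multiple plausible novel views, which is one way to realize the probabilistic formulation mentioned above.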

Related Material


BibTeX:

@InProceedings{Rombach_2021_ICCV,
    author    = {Rombach, Robin and Esser, Patrick and Ommer, Bj\"orn},
    title     = {Geometry-Free View Synthesis: Transformers and No 3D Priors},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {14356-14366}
}