[pdf]
[arXiv]
[bibtex]
@InProceedings{Gao_2021_ICCV,
  author    = {Gao, Chen and Saraf, Ayush and Kopf, Johannes and Huang, Jia-Bin},
  title     = {Dynamic View Synthesis From Dynamic Monocular Video},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2021},
  pages     = {5712-5721}
}
Dynamic View Synthesis From Dynamic Monocular Video
Abstract
We present an algorithm for generating novel views at arbitrary viewpoints and any input time step given a monocular video of a dynamic scene. Our work builds upon recent advances in neural implicit representation and uses continuous and differentiable functions for modeling the time-varying structure and the appearance of the scene. We jointly train a time-invariant static NeRF and a time-varying dynamic NeRF, and learn how to blend the results in an unsupervised manner. However, learning this implicit function from a single video is highly ill-posed (with infinitely many solutions that match the input video). To resolve the ambiguity, we introduce regularization losses to encourage a more physically plausible solution. We show extensive quantitative and qualitative results of dynamic view synthesis from casually captured videos.
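The abstract describes compositing a time-invariant static NeRF with a time-varying dynamic NeRF via a learned blending weight. The following is a minimal sketch of that idea, not the authors' implementation: it assumes per-sample densities, colors, and a blending weight along one ray (all names are hypothetical), applies a simple convex blend of the two fields, and composites with standard NeRF volume rendering. The paper learns the blending weight without supervision; here it is just an input.

```python
import numpy as np

def blended_render(sigma_s, rgb_s, sigma_d, rgb_d, blend, deltas):
    """Composite one ray by blending a static and a dynamic radiance field.

    All arrays are per-sample along the ray (hypothetical names):
      sigma_s, sigma_d : (N,) densities from the static / dynamic NeRF
      rgb_s, rgb_d     : (N, 3) colors in [0, 1] from each field
      blend            : (N,) weight in [0, 1] toward the dynamic field
      deltas           : (N,) spacing between consecutive samples
    """
    # Convex per-sample blend of the two fields (simplified stand-in for
    # the paper's unsupervised blending).
    sigma = blend * sigma_d + (1.0 - blend) * sigma_s
    rgb = blend[:, None] * rgb_d + (1.0 - blend)[:, None] * rgb_s
    # Standard NeRF volume rendering: opacity, transmittance, weights.
    alpha = 1.0 - np.exp(-sigma * deltas)
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(axis=0)  # final ray color, shape (3,)
```

When `blend` is 1 everywhere the render reduces to the dynamic field alone, and when it is 0 to the static field, so the weight smoothly partitions the scene between the two representations.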