DaReNeRF: Direction-aware Representation for Dynamic Scenes

Ange Lou, Benjamin Planche, Zhongpai Gao, Yamin Li, Tianyu Luan, Hao Ding, Terrence Chen, Jack Noble, Ziyan Wu; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 5031-5042
Abstract
Addressing the intricate challenge of modeling and re-rendering dynamic scenes, most recent approaches have sought to simplify these complexities using plane-based explicit representations, overcoming the slow training times associated with methods like Neural Radiance Fields (NeRF) and implicit representations. However, the straightforward decomposition of 4D dynamic scenes into multiple 2D plane-based representations proves insufficient for re-rendering high-fidelity scenes with complex motions. In response, we present a novel direction-aware representation (DaRe) approach that captures scene dynamics from six different directions. This learned representation undergoes an inverse dual-tree complex wavelet transformation (DTCWT) to recover plane-based information. DaReNeRF computes features for each space-time point by fusing vectors from these recovered planes. Combining DaReNeRF with a tiny MLP for color regression and leveraging volume rendering in training yields state-of-the-art performance in novel view synthesis for complex dynamic scenes. Notably, to address the redundancy introduced by the six real and six imaginary direction-aware wavelet coefficients, we introduce a trainable masking approach that mitigates storage issues without significant performance decline. Moreover, DaReNeRF achieves a 2x reduction in training time compared to prior art while delivering superior performance.
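To make the described pipeline concrete, below is a minimal, illustrative sketch of the main ideas (learnable DTCWT coefficients per plane, inverse transform to recover plane features, per-point fusion, a tiny MLP head, and trainable coefficient masking). It is not the authors' implementation: it assumes the third-party pytorch_wavelets package for the inverse DTCWT, and the names (DaRePlane, DaReNeRFSketch), the elementwise-product fusion, and all hyperparameters are hypothetical choices for illustration.

```python
# Illustrative sketch, not the authors' code. Assumes pytorch_wavelets.
import torch
import torch.nn as nn
import torch.nn.functional as F
from pytorch_wavelets import DTCWTForward, DTCWTInverse  # assumed dependency


class DaRePlane(nn.Module):
    """One 2D feature plane stored as learnable DTCWT coefficients.

    Each decomposition level holds six complex oriented subbands (six real
    plus six imaginary coefficient sets, the "directions" in the abstract),
    and a real lowpass band remains at the coarsest scale.
    """

    def __init__(self, resolution=128, channels=16, levels=2):
        super().__init__()
        # Run one forward transform purely to obtain correctly shaped
        # coefficient tensors, then keep those tensors as the parameters.
        xfm = DTCWTForward(J=levels)
        yl, yh = xfm(0.1 * torch.randn(1, channels, resolution, resolution))
        self.lowpass = nn.Parameter(yl)  # (1, C, h, w)
        self.highpass = nn.ParameterList([nn.Parameter(h) for h in yh])
        # Trainable mask logits per highpass coefficient, sparsifying
        # storage via a straight-through estimator (one common realization
        # of the "trainable masking" mentioned in the abstract).
        self.mask_logits = nn.ParameterList(
            [nn.Parameter(torch.zeros_like(h[..., 0])) for h in yh])
        self.idtcwt = DTCWTInverse()

    def forward(self, uv):
        """uv: (N, 2) plane coordinates in [-1, 1]; returns (N, C)."""
        yh = []
        for h, m in zip(self.highpass, self.mask_logits):
            soft = torch.sigmoid(m)
            hard = (soft > 0.5).float()
            # Forward pass uses the hard 0/1 mask; gradients flow via soft.
            gate = hard.detach() - soft.detach() + soft
            yh.append(h * gate.unsqueeze(-1))  # broadcast over real/imag
        plane = self.idtcwt((self.lowpass, yh))                 # (1, C, H, W)
        grid = uv.view(1, -1, 1, 2)
        feat = F.grid_sample(plane, grid, align_corners=True)  # (1, C, N, 1)
        return feat[0, :, :, 0].t()                            # (N, C)


class DaReNeRFSketch(nn.Module):
    """Six planes covering all space-time coordinate pairs, fused here by
    elementwise product (one plausible choice), then a tiny MLP head."""

    PAIRS = ((0, 1), (0, 2), (1, 2), (0, 3), (1, 3), (2, 3))  # xy xz yz xt yt zt

    def __init__(self, channels=16):
        super().__init__()
        self.channels = channels
        self.planes = nn.ModuleList(
            [DaRePlane(channels=channels) for _ in self.PAIRS])
        self.head = nn.Sequential(  # tiny MLP regressing density and color
            nn.Linear(channels, 64), nn.ReLU(), nn.Linear(64, 4))

    def forward(self, xyzt):
        """xyzt: (N, 4) normalized space-time points in [-1, 1]."""
        feat = torch.ones(xyzt.shape[0], self.channels, device=xyzt.device)
        for plane, (i, j) in zip(self.planes, self.PAIRS):
            feat = feat * plane(xyzt[:, [i, j]])
        return self.head(feat)  # (N, 4): RGB + sigma


points = torch.rand(256, 4) * 2 - 1   # random space-time samples
rgb_sigma = DaReNeRFSketch()(points)  # (256, 4)
```

The returned per-point density and color would then be composited along camera rays with standard volume rendering, as in other NeRF-style pipelines.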
Related Material

[pdf] [supp] [arXiv] [bibtex]

@InProceedings{Lou_2024_CVPR,
    author    = {Lou, Ange and Planche, Benjamin and Gao, Zhongpai and Li, Yamin and Luan, Tianyu and Ding, Hao and Chen, Terrence and Noble, Jack and Wu, Ziyan},
    title     = {DaReNeRF: Direction-aware Representation for Dynamic Scenes},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {5031-5042}
}