USP-Gaussian: Unifying Spike-based Image Reconstruction, Pose Correction and Gaussian Splatting
Abstract
The spike camera, an innovative neuromorphic camera that captures scenes as a 0-1 bit stream at 40 kHz, is increasingly employed for novel view synthesis built on techniques such as Neural Radiance Fields (NeRF) and 3D Gaussian Splatting (3DGS). Previous spike-based approaches typically follow a three-stage pipeline: I. spike-to-image reconstruction with established algorithms; II. camera pose estimation; III. novel view synthesis. However, this cascading framework suffers from substantial cumulative error: the quality of the initially reconstructed images affects pose estimation and ultimately limits the fidelity of the 3D reconstruction. To address this limitation, we propose USP-Gaussian, a synergistic optimization framework that unifies spike-to-image reconstruction, pose correction, and Gaussian Splatting into an end-to-end pipeline. Leveraging the multi-view consistency afforded by 3DGS and the motion-capture capability of the spike camera, our framework enables iterative optimization between the spike-to-image reconstruction network and 3DGS. Experiments on synthetic datasets demonstrate that our method surpasses previous approaches by effectively eliminating cascading errors. Moreover, in real-world scenarios, our method achieves robust 3D reconstruction thanks to the integrated pose optimization. Our code, data, and trained models are available at https://github.com/chenkang455/USP-Gaussian.
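The joint optimization described in the abstract can be pictured as two differentiable branches supervising each other through a shared consistency loss. The sketch below is a minimal, runnable illustration of that idea only: TinyReconNet, ToyRenderer, the L1 mutual-consistency loss, and every shape and hyperparameter are hypothetical placeholders, not the authors' network, renderer, or training objective (the actual pipeline uses a spike-to-image network and differentiable 3D Gaussian Splatting with learnable pose corrections; see the linked repository).

```python
import torch
import torch.nn as nn

class TinyReconNet(nn.Module):
    """Hypothetical stand-in for the spike-to-image reconstruction branch."""
    def __init__(self, spike_bins: int = 41):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(spike_bins, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, spikes: torch.Tensor) -> torch.Tensor:
        # spikes: (B, T, H, W) binary stream -> (B, 1, H, W) intensity estimate
        return self.net(spikes)

class ToyRenderer(nn.Module):
    """Hypothetical stand-in for a differentiable 3DGS renderer with pose correction."""
    def __init__(self, n_views: int, h: int, w: int):
        super().__init__()
        self.scene = nn.Parameter(torch.rand(1, 1, h, w))        # placeholder "scene"
        self.pose_delta = nn.Parameter(torch.zeros(n_views, 6))  # learnable per-view pose offsets
                                                                 # (not used by this toy forward pass)

    def forward(self, view_idx: torch.Tensor) -> torch.Tensor:
        # A real renderer would splat 3D Gaussians under the corrected camera pose;
        # this toy version simply returns the shared scene image for every view.
        return self.scene.expand(view_idx.numel(), -1, -1, -1)

def joint_step(spikes, view_idx, recon_net, renderer, optimizer):
    """One iteration in which the two branches supervise each other."""
    optimizer.zero_grad()
    recon = recon_net(spikes)    # image recovered from the spike stream
    render = renderer(view_idx)  # multi-view-consistent rendering
    loss = nn.functional.l1_loss(recon, render)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    B, T, H, W = 4, 41, 32, 32
    recon_net = TinyReconNet(T)
    renderer = ToyRenderer(n_views=B, h=H, w=W)
    optimizer = torch.optim.Adam(
        list(recon_net.parameters()) + list(renderer.parameters()), lr=1e-3
    )
    spikes = (torch.rand(B, T, H, W) > 0.5).float()  # fake 0-1 spike stream
    view_idx = torch.arange(B)
    for it in range(5):
        loss = joint_step(spikes, view_idx, recon_net, renderer, optimizer)
        print(f"iter {it}: consistency loss = {loss:.4f}")
```

The point the toy loop illustrates is the abstract's central claim: because reconstruction, pose correction, and splatting are optimized jointly rather than in cascade, errors produced by one stage no longer propagate unchecked into the next.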
Related Material

[bibtex]
@InProceedings{Chen_2025_CVPR,
    author    = {Chen, Kang and Zhang, Jiyuan and Hao, Zecheng and Zheng, Yajing and Huang, Tiejun and Yu, Zhaofei},
    title     = {USP-Gaussian: Unifying Spike-based Image Reconstruction, Pose Correction and Gaussian Splatting},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
    month     = {June},
    year      = {2025},
    pages     = {16609-16618}
}