Rendering Humans from Object-Occluded Monocular Videos

Tiange Xiang, Adam Sun, Jiajun Wu, Ehsan Adeli, Li Fei-Fei; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 3239-3250

Abstract


3D understanding and rendering of moving humans from monocular videos is a challenging task. Although recent progress has enabled this task to some extent, it remains difficult to guarantee satisfactory results in real-world scenarios, where obstacles may block the camera view and cause partial occlusions in the captured videos. Existing methods cannot handle such defects for two reasons. First, the standard rendering strategy relies on point-to-point mapping, which can lead to dramatic disparities between the visible and occluded areas of the body. Second, the naive direct regression approach does not consider any feasibility criteria (i.e., prior information) for rendering under occlusions. To tackle these drawbacks, we present OccNeRF, a neural rendering method that achieves better rendering of humans in severely occluded scenes. As direct solutions to the two drawbacks, we propose a surface-based rendering strategy that integrates geometry and visibility priors. We validate our method on both simulated and real-world occlusions and demonstrate its superiority.
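
To make the abstract's two ideas concrete, the sketch below contrasts plain point-wise volume rendering with an occlusion-aware variant that down-weights samples a visibility prior marks as blocked and blends their colors toward a body-surface prior. It is a minimal NumPy illustration under assumptions of our own (the binary visibility mask, the constant surface-prior color, and all function names are hypothetical), not the authors' OccNeRF implementation.

    # Illustrative sketch only -- NOT the authors' OccNeRF implementation.
    # It contrasts plain point-wise volume rendering with a variant that
    # (a) suppresses density at samples a visibility prior flags as occluded and
    # (b) blends their colors toward a hypothetical body-surface prior.
    import numpy as np

    def volume_render(colors, densities, deltas):
        """Standard volume rendering: w_i = T_i * (1 - exp(-sigma_i * delta_i))."""
        alphas = 1.0 - np.exp(-densities * deltas)          # per-sample opacity
        trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1] + 1e-10]))  # transmittance T_i
        weights = trans * alphas
        return (weights[:, None] * colors).sum(axis=0)      # composited RGB

    def occlusion_aware_render(colors, densities, deltas, visibility, surface_colors, blend=0.5):
        """Occlusion-aware variant: occluded samples lean on a surface prior.

        visibility[i] in [0, 1]: 1 = the sample is seen by the camera, 0 = blocked.
        surface_colors[i]: color taken from the nearest point on a body-surface prior
        (e.g., a fitted template mesh) -- a stand-in for geometry-prior information.
        """
        # Suppress density contributed by samples the visibility prior marks as occluded.
        effective_density = densities * np.clip(visibility, 0.0, 1.0)
        # Blend per-point color toward the surface-prior color where visibility is low.
        mixed_colors = visibility[:, None] * colors + (1.0 - visibility[:, None]) * (
            blend * colors + (1.0 - blend) * surface_colors)
        return volume_render(mixed_colors, effective_density, deltas)

    if __name__ == "__main__":
        n = 64
        rng = np.random.default_rng(0)
        colors = rng.random((n, 3))
        densities = rng.random(n) * 5.0
        deltas = np.full(n, 1.0 / n)
        visibility = (np.arange(n) > n // 3).astype(float)   # front third of the ray is occluded
        surface_colors = np.tile([0.8, 0.6, 0.5], (n, 1))    # constant skin-like prior color
        print("point-wise:      ", volume_render(colors, densities, deltas))
        print("occlusion-aware: ", occlusion_aware_render(colors, densities, deltas,
                                                          visibility, surface_colors))

The point of the sketch is only the contrast: point-wise compositing treats every sample independently, whereas the occlusion-aware version uses prior information (visibility and a surface anchor) to keep occluded regions consistent with the visible body.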

Related Material


[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Xiang_2023_ICCV,
    author    = {Xiang, Tiange and Sun, Adam and Wu, Jiajun and Adeli, Ehsan and Fei-Fei, Li},
    title     = {Rendering Humans from Object-Occluded Monocular Videos},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {3239-3250}
}