DNA: Deformable Neural Articulations Network for Template-Free Dynamic 3D Human Reconstruction From Monocular RGB-D Video
In this paper, we present the Deformable Neural Articulations Network (DNA-Net), a novel template-free learning-based method for dynamic 3D human reconstruction from a single RGB-D sequence. DNA-Net includes a Neural Articulation Prediction Network (NAP-Net), which represents the non-rigid motion of a human by learning to predict a set of articulated bones that follow the movements of the human in the input video. DNA-Net also includes a Signed Distance Field Network (SDF-Net) and an Appearance Network (Color-Net), which take advantage of the power of neural implicit functions in modeling 3D geometry and appearance. Finally, to avoid relying on external optical flow estimators for deformation cues, as previous related works do, we propose a novel training loss, namely the Easy-to-Hard Geometric-based loss: a simple strategy that inherits the merits of the Chamfer distance to achieve good deformation guidance while avoiding its sensitivity to local mismatches. DNA-Net is trained end-to-end in a self-supervised manner directly on the input video to obtain 3D reconstructions of the captured subject. Quantitative results on videos from the DeepDeform dataset show that DNA-Net outperforms related state-of-the-art methods by clear margins, and qualitative results further demonstrate that our method reconstructs human shapes with high fidelity and fine details.
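The abstract does not give the formula for the proposed loss, but since it is described as building on the Chamfer distance, the following is a minimal sketch of the standard symmetric Chamfer distance between two point sets that the Easy-to-Hard Geometric-based loss would refine; the helper name `chamfer_distance` is illustrative, not from the paper.

```python
import numpy as np

def chamfer_distance(p: np.ndarray, q: np.ndarray) -> float:
    """Symmetric Chamfer distance between point sets p (N, 3) and q (M, 3).

    For each point in one set, find the squared distance to its nearest
    neighbor in the other set; sum the means of both directions. Note the
    nearest-neighbor matching is what makes this term sensitive to local
    mismatches, the limitation the proposed loss aims to avoid.
    """
    # Pairwise squared distances via broadcasting, shape (N, M).
    d2 = np.sum((p[:, None, :] - q[None, :, :]) ** 2, axis=-1)
    # p -> q direction plus q -> p direction.
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())
```

For identical point sets the distance is zero; it grows smoothly as one set deforms away from the other, which is what makes it a natural self-supervised deformation-guidance signal.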