4DComplete: Non-Rigid Motion Estimation Beyond the Observable Surface

Yang Li, Hikari Takehara, Takafumi Taketomi, Bo Zheng, Matthias Nießner; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 12706-12716

Abstract

Tracking non-rigidly deforming scenes using range sensors has numerous applications in computer vision, AR/VR, and robotics. However, due to occlusions and the physical limitations of range sensors, existing methods handle only the visible surface, causing discontinuities and incompleteness in the motion field. To address this, we introduce 4DComplete, a novel data-driven approach that estimates non-rigid motion for the unobserved geometry. 4DComplete takes as input a partial shape and motion observation, extracts a 4D time-space embedding, and jointly infers the missing geometry and motion field using a sparse fully-convolutional network. For network training, we constructed a large-scale synthetic dataset called DeformingThings4D, which consists of 1,972 animation sequences spanning 31 different animal and humanoid categories with dense 4D annotations. Experiments show that 4DComplete 1) reconstructs a high-resolution volumetric shape and motion field from a partial observation, 2) learns an entangled 4D feature representation that benefits both shape and motion estimation, 3) yields more accurate and natural deformation than classic non-rigid priors such as As-Rigid-As-Possible (ARAP) deformation, and 4) generalizes well to unseen objects in real-world sequences.
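The abstract describes a single network that jointly completes geometry and regresses a motion field from a partial volumetric observation. The minimal PyTorch sketch below illustrates that two-head design only; it substitutes dense Conv3d layers for the paper's sparse fully-convolutional network, and the class name (Joint4DNet), channel sizes, and the occupancy/motion heads are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class Joint4DNet(nn.Module):
    """Illustrative two-head network: a shared volumetric encoder over a
    partial observation (occupancy + per-voxel motion), one head completing
    geometry and one regressing a dense 3D motion field. Dense Conv3d is an
    assumed stand-in for the paper's sparse fully-convolutional backbone."""

    def __init__(self, feat=32):
        super().__init__()
        # Input: 4 channels per voxel = occupancy (1) + motion vector (3).
        self.encoder = nn.Sequential(
            nn.Conv3d(4, feat, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(feat, feat * 2, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(feat * 2, feat, kernel_size=4, stride=2, padding=1),
            nn.ReLU(inplace=True),
        )
        # Head 1: per-voxel occupancy logits for the completed shape.
        self.geometry_head = nn.Conv3d(feat, 1, kernel_size=1)
        # Head 2: per-voxel 3D motion vector, predicted for unseen voxels too.
        self.motion_head = nn.Conv3d(feat, 3, kernel_size=1)

    def forward(self, x):
        z = self.decoder(self.encoder(x))  # shared time-space features
        return self.geometry_head(z), self.motion_head(z)

# Toy usage: a batch of one 32^3 grid with occupancy + motion channels.
net = Joint4DNet()
grid = torch.randn(1, 4, 32, 32, 32)
occ_logits, motion = net(grid)
print(occ_logits.shape, motion.shape)  # (1, 1, 32, 32, 32) (1, 3, 32, 32, 32)
```

The shared trunk mirrors the abstract's claim that an entangled 4D feature representation benefits both tasks: both heads read the same decoded features, so shape supervision shapes the representation the motion head uses, and vice versa.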

Related Material

[pdf] [supp] [arXiv]
[bibtex]
@InProceedings{Li_2021_ICCV,
    author    = {Li, Yang and Takehara, Hikari and Taketomi, Takafumi and Zheng, Bo and Nie{\ss}ner, Matthias},
    title     = {4DComplete: Non-Rigid Motion Estimation Beyond the Observable Surface},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {12706-12716}
}