Scalable Dense Non-Rigid Structure-From-Motion: A Grassmannian Perspective

Suryansh Kumar, Anoop Cherian, Yuchao Dai, Hongdong Li; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2018, pp. 254-263

Abstract


This paper addresses the task of dense non-rigid structure-from-motion (NRSfM) from multiple images. State-of-the-art methods for this problem are often hindered by limited scalability, expensive computations, and noisy measurements. Further, recent NRSfM methods usually either assume a small number of sparse feature points or ignore local non-linearities of shape deformations, and thus cannot reliably model complex non-rigid deformations. To address these issues, we propose a new approach to dense NRSfM by modeling the problem on a Grassmann manifold. Specifically, we assume that the complex non-rigid deformations lie on a union of local linear subspaces both spatially and temporally, which naturally allows for a compact representation of the deformation across frames. We provide experimental results on several synthetic and real benchmark datasets. The results clearly demonstrate that our method, besides being scalable and more accurate than state-of-the-art methods, is also more robust to noise and generalizes to highly non-linear deformations.
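To make the union-of-subspaces idea concrete, the sketch below (NumPy, not the authors' code) illustrates how a local group of shape trajectories can be mapped to a point on a Grassmann manifold via a thin SVD, and how two such points can be compared through the principal angles between their subspaces. The function names (grassmann_point, grassmann_distance), the subspace dimension, and the toy data are illustrative assumptions, not details taken from the paper.

# Minimal sketch, assuming local deformation patches are well approximated by
# low-dimensional linear subspaces; sizes and names are hypothetical.
import numpy as np

def grassmann_point(trajectories, p):
    """Map a trajectory matrix (n temporal samples x m points) to a point on
    Gr(p, n): an n x p orthonormal basis of its dominant column subspace."""
    U, _, _ = np.linalg.svd(trajectories, full_matrices=False)
    return U[:, :p]

def grassmann_distance(X, Y):
    """Geodesic distance between Grassmann points X, Y (each n x p, orthonormal),
    computed from the principal angles between the subspaces they span."""
    s = np.linalg.svd(X.T @ Y, compute_uv=False)
    theta = np.arccos(np.clip(s, -1.0, 1.0))  # principal angles
    return np.linalg.norm(theta)

# Toy usage: two noisy local patches drawn from nearby 3-dimensional subspaces
# map to nearby Grassmann points (small geodesic distance).
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 3)) @ rng.standard_normal((3, 50))
B = A + 0.05 * rng.standard_normal(A.shape)
X, Y = grassmann_point(A, 3), grassmann_point(B, 3)
print(grassmann_distance(X, Y))

Grouping nearby points and frames, mapping each group to such a subspace representative, and clustering or regularizing these representatives on the manifold is one way to obtain the compact spatial-temporal representation the abstract refers to; the paper's actual formulation should be consulted for the precise model and optimization.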

Related Material


@InProceedings{Kumar_2018_CVPR,
author = {Kumar, Suryansh and Cherian, Anoop and Dai, Yuchao and Li, Hongdong},
title = {Scalable Dense Non-Rigid Structure-From-Motion: A Grassmannian Perspective},
booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2018}
}