Efficient Dense Point Cloud Object Reconstruction using Deformation Vector Fields

Kejie Li, Trung Pham, Huangying Zhan, Ian Reid; Proceedings of the European Conference on Computer Vision (ECCV), 2018, pp. 497-513

Abstract


Most existing CNN-based methods for single-view 3D object reconstruction represent a 3D object as either a 3D voxel occupancy grid or multiple depth-mask image pairs. However, these representations are inefficient: empty voxels and background pixels waste capacity. We propose a novel approach that addresses this limitation by replacing masks with "deformation-fields". Given a single image at an arbitrary viewpoint, a CNN predicts multiple surfaces, each in a canonical location relative to the object. Each surface comprises a depth-map and a corresponding deformation-field that ensures every pixel-depth pair in the depth-map lies on the object surface. These surfaces are then fused to form the full 3D shape. During training, we use a combination of per-view and multi-view losses. The novel multi-view loss encourages the 3D points back-projected from a particular view to be consistent across views. Extensive experiments demonstrate the efficiency and efficacy of our method on single-view 3D object reconstruction.
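To make the representation concrete, the sketch below shows one plausible reading of the per-surface decoding step: a predicted depth-map is back-projected through a pinhole camera model, a per-pixel 3D deformation vector field is added so each point lands on the object surface, and the per-view clouds are fused into one point set. The function names, the pinhole intrinsics, and the simple concatenation-based fusion are illustrative assumptions, not the authors' released implementation.

```python
import numpy as np

def backproject_with_deformation(depth, deform, fx, fy, cx, cy):
    """Back-project a depth-map to 3D points, then shift each point by its
    predicted deformation vector.

    depth:  (H, W) per-pixel depth values
    deform: (H, W, 3) per-pixel 3D deformation vectors
    fx, fy, cx, cy: pinhole camera intrinsics (assumed)
    Returns an (H*W, 3) point cloud.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    # Standard pinhole back-projection of each pixel-depth pair.
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    pts = np.stack([x, y, depth], axis=-1)
    # The deformation field moves each back-projected point onto the surface.
    pts = pts + deform
    return pts.reshape(-1, 3)

def fuse_views(point_sets):
    """Fuse the per-surface point clouds into a single dense cloud
    (here simply by concatenation, as an assumption)."""
    return np.concatenate(point_sets, axis=0)
```

A multi-view consistency loss of the kind the abstract mentions would then compare clouds produced by `backproject_with_deformation` from different viewpoints after transforming them into a common frame.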

Related Material


[bibtex]
@InProceedings{Li_2018_ECCV,
author = {Li, Kejie and Pham, Trung and Zhan, Huangying and Reid, Ian},
title = {Efficient Dense Point Cloud Object Reconstruction using Deformation Vector Fields},
booktitle = {Proceedings of the European Conference on Computer Vision (ECCV)},
month = {September},
year = {2018},
pages = {497--513}
}