Render4Completion: Synthesizing Multi-View Depth Maps for 3D Shape Completion

Tao Hu, Zhizhong Han, Abhinav Shrivastava, Matthias Zwicker; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2019


We propose a novel approach to 3D shape completion that synthesizes multi-view depth maps. While previous work on shape completion relies on volumetric representations, meshes, or point clouds, we use multi-view depth maps from a set of fixed viewing angles as our shape representation. This frees us from the memory limitations of volumetric representations and point clouds by casting shape completion as an image-to-image translation problem. Specifically, we render depth maps of the incomplete shape from a fixed set of viewpoints and perform depth map completion in each view. Unlike image-to-image translation networks that process each view separately, our novel multi-view completion net (MVCN) leverages information from all views of a 3D shape to help complete each single view. This allows MVCN to exploit information across depth views, achieving higher accuracy in single-view depth completion and improving consistency among the completed depth images of different views. Benefiting from the multi-view representation and the novel network structure, MVCN significantly improves the accuracy of 3D shape completion on large-scale benchmarks compared to the state of the art.
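The rendering step described above can be sketched as follows. This is a minimal illustration, not the authors' renderer: the orthographic projection, the 8 viewpoints on a circle around the y-axis, and the 32x32 resolution are all assumptions made here for brevity; the paper's actual camera model, viewpoint set, and resolution may differ.

```python
import numpy as np

def rotation_about_y(angle):
    """Rotation matrix for a camera orbiting the shape around the y-axis."""
    c, s = np.cos(angle), np.sin(angle)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def render_depth_map(points, view_rotation, resolution=32):
    """Orthographically project a point cloud from one viewpoint,
    keeping the nearest depth per pixel (a simple z-buffer)."""
    cam_pts = points @ view_rotation.T  # points in the camera frame
    # Map x, y from [-1, 1] to pixel coordinates; z becomes the depth value.
    u = np.round((cam_pts[:, 0] + 1.0) / 2.0 * (resolution - 1)).astype(int)
    v = np.round((cam_pts[:, 1] + 1.0) / 2.0 * (resolution - 1)).astype(int)
    depth = np.full((resolution, resolution), np.inf)  # inf marks background
    for x, y, z in zip(u, v, cam_pts[:, 2]):
        if 0 <= x < resolution and 0 <= y < resolution:
            depth[y, x] = min(depth[y, x], z)
    return depth

def render_multi_view(points, n_views=8, resolution=32):
    """Render the shape from n_views fixed viewpoints around the y-axis."""
    return [render_depth_map(points,
                             rotation_about_y(2.0 * np.pi * k / n_views),
                             resolution)
            for k in range(n_views)]
```

In the pipeline the paper describes, each such incomplete depth map would be fed to the image-to-image completion network, and the completed maps from all views could then be fused back into a complete 3D shape.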

Related Material

@InProceedings{Hu_2019_ICCV,
  author    = {Hu, Tao and Han, Zhizhong and Shrivastava, Abhinav and Zwicker, Matthias},
  title     = {Render4Completion: Synthesizing Multi-View Depth Maps for 3D Shape Completion},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops},
  month     = {Oct},
  year      = {2019}
}