Joint Embedding of 3D Scan and CAD Objects

Manuel Dahnert, Angela Dai, Leonidas J. Guibas, Matthias Niessner; The IEEE International Conference on Computer Vision (ICCV), 2019, pp. 8749-8758

Abstract


3D scan geometry and CAD models often contain complementary information about an environment, which could be leveraged by establishing a mapping between the two domains. However, this is a challenging task due to strong, lower-level differences between scan and CAD geometry. We propose a novel 3D CNN-based approach to learn a joint embedding space in which semantically similar objects from both domains lie close together. To learn a shared space where scan objects and CAD models can interlace, we propose a stacked hourglass approach that separates a scan object's foreground from its background clutter and transforms it into a complete, CAD-like representation before embedding. The resulting embedding space can then be used for CAD model retrieval; to further enable this task, we introduce a new dataset of ranked scan-CAD similarity annotations, enabling fine-grained evaluation of CAD model retrieval against cluttered, noisy, partial scans. Our learned joint embedding outperforms the current state of the art in CAD model retrieval by 12% in instance retrieval accuracy.
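Once such a joint embedding is learned, retrieval reduces to a nearest-neighbor lookup in the shared space. The sketch below illustrates this step only; the random vectors are hypothetical stand-ins for the outputs of the paper's learned scan and CAD encoders (the function name `retrieve_cad` and the 128-dimensional embedding size are illustrative assumptions, not details from the paper).

```python
import numpy as np

def retrieve_cad(scan_embedding, cad_embeddings):
    """Rank CAD models by L2 distance to a scan embedding in the shared space."""
    dists = np.linalg.norm(cad_embeddings - scan_embedding, axis=1)
    return np.argsort(dists)  # indices of CAD models, nearest first

# Hypothetical stand-ins for the learned encoders' outputs:
rng = np.random.default_rng(0)
cad_embeddings = rng.normal(size=(100, 128))          # 100 CAD models, 128-d embeddings
scan_embedding = cad_embeddings[42] + 0.01 * rng.normal(size=128)  # a scan near model 42

ranking = retrieve_cad(scan_embedding, cad_embeddings)
print(ranking[0])  # → 42 (the closest CAD model in embedding space)
```

Because both domains are mapped into one metric space, the same lookup works regardless of whether the query geometry came from a clean CAD model or a cluttered, partial scan.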

Related Material


[bibtex]
@InProceedings{Dahnert_2019_ICCV,
author = {Dahnert, Manuel and Dai, Angela and Guibas, Leonidas J. and Niessner, Matthias},
title = {Joint Embedding of 3D Scan and CAD Objects},
booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
month = {October},
year = {2019}
}