Pri3D: Can 3D Priors Help 2D Representation Learning?

Ji Hou, Saining Xie, Benjamin Graham, Angela Dai, Matthias Nießner; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2021, pp. 5693-5702

Abstract


Recent advances in 3D perception have shown impressive progress in understanding geometric structures of 3D shapes and even scenes. Inspired by these advances in geometric understanding, we aim to imbue image-based perception with representations learned under geometric constraints. We introduce an approach to learn view-invariant, geometry-aware representations for network pre-training, based on multi-view RGB-D data, that can then be effectively transferred to downstream 2D tasks. We propose to employ contrastive learning under both multi-view image constraints and image-geometry constraints to encode 3D priors into learned 2D representations. This not only improves over 2D-only representation learning on the image-based tasks of semantic segmentation, instance segmentation, and object detection on real-world indoor datasets, but also provides significant gains in the low-data regime. On ScanNet, we show a significant improvement over our baselines of 6.0% on semantic segmentation with full training data, and 11.9% with only 20% of the data.
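The view-to-view and view-to-geometry objectives mentioned above are contrastive. As a rough, hypothetical sketch (not the authors' implementation), a contrastive InfoNCE-style loss over matched per-pixel features from two views, where row i of each array forms a positive pair and all other rows act as negatives, might look like:

```python
import numpy as np

def info_nce_loss(feats_a, feats_b, temperature=0.07):
    """InfoNCE loss over N matched feature pairs.

    feats_a, feats_b: (N, D) arrays of L2-normalized features for N
    correspondences (e.g. pixels matched across views, or pixel-to-point
    matches); row i of each array is a positive pair, all other rows
    in feats_b serve as negatives for row i of feats_a.
    """
    logits = feats_a @ feats_b.T / temperature       # (N, N) similarities
    logits -= logits.max(axis=1, keepdims=True)      # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))              # positives on diagonal

# Toy usage: features from two views of the same scene (second view
# simulated here by adding noise and re-normalizing).
rng = np.random.default_rng(0)
f = rng.normal(size=(8, 16))
f /= np.linalg.norm(f, axis=1, keepdims=True)
g = f + 0.05 * rng.normal(size=f.shape)
g /= np.linalg.norm(g, axis=1, keepdims=True)
loss = info_nce_loss(f, g)
```

In the multi-view setting, correspondences would come from projecting 3D points into overlapping RGB-D frames; the function names and the noise-based toy pairing above are illustrative assumptions, not part of the paper.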

Related Material


[bibtex]
@InProceedings{Hou_2021_ICCV,
    author    = {Hou, Ji and Xie, Saining and Graham, Benjamin and Dai, Angela and Nie{\ss}ner, Matthias},
    title     = {Pri3D: Can 3D Priors Help 2D Representation Learning?},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2021},
    pages     = {5693-5702}
}