3DMiner: Discovering Shapes from Large-Scale Unannotated Image Datasets

Ta-Ying Cheng, Matheus Gadelha, Sören Pirk, Thibault Groueix, Radomír Měch, Andrew Markham, Niki Trigoni; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 9331-9341


We present 3DMiner -- a pipeline for mining 3D shapes from challenging large-scale unannotated image datasets. Unlike other unsupervised 3D reconstruction methods, we assume that, within a large-enough dataset, there must exist images of objects with similar shapes but varying backgrounds, textures, and viewpoints. Our approach leverages the recent advances in learning self-supervised image representations to cluster images with geometrically similar shapes and find common image correspondences between them. We then exploit these correspondences to obtain rough camera estimates as initialization for bundle-adjustment. Finally, for every image cluster, we apply a progressive bundle-adjusting reconstruction method to learn a neural occupancy field representing the underlying shape. We show that this procedure is robust to several types of errors introduced in previous steps (e.g., wrong camera poses, images containing dissimilar shapes, etc.), allowing us to obtain shape and pose annotations for images in-the-wild. When using images from Pix3D chairs, our method is capable of producing significantly better results than state-of-the-art unsupervised 3D reconstruction techniques, both quantitatively and qualitatively. Furthermore, we show how 3DMiner can be applied to in-the-wild data by reconstructing shapes present in images from the LAION-5B dataset.
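The pipeline's first stage groups images whose objects share geometry by clustering their image embeddings. The snippet below is a toy sketch of that idea only: synthetic vectors and a plain k-means (with a simple farthest-point initialization) stand in for the learned self-supervised representations and the paper's actual clustering procedure, neither of which is reproduced here.

```python
import numpy as np

def farthest_point_init(X: np.ndarray, k: int) -> np.ndarray:
    """Pick k mutually distant rows of X as initial centroids."""
    chosen = [0]
    for _ in range(k - 1):
        # Distance from every point to its nearest already-chosen centroid.
        dists = np.linalg.norm(X[:, None] - X[chosen][None], axis=-1)
        chosen.append(int(dists.min(axis=1).argmax()))
    return X[chosen].astype(float)

def kmeans(X: np.ndarray, k: int, iters: int = 25) -> np.ndarray:
    """Plain k-means over embedding vectors; returns one cluster label per row."""
    centroids = farthest_point_init(X, k)
    for _ in range(iters):
        # Assign each embedding to its nearest centroid.
        labels = np.linalg.norm(X[:, None] - centroids[None], axis=-1).argmin(axis=1)
        # Recompute centroids; keep the old one if a cluster empties.
        for j in range(k):
            if (labels == j).any():
                centroids[j] = X[labels == j].mean(axis=0)
    return labels

# Synthetic stand-ins for self-supervised embeddings of two shape categories.
rng = np.random.default_rng(0)
emb_a = rng.normal(0.0, 0.1, size=(20, 8))  # e.g. images of one chair shape
emb_b = rng.normal(3.0, 0.1, size=(20, 8))  # e.g. images of a different shape
labels = kmeans(np.vstack([emb_a, emb_b]), k=2)
```

Each resulting cluster would then be treated as a multi-view image set of (roughly) one shape, feeding the correspondence, camera-initialization, and bundle-adjusting reconstruction stages described above.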

Related Material

@InProceedings{Cheng_2023_ICCV,
    author    = {Cheng, Ta-Ying and Gadelha, Matheus and Pirk, S\"oren and Groueix, Thibault and M\v{e}ch, Radom{\'\i}r and Markham, Andrew and Trigoni, Niki},
    title     = {3DMiner: Discovering Shapes from Large-Scale Unannotated Image Datasets},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {9331-9341}
}