ZeroShape: Regression-based Zero-shot Shape Reconstruction

Abstract
We study the problem of single-image zero-shot 3D shape reconstruction. Recent works learn zero-shot shape reconstruction through generative modeling of 3D assets, but these models are computationally expensive at both train and inference time. In contrast, the traditional approach to this problem is regression-based, where deterministic models are trained to directly regress the object shape. Such regression methods possess much higher computational efficiency than generative methods. This raises a natural question: is generative modeling necessary for high performance, or conversely, are regression-based approaches still competitive? To answer this, we design a strong regression-based model called ZeroShape, based on the converging findings in this field and a novel insight. We also curate a large real-world evaluation benchmark with objects from three different real-world 3D datasets. This evaluation benchmark is more diverse and an order of magnitude larger than what prior works use to quantitatively evaluate their models, aiming to reduce the evaluation variance in our field. We show that ZeroShape not only achieves superior performance over state-of-the-art methods, but also demonstrates significantly higher computational and data efficiency.
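To make the contrast concrete, below is a minimal, hypothetical sketch of the regression-based setup the abstract describes: a deterministic network that maps a single RGB image (plus 3D query points) directly to occupancy values in one forward pass. This is not ZeroShape's actual architecture; the class name, layer sizes, and loss are illustrative assumptions only.

```python
# Hypothetical sketch of a regression-based single-view shape reconstructor.
# NOT the ZeroShape architecture; all module choices here are assumptions.
import torch
import torch.nn as nn

class SingleViewOccupancyRegressor(nn.Module):
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        # Toy image encoder: strided convolutions pooled to a global feature.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Implicit decoder: predicts occupancy for each 3D query point,
        # conditioned on the global image feature.
        self.decoder = nn.Sequential(
            nn.Linear(feat_dim + 3, 256), nn.ReLU(),
            nn.Linear(256, 256), nn.ReLU(),
            nn.Linear(256, 1),
        )

    def forward(self, image: torch.Tensor, points: torch.Tensor) -> torch.Tensor:
        # image: (B, 3, H, W); points: (B, N, 3) query locations in object space.
        feat = self.encoder(image).flatten(1)                   # (B, feat_dim)
        feat = feat.unsqueeze(1).expand(-1, points.shape[1], -1)
        logits = self.decoder(torch.cat([feat, points], dim=-1))
        return logits.squeeze(-1)                               # (B, N) occupancy logits

# One deterministic regression step: a single forward/backward pass, with no
# iterative sampling at inference time, which is where the efficiency gap
# relative to generative (e.g. diffusion-based) reconstruction comes from.
model = SingleViewOccupancyRegressor()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

image = torch.rand(2, 3, 128, 128)                   # dummy batch of RGB crops
points = torch.rand(2, 1024, 3) * 2 - 1              # query points in [-1, 1]^3
gt_occupancy = (points.norm(dim=-1) < 0.5).float()   # dummy labels (a sphere)

logits = model(image, points)
loss = nn.functional.binary_cross_entropy_with_logits(logits, gt_occupancy)
loss.backward()
optimizer.step()
```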
Related Material

[pdf] [supp] [arXiv] [bibtex]

@InProceedings{Huang_2024_CVPR,
  author    = {Huang, Zixuan and Stojanov, Stefan and Thai, Anh and Jampani, Varun and Rehg, James M.},
  title     = {ZeroShape: Regression-based Zero-shot Shape Reconstruction},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2024},
  pages     = {10061-10071}
}