What Do Single-View 3D Reconstruction Networks Learn?

Maxim Tatarchenko, Stephan R. Richter, Rene Ranftl, Zhuwen Li, Vladlen Koltun, Thomas Brox; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 3405-3414

Abstract


Convolutional networks for single-view object reconstruction have shown impressive performance and have become a popular subject of research. All existing techniques are united by the idea of having an encoder-decoder network that performs non-trivial reasoning about the 3D structure of the output space. In this work, we set up two alternative approaches that perform image classification and retrieval, respectively. These simple baselines yield better results than state-of-the-art methods, both qualitatively and quantitatively. We show that encoder-decoder methods are statistically indistinguishable from these baselines, indicating that the current state of the art in single-view object reconstruction does not actually perform reconstruction but image classification. We identify aspects of popular experimental procedures that elicit this behavior and discuss ways to improve the current state of research.
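The retrieval baseline mentioned in the abstract can be sketched as a nearest-neighbor lookup: embed the input image, find the closest training image in embedding space, and return that image's stored 3D shape verbatim, with no decoder or 3D reasoning involved. The sketch below is illustrative only; the embedding, data, and function names are assumptions, not the paper's implementation.

```python
import numpy as np

def retrieval_baseline(query_embedding, train_embeddings, train_shapes):
    """Return the stored 3D shape (e.g., a voxel grid) of the training
    image whose embedding is nearest to the query -- no decoding at all."""
    dists = np.linalg.norm(train_embeddings - query_embedding, axis=1)
    return train_shapes[int(np.argmin(dists))]

# Toy demo with random data: 3 training examples, 8-D embeddings,
# 4x4x4 binary voxel grids standing in for real shapes.
rng = np.random.default_rng(0)
train_embeddings = rng.normal(size=(3, 8))
train_shapes = rng.integers(0, 2, size=(3, 4, 4, 4))

query = train_embeddings[1] + 0.01  # a query close to training example 1
predicted = retrieval_baseline(query, train_embeddings, train_shapes)
assert np.array_equal(predicted, train_shapes[1])
```

If such a memorization-style lookup matches encoder-decoder networks on the standard benchmarks, the benchmarks reward recognition rather than genuine shape reconstruction, which is the paper's central claim.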

Related Material


@InProceedings{Tatarchenko_2019_CVPR,
author = {Tatarchenko, Maxim and Richter, Stephan R. and Ranftl, Rene and Li, Zhuwen and Koltun, Vladlen and Brox, Thomas},
title = {What Do Single-View 3D Reconstruction Networks Learn?},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019}
}