Learning View Selection for 3D Scenes

Yifan Sun, Qixing Huang, Dun-Yu Hsiao, Li Guan, Gang Hua; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 14464-14473

Abstract


Efficient 3D space sampling to represent an underlying 3D object/scene is essential for 3D vision, robotics, and beyond. A standard approach is to explicitly sample a dense collection of views and formulate it as a view selection problem, or, more generally, a set cover problem. In this paper, we introduce a novel approach that avoids dense view sampling. The key idea is to learn a view prediction network and a trainable aggregation module that takes the predicted views as input and outputs an approximation of their generic scores (e.g., surface coverage, viewing angle from surface normals). This methodology allows us to turn the set cover problem (or multi-view representation optimization) into a continuous optimization problem. We then explain how to effectively solve the induced optimization problem using continuation, i.e., aggregating a hierarchy of smoothed scoring modules. Experimental results show that our approach arrives at similar or better solutions with about a 10x speedup in running time compared with standard methods.
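The continuation strategy summarized in the abstract, optimizing view parameters against a hierarchy of progressively sharper smoothed scoring functions, can be illustrated with a small, self-contained toy example. The sketch below is a hypothetical PyTorch illustration under stated assumptions, not the authors' implementation: it replaces the learned view prediction network and trainable aggregation module with directly optimized 2D view positions and an analytic soft-coverage score, and uses a Gaussian-style smoothing temperature tau as the continuation parameter.

# Hypothetical sketch of continuation over smoothed coverage scores.
# The toy 2D scene, the soft-coverage term, and all names are illustrative
# assumptions, not the method described in the paper.
import torch

def soft_coverage(views, surface_points, tau):
    """Differentiable stand-in for a surface-coverage score.

    views:          (V, 2) candidate view positions
    surface_points: (P, 2) points sampled on the scene surface
    tau:            smoothing temperature (large = smooth, small = sharp)
    Returns a scalar in [0, 1]: the mean soft probability that each surface
    point is covered by at least one view.
    """
    d2 = torch.cdist(surface_points, views) ** 2          # (P, V) squared distances
    per_view = torch.exp(-d2 / tau)                        # soft "covered by view v"
    covered = 1.0 - torch.prod(1.0 - per_view, dim=1)      # soft "covered by any view"
    return covered.mean()

def optimize_views(surface_points, num_views=4, taus=(1.0, 0.3, 0.1), steps=200):
    """Continuation: solve a hierarchy of smoothed problems, warm-starting each level."""
    views = torch.randn(num_views, 2, requires_grad=True)
    for tau in taus:                                       # coarse-to-fine smoothing
        opt = torch.optim.Adam([views], lr=0.05)
        for _ in range(steps):
            opt.zero_grad()
            loss = -soft_coverage(views, surface_points, tau)  # maximize coverage
            loss.backward()
            opt.step()
    return views.detach()

if __name__ == "__main__":
    # Toy "scene": points sampled on a unit circle.
    theta = torch.linspace(0, 2 * torch.pi, 256)
    surface = torch.stack([theta.cos(), theta.sin()], dim=1)
    views = optimize_views(surface)
    print("selected views:", views)
    print("final soft coverage:", soft_coverage(views, surface, 0.1).item())

The coarse-to-fine schedule over tau mirrors the continuation idea: early, heavily smoothed levels give useful gradients everywhere, and each solution warm-starts the next, sharper level so the final views approximate a good solution to the original (discrete) coverage objective.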

Related Material


@InProceedings{Sun_2021_CVPR,
    author    = {Sun, Yifan and Huang, Qixing and Hsiao, Dun-Yu and Guan, Li and Hua, Gang},
    title     = {Learning View Selection for 3D Scenes},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {14464-14473}
}