Learning to Sample

Oren Dovrat, Itai Lang, Shai Avidan; The IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2019, pp. 2760-2769

Abstract


Processing large point clouds is a challenging task. Therefore, the data is often sampled to a size that can be processed more easily. The question is: how should the data be sampled? A popular sampling technique is Farthest Point Sampling (FPS). However, FPS is agnostic to the downstream application (classification, retrieval, etc.). The underlying assumption seems to be that minimizing the farthest point distance, as done by FPS, is a good proxy for other objective functions. We show that it is better to learn how to sample. To do that, we propose a deep network to simplify 3D point clouds. The network, termed S-NET, takes a point cloud and produces a smaller point cloud that is optimized for a particular task. The simplified point cloud is not guaranteed to be a subset of the original point cloud. Therefore, we match it to a subset of the original points in a post-processing step. We contrast our approach with FPS by experimenting on two standard datasets and show significantly better results for a variety of applications. Our code is publicly available.
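For context, the FPS baseline and the nearest-neighbor matching post-processing step mentioned above can be sketched in a few lines of NumPy. This is an illustrative sketch, not the authors' released code; the function names are our own:

```python
import numpy as np

def farthest_point_sampling(points, k):
    """Iteratively pick the point farthest from all points chosen so far.

    points: (N, 3) array; returns indices of k sampled points.
    """
    n = points.shape[0]
    selected = np.zeros(k, dtype=np.int64)
    min_dist = np.full(n, np.inf)  # distance of each point to the selected set
    selected[0] = 0                # start from an arbitrary seed point
    for i in range(1, k):
        d = np.linalg.norm(points - points[selected[i - 1]], axis=1)
        min_dist = np.minimum(min_dist, d)
        selected[i] = np.argmax(min_dist)  # farthest remaining point
    return selected

def match_to_input(simplified, original):
    """Snap each generated point to its nearest neighbor in the original cloud,
    so the output is a subset of the input points (the paper's post-processing idea).
    """
    d = np.linalg.norm(simplified[:, None, :] - original[None, :, :], axis=2)
    return original[np.argmin(d, axis=1)]
```

A learned simplifier like S-NET would replace `farthest_point_sampling` with a network forward pass; `match_to_input` then projects its free-floating output points back onto the input set.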

Related Material


@InProceedings{Dovrat_2019_CVPR,
author = {Dovrat, Oren and Lang, Itai and Avidan, Shai},
title = {Learning to Sample},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
month = {June},
year = {2019},
pages = {2760-2769}
}