MetaSets: Meta-Learning on Point Sets for Generalizable Representations

Chao Huang, Zhangjie Cao, Yunbo Wang, Jianmin Wang, Mingsheng Long; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2021, pp. 8863-8872

Abstract


Deep learning techniques for point clouds have achieved strong performance on a range of 3D vision tasks. However, annotating large-scale point sets is costly, making it critical to learn generalizable representations that transfer well across different point sets. In this paper, we study a new problem of 3D Domain Generalization (3DDG), whose goal is to generalize a model to other unseen domains of point clouds without any access to them during training. The problem is challenging because of the substantial geometry shift from simulated to real data, which causes most existing 3D models to underperform: they overfit the complete geometries of the source domain. We propose to tackle this problem with MetaSets, which meta-learns point cloud representations from a set of classification tasks on carefully-designed transformed point sets containing specific geometry priors. The learned representations are more generalizable to various unseen domains with different geometries. We design two benchmarks for Sim-to-Real transfer of 3D point clouds. Experimental results show that MetaSets outperforms existing 3D deep learning methods by large margins.
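To make the idea of "classification tasks on carefully-designed transformed point sets" concrete, the sketch below builds such transformed copies of a point cloud with numpy. The two transforms shown (dropping points hidden from a random viewpoint to mimic self-occlusion, and distance-biased resampling to mimic non-uniform scanner density) are illustrative geometry priors chosen for this sketch; they are not taken verbatim from the paper, and the function names (`drop_hidden_points`, `nonuniform_resample`, `make_meta_tasks`) are hypothetical.

```python
import numpy as np

def drop_hidden_points(points, keep_ratio=0.7, rng=None):
    """Illustrative occlusion prior: keep only the fraction of points
    nearest to a randomly chosen viewing direction, as a real scan
    would miss the far side of an object."""
    rng = np.random.default_rng() if rng is None else rng
    view = rng.normal(size=3)
    view /= np.linalg.norm(view)
    depth = points @ view                    # signed depth along the view axis
    k = max(1, int(len(points) * keep_ratio))
    return points[np.argsort(depth)[:k]]     # nearest k points survive

def nonuniform_resample(points, n_out=512, rng=None):
    """Illustrative density prior: resample points with probability
    inversely proportional to distance from a random center, producing
    the non-uniform density typical of real sensors."""
    rng = np.random.default_rng() if rng is None else rng
    center = rng.normal(size=3)
    w = 1.0 / (1e-6 + np.linalg.norm(points - center, axis=1))
    idx = rng.choice(len(points), size=n_out, replace=True, p=w / w.sum())
    return points[idx]

TRANSFORMS = [drop_hidden_points, nonuniform_resample]

def make_meta_tasks(cloud, rng=None):
    """Apply each geometry prior to one source cloud; each transformed
    copy defines a separate classification task for the meta-learner."""
    return [t(cloud, rng=rng) for t in TRANSFORMS]
```

In a full pipeline, each transformed set would feed a classification task, and an episodic meta-learning loop (meta-train on some tasks, meta-test on held-out ones) would optimize the shared point-cloud encoder; that loop is omitted here for brevity.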

Related Material


[pdf] [supp]
[bibtex]
@InProceedings{Huang_2021_CVPR,
    author    = {Huang, Chao and Cao, Zhangjie and Wang, Yunbo and Wang, Jianmin and Long, Mingsheng},
    title     = {MetaSets: Meta-Learning on Point Sets for Generalizable Representations},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {8863-8872}
}