Cross-Modal Self-Training: Aligning Images and Pointclouds to learn Classification without Labels

Amaya Dharmasiri, Muzammal Naseer, Salman Khan, Fahad Shahbaz Khan; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024, pp. 708-717

Abstract


Large-scale 2D vision-language models such as CLIP can be aligned with a 3D encoder to learn generalizable (open-vocabulary) 3D vision models. However, current methods require supervised pre-training for such alignment, and the performance of such 3D zero-shot models remains sub-optimal for real-world adaptation. In this work, we propose an optimization framework, Cross-MoST: Cross-Modal Self-Training, to improve the label-free classification performance of a zero-shot 3D vision model by simply leveraging unlabeled 3D data and their accompanying 2D views. We propose a student-teacher framework to simultaneously process 2D views and 3D point clouds and generate joint pseudo-labels to train a classifier and guide cross-modal feature alignment. Thereby, we demonstrate that 2D vision-language models such as CLIP can be used to complement 3D representation learning to improve classification performance without the need for expensive class annotations. Using synthetic and real-world 3D datasets, we further demonstrate that Cross-MoST enables efficient cross-modal knowledge exchange, resulting in both image and point cloud modalities learning from each other's rich representations.
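To make the student-teacher idea concrete, the sketch below illustrates one way joint pseudo-labels from a 2D and a 3D teacher branch could supervise both student branches. This is only a minimal illustration of the general technique described in the abstract, not the authors' implementation: the fusion rule (probability averaging), the confidence threshold tau, and all function names are assumptions.

# Minimal sketch (assumed details, not the paper's code) of joint pseudo-label
# generation from 2D and 3D teacher logits in a student-teacher setup.
import torch
import torch.nn.functional as F

@torch.no_grad()
def joint_pseudo_labels(image_logits: torch.Tensor,
                        point_logits: torch.Tensor,
                        tau: float = 0.7):
    """Fuse teacher predictions from the 2D and 3D branches into one pseudo-label.

    image_logits, point_logits: (batch, num_classes) teacher logits.
    tau: confidence threshold below which samples are masked out (assumed value).
    """
    # Average the two modalities' class probabilities (one plausible fusion rule).
    probs = 0.5 * (F.softmax(image_logits, dim=-1) + F.softmax(point_logits, dim=-1))
    conf, labels = probs.max(dim=-1)
    mask = conf >= tau  # keep only confident joint predictions
    return labels, mask

def student_loss(student_img_logits, student_pc_logits, labels, mask):
    """The same pseudo-label supervises both student branches, which is one way
    to encourage cross-modal feature alignment."""
    if mask.sum() == 0:
        # No confident samples in this batch: return a zero loss on the graph.
        return student_img_logits.sum() * 0.0
    ce_img = F.cross_entropy(student_img_logits[mask], labels[mask])
    ce_pc = F.cross_entropy(student_pc_logits[mask], labels[mask])
    return ce_img + ce_pc

In practice the teachers would typically be exponential-moving-average copies of the students, but that choice, like the rest of this sketch, is an assumption rather than a detail stated in the abstract.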

Related Material


@InProceedings{Dharmasiri_2024_CVPR,
  author    = {Dharmasiri, Amaya and Naseer, Muzammal and Khan, Salman and Khan, Fahad Shahbaz},
  title     = {Cross-Modal Self-Training: Aligning Images and Pointclouds to learn Classification without Labels},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2024},
  pages     = {708-717}
}