@InProceedings{J_2025_WACV,
    author    = {J, Sanjay S and J, Akash and Rajan, Sreehari and A Shajahan, Dimple and Sharma, Charu},
    title     = {Adversarial Learning Based Knowledge Distillation on 3D Point Clouds},
    booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
    month     = {February},
    year      = {2025},
    pages     = {2932-2941}
}
Adversarial Learning Based Knowledge Distillation on 3D Point Clouds
Abstract
Significant improvements in point cloud representation learning have increased its applicability in many real-life applications, creating a need for lightweight, better-performing models. One widely proposed efficient method is knowledge distillation, where a lightweight model uses knowledge from large models. Very few works exist on distilling knowledge for point clouds, and most focus on cross-modal approaches that are expensive to train. This paper proposes PointKAD, an adversarial knowledge distillation framework for point cloud-based tasks. PointKAD includes adversarial feature distillation and response distillation, using discriminators to extract and distill the representations of feature maps and logits. We conduct extensive experimental studies on both synthetic (ModelNet40) and real (ScanObjectNN) datasets to show that PointKAD achieves state-of-the-art results compared to existing knowledge distillation methods for point cloud classification. Additionally, we present results on the part segmentation task, highlighting the efficacy of the PointKAD framework. Our experiments further reveal that PointKAD is capable of transferring knowledge across different tasks and datasets, showcasing its versatility. Furthermore, we demonstrate that PointKAD can be applied to a cross-modal training setup, achieving competitive performance with cross-modal point cloud methods for classification.