KD-DETR: Knowledge Distillation for Detection Transformer with Consistent Distillation Points Sampling

Yu Wang, Xin Li, Shengzhao Weng, Gang Zhang, Haixiao Yue, Haocheng Feng, Junyu Han, Errui Ding; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 16016-16025

Abstract


DETR is a novel end-to-end transformer-based object detector that significantly outperforms classic detectors when scaled up. In this paper, we focus on compressing DETR with knowledge distillation. While knowledge distillation has been well studied in classic detectors, there is little research on how to make it work effectively on DETR. We first provide experimental and theoretical analysis to show that the main challenge in DETR distillation is the lack of consistent distillation points. Distillation points are the inputs that correspond to the predictions the student mimics; they take different forms in CNN detectors and in DETR, and reliable distillation requires sufficient distillation points that are consistent between teacher and student. Based on this observation, we propose the first general knowledge distillation paradigm for DETR (KD-DETR), with consistent distillation points sampling for both homogeneous and heterogeneous distillation. Specifically, we decouple the detection and distillation tasks by introducing a set of specialized object queries to construct distillation points for DETR. We further propose a general-to-specific distillation points sampling strategy to explore the extensibility of KD-DETR. Extensive experiments validate the effectiveness and generalization of KD-DETR. On both single-scale DAB-DETR and multi-scale Deformable DETR and DINO, KD-DETR boosts the performance of the student model by 2.6%-5.2%. We further extend KD-DETR to heterogeneous distillation and achieve a 2.1% improvement by distilling knowledge from DINO into Faster R-CNN with ResNet-50, which is comparable with homogeneous distillation methods.
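To make the idea of consistent distillation points concrete, below is a minimal PyTorch-style sketch. It assumes hypothetical teacher(images, queries) and student(images, queries) DETR-like models whose decoders accept an explicit set of shared object queries and return per-query class logits and boxes; all names, signatures, and loss weights here are illustrative assumptions, not the authors' implementation.

    # Minimal sketch: distillation with consistent distillation points.
    # Assumption: teacher/student are hypothetical DETR-style models whose
    # forward pass takes a set of object queries and returns, per query,
    # (class_logits, boxes). This is NOT the paper's released code.
    import torch
    import torch.nn.functional as F

    def kd_detr_loss(teacher, student, images, distill_queries, tau=2.0):
        # Feed the SAME object queries (the distillation points) to both
        # models, so their per-query outputs correspond one-to-one and no
        # matching between teacher and student predictions is needed.
        with torch.no_grad():
            t_logits, t_boxes = teacher(images, queries=distill_queries)
        s_logits, s_boxes = student(images, queries=distill_queries)

        # Classification distillation: temperature-scaled KL divergence
        # between teacher and student per-query class distributions.
        cls_loss = F.kl_div(
            F.log_softmax(s_logits / tau, dim=-1),
            F.softmax(t_logits / tau, dim=-1),
            reduction="batchmean",
        ) * (tau * tau)

        # Regression distillation: L1 between the boxes the two models
        # predict for the same query.
        box_loss = F.l1_loss(s_boxes, t_boxes)
        return cls_loss + box_loss

Because both models receive identical queries, the per-query outputs line up by index. The sketch deliberately omits how distill_queries are drawn (the paper's general-to-specific sampling strategy) and how this loss is weighted against the ordinary detection loss; see the paper for those details.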

Related Material


@InProceedings{Wang_2024_CVPR,
    author    = {Wang, Yu and Li, Xin and Weng, Shengzhao and Zhang, Gang and Yue, Haixiao and Feng, Haocheng and Han, Junyu and Ding, Errui},
    title     = {KD-DETR: Knowledge Distillation for Detection Transformer with Consistent Distillation Points Sampling},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {16016-16025}
}