VLM-PL: Advanced Pseudo Labeling Approach for Class Incremental Object Detection via Vision-Language Model

Junsu Kim, Yunhoe Ku, Jihyeon Kim, Junuk Cha, Seungryul Baek; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2024, pp. 4170-4181

Abstract


In the field of Class Incremental Object Detection (CIOD), creating models that can continuously learn like humans is a major challenge. Pseudo-labeling methods, although initially powerful, struggle with multi-scenario incremental learning due to their tendency to forget past knowledge. To overcome this, we introduce a new approach called Vision-Language Model assisted Pseudo-Labeling (VLM-PL). This technique uses a Vision-Language Model (VLM) to verify the correctness of pseudo ground-truths (GTs) without requiring additional model training. VLM-PL starts by deriving pseudo GTs from a pre-trained detector. Then, we generate a custom query for each pseudo GT using carefully designed prompt templates that combine image and text features, allowing the VLM to classify its correctness through its responses. Furthermore, VLM-PL integrates the refined pseudo GTs with the real GTs of upcoming training, effectively combining new and old knowledge. Extensive experiments conducted on the Pascal VOC and MS COCO datasets not only highlight VLM-PL's exceptional performance in multi-scenario settings but also demonstrate its effectiveness in dual-scenario settings, achieving state-of-the-art results in both.
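
The abstract outlines a verification loop: pseudo GTs come from the previously trained detector, each one is turned into a prompt, and the VLM's answer decides whether the box is kept before being merged with the real GTs of the new classes. The following is a minimal illustrative sketch of that loop, not the authors' released code; all names (PseudoBox, build_prompt, vlm_yes_no) and the exact prompt template are hypothetical placeholders.

# Hedged sketch of VLM-assisted pseudo-label verification.
# Assumptions: pseudo GTs are provided by an old detector, and a caller
# supplies a yes/no VLM callback; neither is defined by the paper text here.
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class PseudoBox:
    image_id: str
    bbox: Tuple[float, float, float, float]  # (x1, y1, x2, y2)
    label: str                                # class predicted by the old detector
    score: float                              # detector confidence


def build_prompt(box: PseudoBox) -> str:
    # Placeholder prompt template combining the image region and class name;
    # the paper's actual templates are more carefully designed.
    return (f"Does the highlighted region of image {box.image_id} "
            f"contain a {box.label}? Answer yes or no.")


def verify_pseudo_gts(
    pseudo_gts: List[PseudoBox],
    vlm_yes_no: Callable[[str, PseudoBox], bool],
) -> List[PseudoBox]:
    """Keep only the pseudo ground-truths that the VLM confirms as correct."""
    kept = []
    for box in pseudo_gts:
        prompt = build_prompt(box)
        if vlm_yes_no(prompt, box):  # VLM judges the crop described by the prompt
            kept.append(box)
    return kept


if __name__ == "__main__":
    # Dummy VLM callback that accepts confident boxes, just to make the sketch run.
    dummy_vlm = lambda prompt, box: box.score > 0.5
    demo = [
        PseudoBox("000001.jpg", (10, 20, 100, 200), "dog", 0.9),
        PseudoBox("000001.jpg", (5, 5, 40, 60), "cat", 0.3),
    ]
    verified = verify_pseudo_gts(demo, dummy_vlm)
    # In the full method, these verified pseudo GTs would be combined with the
    # real GTs of the upcoming task before training continues.
    print([b.label for b in verified])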

Related Material


[bibtex]
@InProceedings{Kim_2024_CVPR,
  author    = {Kim, Junsu and Ku, Yunhoe and Kim, Jihyeon and Cha, Junuk and Baek, Seungryul},
  title     = {VLM-PL: Advanced Pseudo Labeling Approach for Class Incremental Object Detection via Vision-Language Model},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month     = {June},
  year      = {2024},
  pages     = {4170-4181}
}