Cross-Domain Multi-Modal Few-Shot Object Detection via Rich Text
Zeyu Shangguan, Daniel Seita, Mohammad Rostami; Proceedings of the Winter Conference on Applications of Computer Vision (WACV), 2025, pp. 6570-6580
Abstract
Cross-modal feature extraction and integration have led to steady performance improvements in few-shot learning tasks. However, existing multi-modal object detection (MM-OD) methods degrade when facing significant domain shift and insufficient samples. We hypothesize that rich text information can more effectively help the model build a knowledge relationship between a visual instance and its language description, and can thereby help mitigate domain shift. Specifically, we study cross-domain few-shot generalization of MM-OD (CDMM-FSOD) and propose a meta-learning-based multi-modal few-shot object detection method that utilizes rich text semantic information as an auxiliary modality to achieve domain adaptation. Our proposed neural network contains a multi-modal feature aggregation module that aligns the vision and language support feature embeddings, and a rich text semantic rectify module that utilizes bidirectional text feature generation to reinforce multi-modal feature alignment and thus enhance the model's language understanding capability. We evaluate our model on standard cross-domain object detection datasets and demonstrate that our approach considerably outperforms existing FSOD methods. Our implementation is publicly available at https://github.com/zshanggu/CDMM
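To make the two components named in the abstract concrete, the sketch below illustrates one plausible reading of them: fusing vision support features with rich-text embeddings via cross-attention (feature aggregation), and regenerating text features from the fused output to form a consistency loss (a stand-in for the rectify module's bidirectional text feature generation). This is not the authors' implementation (see the linked repository for that); the tensor shapes, module name MultiModalAggregation, and the MSE-based regeneration loss are illustrative assumptions.

# Minimal sketch, assuming hypothetical shapes and modules; not the paper's actual code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiModalAggregation(nn.Module):
    """Illustrative fusion of vision support features with rich-text token embeddings."""
    def __init__(self, vis_dim=256, txt_dim=512, hidden=256, heads=8):
        super().__init__()
        self.vis_proj = nn.Linear(vis_dim, hidden)
        self.txt_proj = nn.Linear(txt_dim, hidden)
        # Vision queries attend over rich-text tokens (cross-attention aggregation).
        self.cross_attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        # Regenerate a text-like feature from the fused output ("bidirectional" check).
        self.text_decoder = nn.Linear(hidden, hidden)

    def forward(self, vis_feats, txt_feats):
        # vis_feats: (B, Nv, vis_dim) support-instance features (assumed shape)
        # txt_feats: (B, Nt, txt_dim) rich-text token embeddings (assumed shape)
        q = self.vis_proj(vis_feats)
        kv = self.txt_proj(txt_feats)
        fused, _ = self.cross_attn(q, kv, kv)        # language-conditioned vision features
        regenerated_txt = self.text_decoder(fused)   # text features regenerated from the fusion
        # Rectification-style loss: pooled regenerated text should match pooled original text.
        rect_loss = F.mse_loss(regenerated_txt.mean(dim=1), kv.mean(dim=1))
        return fused, rect_loss

if __name__ == "__main__":
    model = MultiModalAggregation()
    vis = torch.randn(2, 10, 256)   # e.g., RoI features of support instances
    txt = torch.randn(2, 32, 512)   # e.g., rich-text description embeddings
    fused, loss = model(vis, txt)
    print(fused.shape, loss.item())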
Related Material
[pdf]
[supp]
[arXiv]
[bibtex]
@InProceedings{Shangguan_2025_WACV,
  author    = {Shangguan, Zeyu and Seita, Daniel and Rostami, Mohammad},
  title     = {Cross-Domain Multi-Modal Few-Shot Object Detection via Rich Text},
  booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
  month     = {February},
  year      = {2025},
  pages     = {6570-6580}
}