Hyperbolic Learning with Synthetic Captions for Open-World Detection
Abstract
Open-world detection poses significant challenges, as it requires detecting any object using either object class labels or free-form text. Existing related works often rely on large-scale manually annotated caption datasets for training, which are extremely expensive to collect. Instead, we propose to transfer knowledge from vision-language models (VLMs) to enrich open-vocabulary descriptions automatically. Specifically, we bootstrap dense synthetic captions using pre-trained VLMs to provide rich descriptions of different regions in images, and we incorporate these captions to train a novel detector that generalizes to novel concepts. To mitigate the noise caused by hallucination in synthetic captions, we also propose a novel hyperbolic vision-language learning approach that imposes a hierarchy between visual and caption embeddings. We call our detector "HyperLearner". We conduct extensive experiments on a wide variety of open-world detection benchmarks (COCO, LVIS, Object Detection in the Wild, RefCOCO), and our results show that our model consistently outperforms existing state-of-the-art methods, such as GLIP, GLIPv2, and Grounding DINO, when using the same backbone.
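The abstract does not spell out the hyperbolic formulation, so the PyTorch sketch below is purely illustrative of one common way to impose a hierarchy between caption and region embeddings on the Poincare ball: matched pairs are pulled together by geodesic distance, and captions (more generic) are encouraged to lie closer to the origin than their regions (more specific). All names here (expmap0, poincare_dist, hyperbolic_alignment_loss) and the margin-based hierarchy term are assumptions for illustration, not the paper's actual loss.

    # Illustrative sketch only; not the HyperLearner loss from the paper.
    import torch

    def expmap0(x: torch.Tensor, c: float = 1.0) -> torch.Tensor:
        """Map Euclidean features onto the Poincare ball (exp map at the origin)."""
        norm = x.norm(dim=-1, keepdim=True).clamp_min(1e-6)
        return torch.tanh(c ** 0.5 * norm) * x / (c ** 0.5 * norm)

    def poincare_dist(u: torch.Tensor, v: torch.Tensor,
                      c: float = 1.0, eps: float = 1e-6) -> torch.Tensor:
        """Geodesic distance on the Poincare ball with curvature -c."""
        sq = ((u - v) ** 2).sum(-1)
        du = (1 - c * (u ** 2).sum(-1)).clamp_min(eps)
        dv = (1 - c * (v ** 2).sum(-1)).clamp_min(eps)
        arg = 1 + 2 * c * sq / (du * dv)
        return torch.acosh(arg.clamp_min(1 + eps)) / c ** 0.5

    def hyperbolic_alignment_loss(region_feats: torch.Tensor,
                                  caption_feats: torch.Tensor,
                                  margin: float = 0.1) -> torch.Tensor:
        """Align matched region/caption pairs in hyperbolic space, and encode a
        caption -> region hierarchy by penalizing captions whose norm exceeds
        that of their matched region (generic concepts sit nearer the origin)."""
        r = expmap0(region_feats)
        t = expmap0(caption_feats)
        align = poincare_dist(r, t).mean()
        hier = torch.relu(t.norm(dim=-1) - r.norm(dim=-1) + margin).mean()
        return align + hier

Because distances near the ball's boundary grow rapidly, embedding generic text near the origin and specific visual regions farther out is a standard way to encode partial-order structure in hyperbolic space; the hinge term above is one simple proxy for that ordering.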
Related Material
[pdf] [supp] [arXiv]

BibTeX:
@InProceedings{Kong_2024_CVPR,
    author    = {Kong, Fanjie and Chen, Yanbei and Cai, Jiarui and Modolo, Davide},
    title     = {Hyperbolic Learning with Synthetic Captions for Open-World Detection},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024},
    pages     = {16762-16771}
}