@InProceedings{Santamaria_2025_WACV,
  author    = {Santamaria, Julian D. and Isaza, Claudia and Giraldo, Jhony H.},
  title     = {CATALOG: A Camera Trap Language-Guided Contrastive Learning Model},
  booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
  month     = {February},
  year      = {2025},
  pages     = {1197-1206}
}
CATALOG: A Camera Trap Language-Guided Contrastive Learning Model
Abstract
Foundation Models (FMs) have been successful in various computer vision tasks like image classification, object detection, and image segmentation. However, these tasks remain challenging when these models are tested on datasets with different distributions from the training dataset, a problem known as domain shift. This is especially problematic for recognizing animal species in camera-trap images, where we have variability in factors like lighting, camouflage, and occlusions. In this paper, we propose the Camera Trap Language-guided Contrastive Learning (CATALOG) model to address these issues. Our approach combines multiple FMs to extract visual and textual features from camera-trap data and uses a contrastive loss function to train the model. We evaluate CATALOG on two benchmark datasets and show that it outperforms previous state-of-the-art methods in camera-trap image recognition, especially when the training and testing data have different animal species or come from different geographical areas. Our approach demonstrates the potential of using FMs in combination with multi-modal fusion and contrastive learning for addressing domain shifts in camera-trap image recognition. The code of CATALOG is publicly available at https://github.com/Julian075/CATALOG.
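The abstract mentions training with a contrastive loss over paired visual and textual features. As a rough illustration only (not CATALOG's actual objective, which is defined in the paper), a minimal NumPy sketch of a CLIP-style symmetric contrastive loss over a batch of image/text embedding pairs might look like this; the function name, temperature value, and toy data are all assumptions for the example:

```python
import numpy as np

def contrastive_loss(image_feats, text_feats, temperature=0.07):
    """CLIP-style symmetric contrastive loss for a batch of paired,
    L2-normalized image and text embeddings (one row per pair)."""
    # Scaled cosine-similarity logits between every image/text pair.
    logits = image_feats @ text_feats.T / temperature
    n = logits.shape[0]

    # Cross-entropy with the matching pair (the diagonal) as target.
    def cross_entropy(l):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(n), np.arange(n)].mean()

    # Average the image-to-text and text-to-image directions.
    return 0.5 * (cross_entropy(logits) + cross_entropy(logits.T))

# Toy check: perfectly aligned pairs score lower than mismatched ones.
rng = np.random.default_rng(0)
feats = rng.normal(size=(4, 8))
feats /= np.linalg.norm(feats, axis=1, keepdims=True)
loss_aligned = contrastive_loss(feats, feats)
loss_mismatched = contrastive_loss(feats, feats[::-1])
```

The symmetric form pulls each image embedding toward its paired text embedding while pushing it away from the other captions in the batch, which is the general mechanism the abstract invokes for fusing the FMs' visual and textual features.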