AgroGPT: Efficient Agricultural Vision-Language Model with Expert Tuning
Muhammad Awais, Ali Husain Salem Abdulla Alharthi, Amandeep Kumar, Hisham Cholakkal, Rao Muhammad Anwer
Abstract
Significant progress has been made in advancing large multimodal conversational models (LMMs), capitalizing on vast repositories of image-text data available online. Despite this progress, these models often encounter substantial domain gaps, hindering their ability to engage in complex conversations across new domains. Recent efforts have aimed to mitigate this issue, albeit relying on domain-specific image-text data to curate instruction-tuning data. However, many domains, such as agriculture, lack such vision-language data. In this work, we propose an approach to construct instruction-tuning data that harnesses vision-only data for the agriculture domain. We utilize diverse agricultural datasets spanning multiple domains, curate class-specific information, and employ large language models (LLMs) to construct an expert-tuning set, resulting in AgroInstruct, a 70k expert-tuning dataset. Subsequently, we expert-tuned and created AgroGPT, an efficient LMM that can hold complex agriculture-related conversations and provide useful insights. We also develop AgroEvals for evaluation and compare AgroGPT's performance with large open and closed-source models. AgroGPT excels at identifying fine-grained agricultural concepts, can act as an agriculture expert, and provides helpful information for multimodal agriculture questions. The code, datasets, and models are available at https://github.com/awaisrauf/agroGPT.
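The abstract only outlines the data pipeline at a high level. The snippet below is a hedged sketch, not the authors' implementation: it assumes each vision-only image contributes just a class label, pairs that label with curated class-specific notes, and prompts an LLM to write an expert-style question and answer in a conversation-record format. All names here (build_prompt, make_instruction_sample, call_llm) and the output schema are illustrative assumptions, not taken from the paper or repository.

```python
# Minimal sketch of building instruction-tuning samples from vision-only data:
# a class label plus curated class-specific notes are turned into a text prompt,
# and an LLM writes an expert-style Q&A about the (unseen) image.
# `call_llm` is a hypothetical placeholder for whatever LLM API is used.
import json
from typing import Callable


def build_prompt(class_name: str, class_notes: str) -> str:
    """Compose a text-only prompt from the class label and curated notes."""
    return (
        f"You are an agriculture expert. An image shows: {class_name}.\n"
        f"Background notes: {class_notes}\n"
        "Write a question a farmer might ask about this image and a detailed, "
        "expert answer. Return JSON with keys 'question' and 'answer'."
    )


def make_instruction_sample(image_path: str,
                            class_name: str,
                            class_notes: str,
                            call_llm: Callable[[str], str]) -> dict:
    """Turn one labelled image into a conversation record (format assumed)."""
    reply = json.loads(call_llm(build_prompt(class_name, class_notes)))
    return {
        "image": image_path,
        "conversations": [
            {"from": "human", "value": "<image>\n" + reply["question"]},
            {"from": "gpt", "value": reply["answer"]},
        ],
    }


if __name__ == "__main__":
    # Stub LLM so the sketch runs without network access.
    def fake_llm(prompt: str) -> str:
        return json.dumps({
            "question": "What disease is visible on these leaves?",
            "answer": "The lesions are consistent with early blight; remove "
                      "affected foliage and rotate crops next season.",
        })

    sample = make_instruction_sample("tomato_early_blight/001.jpg",
                                     "Tomato leaf with early blight",
                                     "Early blight is caused by Alternaria solani.",
                                     fake_llm)
    print(json.dumps(sample, indent=2))
```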
Related Material

[pdf] [supp] [arXiv] [bibtex]

@InProceedings{Awais_2025_WACV,
    author    = {Awais, Muhammad and Alharthi, Ali Husain Salem Abdulla and Kumar, Amandeep and Cholakkal, Hisham and Anwer, Rao Muhammad},
    title     = {AgroGPT: Efficient Agricultural Vision-Language Model with Expert Tuning},
    booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
    month     = {February},
    year      = {2025},
    pages     = {5687-5696}
}