PC-GZSL: Prior Correction for Generalized Zero Shot Learning

S Divakar Bhat, Amit More, Mudit Soni, Bhuvan Aggarwal; Proceedings of the Winter Conference on Applications of Computer Vision (WACV), 2025, pp. 7173-7183

Abstract


Generalized Zero Shot Learning (GZSL) aims at achieving good accuracy on both seen and unseen classes by relying on information acquired from auxiliary attributes. Existing approaches have devised many frameworks to make this knowledge transfer more efficient and informative. Despite their effectiveness in boosting overall performance, there has always been a strong bias in the model towards the seen classes, which makes the GZSL problem more challenging. The effect of this bias on model performance has never been properly explored. We observe that GZSL algorithms in the literature have an evident bias towards the seen classes. Further, we show that techniques like calibrated stacking fall short of resolving this conflict between the seen and unseen classes effectively. In this work, we analyze and develop a logit-adjustment approach in the GZSL setting and propose a simple yet effective method to remove the bias from trained models in a post-hoc manner. Moreover, as a consequence of the post-hoc nature of the proposed approach, there is no additional training cost. We exhaustively compare the proposed method on both embedding-based and generative-based GZSL frameworks, surpassing the SOTA results by 3.1%, 4.6%, and 3.1% on the CUB, SUN, and AwA2 datasets. We also present a theoretical analysis showing the effectiveness of the proposed approach.
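To make the abstract's contrast concrete, the sketch below illustrates post-hoc score correction on pre-computed GZSL logits: calibrated stacking (subtracting a fixed constant from seen-class scores) versus a generic logit-adjustment-style correction based on an assumed class prior. The function names, the temperature tau, and the placeholder prior are illustrative assumptions; the exact correction used by PC-GZSL is defined in the full paper.

```python
import numpy as np

def calibrated_stacking(logits, seen_mask, gamma):
    """Baseline mentioned in the abstract: subtract a calibration constant
    gamma from the scores of seen classes."""
    adjusted = logits.copy()
    adjusted[:, seen_mask] -= gamma
    return adjusted

def prior_corrected_logits(logits, class_prior, tau=1.0):
    """Generic post-hoc logit adjustment: subtract tau * log(prior) so that
    classes over-represented during training (the seen classes) are
    down-weighted at prediction time. Illustrative sketch only; the exact
    PC-GZSL correction is given in the paper."""
    return logits - tau * np.log(class_prior + 1e-12)

# Hypothetical usage on pre-computed test logits.
num_seen, num_unseen = 150, 50               # e.g. a CUB-like class split
num_classes = num_seen + num_unseen
rng = np.random.default_rng(0)
logits = rng.normal(size=(8, num_classes))   # placeholder model outputs

seen_mask = np.zeros(num_classes, dtype=bool)
seen_mask[:num_seen] = True

# Assumed prior: seen classes carry most of the training mass,
# unseen classes get a small constant mass.
class_prior = np.where(seen_mask, 1.0 / num_seen, 1e-3)
class_prior /= class_prior.sum()

preds = prior_corrected_logits(logits, class_prior).argmax(axis=1)
```

Because the correction is applied only to the trained model's output scores, it adds no training cost, which is the post-hoc property the abstract emphasizes.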

Related Material


[pdf] [supp]
[bibtex]
@InProceedings{Bhat_2025_WACV,
    author    = {Bhat, S Divakar and More, Amit and Soni, Mudit and Aggarwal, Bhuvan},
    title     = {PC-GZSL: Prior Correction for Generalized Zero Shot Learning},
    booktitle = {Proceedings of the Winter Conference on Applications of Computer Vision (WACV)},
    month     = {February},
    year      = {2025},
    pages     = {7173-7183}
}