Generative Modeling of Class Probability for Multi-Modal Representation Learning

JungKyoo Shin, Bumsoo Kim, Eunwoo Kim; Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR), 2025, pp. 20737-20746

Abstract


Multi-modal understanding plays a crucial role in artificial intelligence by enabling models to jointly interpret inputs from different modalities. However, conventional approaches such as contrastive learning often struggle with modality discrepancies, leading to potential misalignments. In this paper, we propose a novel class anchor alignment approach that leverages class probability distributions for multi-modal representation learning. Our method, Class-anchor-ALigned generative Modeling (CALM), encodes class anchors as prompts to generate and align class probability distributions for each modality, enabling more flexible alignment. Furthermore, we introduce a cross-modal probabilistic variational autoencoder to model uncertainty in the alignment, enhancing the ability to capture deeper relationships between modalities and data variations. Extensive experiments on four benchmark datasets demonstrate that our approach significantly outperforms state-of-the-art methods, especially in out-of-domain evaluations. This highlights its superior generalization capabilities in multi-modal representation learning.
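The abstract's core idea — projecting each modality onto a shared set of class anchors and aligning the resulting class probability distributions rather than raw embeddings — can be illustrated with a minimal sketch. This is not the authors' implementation; the anchor matrix, temperature, and symmetric-KL objective below are illustrative assumptions, and the generative VAE component is omitted.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def class_probabilities(features, class_anchors, temperature=0.07):
    """Map features to a distribution over classes via cosine
    similarity to class-anchor embeddings (illustrative stand-in
    for the encoded class-anchor prompts described in the paper)."""
    f = features / np.linalg.norm(features, axis=-1, keepdims=True)
    a = class_anchors / np.linalg.norm(class_anchors, axis=-1, keepdims=True)
    return softmax(f @ a.T / temperature)

def symmetric_kl(p, q, eps=1e-8):
    # symmetric KL divergence between two class distributions;
    # an assumed alignment objective, not taken from the paper
    kl_pq = np.sum(p * np.log((p + eps) / (q + eps)), axis=-1)
    kl_qp = np.sum(q * np.log((q + eps) / (p + eps)), axis=-1)
    return 0.5 * (kl_pq + kl_qp)

# Toy data: 5 class anchors, 4 paired image/text features in a 16-d space.
rng = np.random.default_rng(0)
anchors = rng.normal(size=(5, 16))
img_feats = rng.normal(size=(4, 16))
txt_feats = img_feats + 0.1 * rng.normal(size=(4, 16))  # noisy pair

p_img = class_probabilities(img_feats, anchors)
p_txt = class_probabilities(txt_feats, anchors)
align_loss = symmetric_kl(p_img, p_txt).mean()
```

Aligning in class-probability space rather than embedding space is what the abstract credits for the added flexibility: the two modalities need only agree on *which classes* an input resembles, not on a shared embedding geometry.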

Related Material


[pdf] [arXiv]
[bibtex]
@InProceedings{Shin_2025_CVPR,
    author    = {Shin, JungKyoo and Kim, Bumsoo and Kim, Eunwoo},
    title     = {Generative Modeling of Class Probability for Multi-Modal Representation Learning},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
    month     = {June},
    year      = {2025},
    pages     = {20737-20746}
}