TriCoLo: Trimodal Contrastive Loss for Text To Shape Retrieval

Yue Ruan, Han-Hung Lee, Yiming Zhang, Ke Zhang, Angel X. Chang; Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), 2024, pp. 5815-5825

Abstract


Text-to-shape retrieval is an increasingly relevant problem with the growth of 3D shape data. Recent work on contrastive losses for learning joint embeddings over multimodal data has been successful at tasks such as retrieval and classification. Thus far, work on joint representation learning for 3D shapes and text has focused on improving embeddings through complex attention mechanisms between representations or through multi-task learning. We propose a trimodal learning scheme over text, multi-view images, and 3D shape voxels, and show that with large-batch contrastive learning we achieve good performance on text-to-shape retrieval without complex attention mechanisms or losses. Our experiments serve as a foundation for follow-up work on building trimodal text-image-shape embeddings.
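
To make the objective concrete, the following is a minimal PyTorch sketch of a trimodal contrastive loss: a symmetric InfoNCE term for each pair of modalities, summed. The embedding names (text_emb, image_emb, voxel_emb), the temperature value, and the pairwise summation scheme are illustrative assumptions, not necessarily the paper's exact formulation.

import torch
import torch.nn.functional as F

def info_nce(a, b, temperature=0.07):
    # Symmetric InfoNCE between two batches of embeddings of shape (N, D).
    # Matching rows are positives; every other pair in the batch is a
    # negative, which is why large batches supply more negatives.
    a = F.normalize(a, dim=-1)
    b = F.normalize(b, dim=-1)
    logits = a @ b.t() / temperature                   # (N, N) similarities
    targets = torch.arange(a.size(0), device=a.device)
    # Cross-entropy in both retrieval directions (a -> b and b -> a).
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

def trimodal_loss(text_emb, image_emb, voxel_emb, temperature=0.07):
    # Sum the pairwise losses over all three modality pairs so that the
    # text, image, and voxel embeddings share one joint space
    # (assumed pairing scheme for illustration).
    return (info_nce(text_emb, image_emb, temperature)
            + info_nce(text_emb, voxel_emb, temperature)
            + info_nce(image_emb, voxel_emb, temperature))

At retrieval time, a text query embedding would then be ranked against shape embeddings by cosine similarity in the learned joint space.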

Related Material


[pdf] [supp]
[bibtex]
@InProceedings{Ruan_2024_WACV,
    author    = {Ruan, Yue and Lee, Han-Hung and Zhang, Yiming and Zhang, Ke and Chang, Angel X.},
    title     = {TriCoLo: Trimodal Contrastive Loss for Text To Shape Retrieval},
    booktitle = {Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV)},
    month     = {January},
    year      = {2024},
    pages     = {5815-5825}
}