OmniVec2 - A Novel Transformer based Network for Large Scale Multimodal and Multitask Learning

Siddharth Srivastava, Gaurav Sharma; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2024, pp. 27412-27424

Abstract


We present a novel multimodal multitask network and an associated training algorithm. The method is capable of ingesting data from approximately 12 different modalities, namely image, video, audio, text, depth, point cloud, time series, tabular, graph, X-ray, infrared, IMU and hyperspectral. The proposed approach uses modality-specialized tokenizers, a shared transformer architecture, and cross-attention mechanisms to project the data from different modalities into a unified embedding space. It addresses multimodal and multitask scenarios by incorporating modality-specific task heads for the different tasks in the respective modalities. We propose a novel pretraining strategy with iterative modality switching to initialize the network, and a training algorithm that trades off fully joint training over all modalities against training on pairs of modalities at a time. We provide a comprehensive evaluation across 25 datasets from 12 modalities and show state-of-the-art performance, demonstrating the effectiveness of the proposed architecture, pretraining strategy, and adapted multitask training.
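The data flow described above (modality-specific tokenizers feeding a shared encoder, followed by modality-specific task heads, with training steps drawn over pairs of modalities) can be sketched in plain Python. This is a minimal illustrative skeleton with stub components; all names, the hash-based "tokenizer", and the mean-pooling "encoder" are assumptions for illustration, not the authors' implementation.

```python
import random

EMBED_DIM = 4  # toy dimensionality of the unified embedding space


class Tokenizer:
    """Stub modality-specific tokenizer: raw input -> list of token embeddings."""

    def __init__(self, modality):
        self.modality = modality

    def __call__(self, raw):
        # Hash each raw element into a fixed-size pseudo-embedding.
        return [[(hash((self.modality, item, d)) % 100) / 100.0
                 for d in range(EMBED_DIM)] for item in raw]


def shared_encoder(tokens):
    """Stand-in for the shared transformer: mean-pool tokens into one vector."""
    n = len(tokens)
    return [sum(t[d] for t in tokens) / n for d in range(EMBED_DIM)]


class TaskHead:
    """Stub modality-specific task head: shared embedding -> task output."""

    def __init__(self, task):
        self.task = task

    def __call__(self, embedding):
        return (self.task, sum(embedding))


def forward(modality, raw, tokenizers, heads):
    """One pass: modality tokenizer -> shared encoder -> that modality's head."""
    tokens = tokenizers[modality](raw)
    embedding = shared_encoder(tokens)
    return heads[modality](embedding)


def pairwise_schedule(modalities, steps, seed=0):
    """Yield one random pair of distinct modalities per training step,
    approximating training on pairs of modalities at a time."""
    rng = random.Random(seed)
    for _ in range(steps):
        yield tuple(rng.sample(modalities, 2))
```

For example, `forward("image", pixels, tokenizers, heads)` and `forward("audio", samples, tokenizers, heads)` route different modalities through different tokenizers and heads while sharing the same encoder, and `pairwise_schedule` drives which two modalities contribute to each step.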

Related Material


@InProceedings{Srivastava_2024_CVPR,
  author    = {Srivastava, Siddharth and Sharma, Gaurav},
  title     = {OmniVec2 - A Novel Transformer based Network for Large Scale Multimodal and Multitask Learning},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  month     = {June},
  year      = {2024},
  pages     = {27412-27424}
}