UniT3D: A Unified Transformer for 3D Dense Captioning and Visual Grounding

Zhenyu Chen, Ronghang Hu, Xinlei Chen, Matthias Nießner, Angel X. Chang; Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 2023, pp. 18109-18119

Abstract


Performing 3D dense captioning and visual grounding requires a common, shared understanding of the underlying multimodal relationships. However, despite previous attempts to connect these two related tasks with highly task-specific neural modules, how to explicitly capture their shared nature and learn them simultaneously remains understudied. In this work, we propose UniT3D, a simple yet effective, fully unified transformer-based architecture for jointly solving 3D visual grounding and dense captioning. UniT3D learns a strong multimodal representation across the two tasks through a supervised joint pre-training scheme with bidirectional and sequence-to-sequence objectives. Thanks to its generic architecture design, UniT3D allows the pre-training scope to be expanded to a wider variety of training sources, such as data synthesized from 2D prior knowledge, to benefit 3D vision-language tasks. Extensive experiments and analysis demonstrate that UniT3D obtains significant gains on both 3D dense captioning and visual grounding.
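The abstract does not spell out implementation details, but the joint-training idea can be illustrated with a minimal sketch: a single transformer trunk fuses 3D proposal features with text tokens, and two lightweight heads supply the grounding and captioning objectives, trained with a summed multi-task loss. Everything below (the class name UniT3DSketch, all dimensions, the simple loss weighting, and the omission of positional encodings and causal masking) is an illustrative assumption, not the paper's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class UniT3DSketch(nn.Module):
    """Hypothetical sketch of a unified grounding + captioning transformer."""

    def __init__(self, d_model=256, vocab_size=4000, num_layers=4, nhead=8):
        super().__init__()
        # Shared multimodal trunk: one transformer encoder consumes
        # 3D object-proposal features and word embeddings together.
        self.word_embed = nn.Embedding(vocab_size, d_model)
        layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.trunk = nn.TransformerEncoder(layer, num_layers)
        # Task-specific heads on top of the shared representation.
        self.grounding_head = nn.Linear(d_model, 1)         # score per proposal
        self.caption_head = nn.Linear(d_model, vocab_size)  # next-token logits

    def forward(self, proposal_feats, token_ids):
        # proposal_feats: (B, P, d_model) pooled 3D proposal features
        # token_ids:      (B, T) caption / query token ids
        tokens = self.word_embed(token_ids)
        fused = self.trunk(torch.cat([proposal_feats, tokens], dim=1))
        p = proposal_feats.size(1)
        grounding_logits = self.grounding_head(fused[:, :p]).squeeze(-1)  # (B, P)
        caption_logits = self.caption_head(fused[:, p:])                  # (B, T, V)
        return grounding_logits, caption_logits

# Joint objective: a grounding loss over proposals plus a shifted
# captioning loss, summed as in a typical multi-task setup. (A real
# seq-to-seq objective would also apply a causal attention mask.)
model = UniT3DSketch()
feats = torch.randn(2, 16, 256)
tokens = torch.randint(0, 4000, (2, 12))
g_logits, c_logits = model(feats, tokens)
g_loss = F.cross_entropy(g_logits, torch.tensor([3, 7]))  # target proposal ids
c_loss = F.cross_entropy(c_logits[:, :-1].reshape(-1, 4000),
                         tokens[:, 1:].reshape(-1))
loss = g_loss + c_loss
loss.backward()
```

In this setup the two tasks regularize each other through the shared trunk: grounding supervision shapes the same representation that the captioning head decodes from, which is the intuition behind the joint pre-training described in the abstract.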

Related Material


@InProceedings{Chen_2023_ICCV,
    author    = {Chen, Zhenyu and Hu, Ronghang and Chen, Xinlei and Nie{\ss}ner, Matthias and Chang, Angel X.},
    title     = {UniT3D: A Unified Transformer for 3D Dense Captioning and Visual Grounding},
    booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
    month     = {October},
    year      = {2023},
    pages     = {18109-18119}
}