Globetrotter: Connecting Languages by Connecting Images

Dídac Surís, Dave Epstein, Carl Vondrick; Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 16474-16484

Abstract

Machine translation between many languages at once is highly challenging, since training with ground truth requires supervision between all language pairs, which is difficult to obtain. Our key insight is that, while languages may vary drastically, the underlying visual appearance of the world remains consistent. We introduce a method that uses visual observations to bridge the gap between languages, rather than relying on parallel corpora or topological properties of the representations. We train a model that aligns segments of text from different languages if and only if the images associated with them are similar and each image in turn is well-aligned with its textual description. We train our model from scratch on a new dataset of text in over fifty languages with accompanying images. Experiments show that our method outperforms previous work on unsupervised word and sentence translation using retrieval.
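The alignment objective described above — text segments from different languages are pulled together exactly when their associated images are similar, while each image stays aligned with its own caption — can be sketched as a simple two-term loss. This is a minimal illustration, not the authors' implementation: the loss names, the InfoNCE-style cross-modal term, the mean-squared-error transitive term, and the equal weighting of the two are all assumptions for exposition.

```python
import numpy as np

def normalize(x):
    # Project embeddings onto the unit sphere so dot products are cosine similarities
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def alignment_loss(text_emb, image_emb, temperature=0.07):
    """Hypothetical sketch of an image-bridged text alignment objective.

    text_emb, image_emb: (N, D) arrays where row i is a text segment and
    its associated image. Both terms and their weighting are illustrative.
    """
    t, v = normalize(text_emb), normalize(image_emb)

    # Term 1 (cross-modal): each text should retrieve its own image
    # (InfoNCE-style contrastive loss over the batch).
    logits = t @ v.T / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    cross_modal = -np.mean(np.diag(log_probs))

    # Term 2 (transitive): text-text similarity is supervised by
    # image-image similarity, so two captions in different languages
    # are aligned iff their images look alike.
    transitive = np.mean((t @ t.T - v @ v.T) ** 2)

    return cross_modal + transitive
```

With random embeddings the loss is a finite positive scalar; training would lower it by co-adapting the text and image encoders so that visual similarity becomes the bridge between languages.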

Related Material

[bibtex]
@InProceedings{Suris_2022_CVPR,
    author    = {Sur{\'\i}s, D{\'\i}dac and Epstein, Dave and Vondrick, Carl},
    title     = {Globetrotter: Connecting Languages by Connecting Images},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2022},
    pages     = {16474-16484}
}