Learning Multilingual Word Embeddings Using Image-Text Data

Karan Singhal
Balder ten Cate
Proceedings of 2019 NAACL HLT Workshop on Shortcomings in Vision and Language (SiVL)

Abstract

There has been significant interest recently in learning multilingual word embeddings -- embeddings in which semantically similar words across languages are close to one another. State-of-the-art approaches have relied on expensive labeled data, which is unavailable for low-resource languages, or have involved post-hoc unification of monolingual embeddings. In the present paper, we investigate the efficacy of multilingual embeddings learned from weakly-supervised image-text data. In particular, we propose methods for learning multilingual embeddings using image-text data, by enforcing similarity between the representation of an image and that of its text. Our experiments reveal that even without using any expensive labeled data, a bag-of-words-based embedding model trained on image-text data achieves performance comparable to the state-of-the-art on crosslingual semantic similarity tasks.
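To make the core idea concrete, here is a minimal toy sketch (not the paper's actual model) of aligning a bag-of-words caption embedding with an image representation in a shared space. All names, dimensions, and the vocabulary are hypothetical; a squared-distance loss stands in for the similarity objective, the image projection is frozen, and random vectors stand in for real image features. Because an English and a French caption of the same image are both pulled toward that image's embedding, the two words end up near each other, which is the mechanism the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy multilingual vocabulary: "cat" (English), "chat" (French).
VOCAB = {"cat": 0, "chat": 1, "dog": 2}
EMBED_DIM, IMAGE_DIM = 4, 6

word_emb = rng.normal(scale=0.1, size=(len(VOCAB), EMBED_DIM))
img_proj = rng.normal(scale=0.1, size=(IMAGE_DIM, EMBED_DIM))  # frozen for simplicity

def embed_text(words):
    """Bag-of-words text embedding: the average of the word vectors."""
    return word_emb[[VOCAB[w] for w in words]].mean(axis=0)

def embed_image(feat):
    """Project a precomputed image feature vector into the shared space."""
    return feat @ img_proj

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def train_step(words, feat, lr=0.1):
    """Pull a caption's embedding toward its image's embedding.

    Uses a squared-distance loss ||t - v||^2; its gradient w.r.t.
    each word vector in the bag is 2 * (t - v) / n.
    """
    diff = embed_text(words) - embed_image(feat)
    for w in words:
        word_emb[VOCAB[w]] -= lr * 2.0 * diff / len(words)

# A random vector stands in for an image feature (e.g. from a CNN).
img = rng.normal(size=IMAGE_DIM)
for _ in range(50):
    train_step(["cat"], img)   # English caption of the image
    train_step(["chat"], img)  # French caption of the same image

# Captions of the same image converge, so translations end up close.
similarity = cosine(embed_text(["cat"]), embed_text(["chat"]))
```

The key point of the sketch is that no bilingual dictionary or parallel text is needed: images act as a pivot, so weakly-supervised image-text pairs alone can induce a crosslingual structure in the word embedding space.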