BiLex Rx: Lexical Data Augmentation for Massively Multilingual Machine Translation

Alex Jones
ArXiv (2023)

Abstract

Neural machine translation (NMT) has progressed rapidly over the past several years, and modern models can achieve relatively high quality using only monolingual text data, an approach dubbed Unsupervised Machine Translation, or UNMT. However, these models still struggle in a variety of ways, including on aspects of translation that are easiest for humans, such as correctly translating common nouns. This work explores a cheap and abundant resource to combat this problem: bilingual lexicons (BiLexes). We test the efficacy of bilingual lexicons in a real-world setup, on 200-language translation models trained on web-mined text. We present several findings: (1) we demonstrate the most effective ways to use this resource for MT by extensively experimenting with lexical data augmentation techniques, such as codeswitching and lexical prompting; (2) we pinpoint which settings and languages benefit most from lexical data augmentation; and (3) we provide an empirical, per-language analysis of the quality of the public resource PanLex, a multilingual lexicon covering thousands of languages.
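
For readers unfamiliar with these augmentation techniques, the short Python sketch below illustrates one plausible form of lexicon-based codeswitching: source-side words found in a bilingual lexicon are randomly swapped for target-language translations before the sentence is used as training data. The function name, lexicon format, and replacement probability are illustrative assumptions for this sketch, not the paper's exact implementation.

import random

def codeswitch(sentence, bilex, p=0.3, seed=0):
    """Randomly replace lexicon-covered tokens with a target-language translation.

    bilex maps lowercased source words to lists of candidate translations
    (an illustrative format; real PanLex entries would need preprocessing).
    """
    rng = random.Random(seed)
    out = []
    for token in sentence.split():
        translations = bilex.get(token.lower())
        if translations and rng.random() < p:
            out.append(rng.choice(translations))  # swap in a target-language word
        else:
            out.append(token)                     # keep the original source word
    return " ".join(out)

# Toy English->Spanish lexicon with hypothetical entries.
bilex = {"dog": ["perro"], "house": ["casa"], "red": ["rojo", "roja"]}
print(codeswitch("the red dog ran to the house", bilex, p=0.5))

In this kind of setup, the codeswitched source sentence is paired with the unchanged target sentence, which exposes the model to lexicon entries in context rather than in isolation.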

Research Areas