(Almost) Zero-Shot Cross-Lingual Spoken Language Understanding

Manaal Faruqui
Gokhan Tur
Dilek Hakkani-Tur
Larry Heck
Proceedings of the IEEE ICASSP (2018)

Abstract

Spoken language understanding (SLU) is a component of
goal-oriented dialogue systems that aims to interpret a user's natural language queries in the system's semantic representation format. While current state-of-the-art SLU
approaches achieve high performance for English domains, the same is
not true for other languages. Approaches in the literature for
extending SLU models and grammars to new languages rely primarily on machine
translation. This poses a challenge in scaling to new languages, as
machine translation systems may not be reliable for several
(especially low-resource) languages. In this work, we examine different
approaches to train an SLU component with little
supervision for two new languages, Hindi and Turkish, and show that with
only a few hundred labeled examples we can surpass the approaches
proposed in the literature. Our experiments show that training a
model bilingually (i.e., jointly with English) enables faster
learning, in that the model requires fewer labeled instances in the
target language to generalize. Qualitative analysis shows that rare
slot types benefit the most from the bilingual training.
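The abstract does not spell out the bilingual training recipe, but one common way to realize joint training with a small target-language set is to build mixed-language batches, oversampling the scarce target-language examples with replacement. The sketch below illustrates that batching idea only; the function name, parameters, and mixing ratio are assumptions for illustration, not the authors' exact method.

```python
import random

def mixed_batches(en_examples, tgt_examples, batch_size, tgt_fraction, seed=0):
    """Yield joint-training batches mixing English and target-language
    labeled examples (illustrative sketch, not the paper's exact recipe).

    tgt_fraction controls how much of each batch comes from the small
    target-language set; those examples are resampled with replacement
    because they are scarce relative to the English data.
    """
    rng = random.Random(seed)
    n_tgt = max(1, int(batch_size * tgt_fraction))
    n_en = batch_size - n_tgt
    pool = list(en_examples)
    rng.shuffle(pool)
    # Walk through the English data once; pair each English chunk with a
    # fresh sample of target-language examples.
    for start in range(0, len(pool) - n_en + 1, n_en):
        en_part = pool[start:start + n_en]
        tgt_part = [rng.choice(tgt_examples) for _ in range(n_tgt)]
        yield en_part + tgt_part

# Toy usage: 10 English examples, only 2 target-language (e.g., Hindi) ones.
en = [("en", i) for i in range(10)]
hi = [("hi", i) for i in range(2)]
batches = list(mixed_batches(en, hi, batch_size=4, tgt_fraction=0.25))
```

Each batch then carries a fixed share of target-language supervision, which is one plausible mechanism behind the abstract's observation that rare slot types benefit most: they are seen in every batch rather than only in a small monolingual epoch.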