Google Research

Supervised Seeded Iterated Learning for Interactive Language Learning

Proc. of EMNLP (2020) (to appear)

Abstract

Language drift has been one of the major obstacles to training language models through interaction. When word-based conversational agents are trained toward completing a task, they tend to invent their own language rather than leveraging natural language. In recent literature, two general methods partially counter this phenomenon: Supervised Selfplay (S2P) and Seeded Iterated Learning (SIL). While S2P jointly trains interactive and supervised losses to counter the drift, SIL changes the training dynamics to prevent language drift from occurring. In this paper, we first highlight their respective weaknesses, i.e., late-stage training collapse for S2P and higher negative log-likelihood on a human corpus for SIL. Given these observations, we introduce SILS2P, which combines both methods to minimize their respective weaknesses. We then show the effectiveness of SILS2P in the translation game, a standard setting for studying language drift.
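The two mechanisms contrasted in the abstract can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names, the scalar "losses", and the weighting parameter alpha are all assumptions made for exposition.

```python
# Hedged sketch of the two anti-drift mechanisms described above.
# All names and toy values here are illustrative, not the paper's code.

def s2p_loss(interactive_loss, supervised_loss, alpha=1.0):
    """S2P: jointly optimize the interactive (task) loss and a
    supervised loss on human data; alpha (assumed) weights the
    supervised anchor that keeps the agent close to natural language."""
    return interactive_loss + alpha * supervised_loss


def sil_generation(teacher, imitate, interact):
    """SIL: instead of mixing losses, change the training dynamics.
    Each generation, a student first imitates the current teacher
    (the seeding/imitation phase), then fine-tunes interactively,
    and finally becomes the teacher for the next generation."""
    student = imitate(teacher)    # imitation phase: distill the teacher
    student = interact(student)   # interaction phase: fine-tune on the task
    return student                # next generation's teacher
```

The sketch highlights the structural difference the abstract draws on: S2P couples the two objectives inside a single loss, whereas SIL alternates imitation and interaction across generations.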
