Google Research

MaskGAN: Better Text Generation via Filling in the ____

ICLR (2018)


Recurrent neural networks (RNNs) are a common method of generating text token by token. These models are typically trained via maximum likelihood (known in this context as teacher forcing). However, this approach often breaks down when a trained model is used to generate new text: when producing words later in the sequence, the model conditions on prefixes of words it never observed at training time. We explore methods for using Generative Adversarial Networks (GANs) as an alternative to teacher forcing for generating discrete sequences. In particular, we consider a conditional GAN that fills in missing text conditioned on the surrounding context. We show qualitative and quantitative evidence that this produces more realistic text samples compared to a maximum-likelihood-trained model. We also propose a new task that quantitatively measures the quality of RNN-produced samples.
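The in-filling task described above can be illustrated with a minimal sketch: random tokens are replaced by a mask symbol, and the generator must fill in the blanks conditioned on the surrounding context. The names here (`mask_tokens`, `MASK`) are illustrative, not taken from the paper, and the masking policy is an assumption for demonstration purposes.

```python
import random

MASK = "<m>"  # placeholder symbol for a blanked-out token (illustrative)

def mask_tokens(tokens, mask_rate=0.5, seed=0):
    """Blank out a random subset of tokens.

    Returns the masked context the generator would condition on, and
    the list of (position, token) targets it must fill in.
    """
    rng = random.Random(seed)
    masked, targets = [], []
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            masked.append(MASK)
            targets.append((i, tok))
        else:
            masked.append(tok)
    return masked, targets

sentence = "the quick brown fox jumps over the lazy dog".split()
context, targets = mask_tokens(sentence, mask_rate=0.4, seed=1)
```

In the conditional-GAN setup, the generator proposes tokens for the masked positions given `context`, and the discriminator judges whether each filled-in token is real or generated.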
