Consistency Regularization for Generative Adversarial Networks

Augustus Odena
Han Zhang
Honglak Lee
International Conference on Learning Representations (2020)
Abstract

Generative Adversarial Networks (GANs) are plagued by training instability, despite considerable research effort. Progress has been made on this topic, but many of the proposed interventions are complicated, computationally expensive, or both. In this work, we propose a simple and effective training stabilizer based on the notion of consistency regularization, a popular technique in the semi-supervised learning literature. In particular, we augment the data passed into the GAN discriminator and penalize the sensitivity of the discriminator's penultimate layer to these augmentations. This regularization increases the robustness of the discriminator to input perturbations and demonstrably reduces memorization of the training data. We conduct a series of ablation studies demonstrating that consistency regularization is compatible with various GAN architectures and loss functions. Finally, we show that applying consistency regularization to GANs improves state-of-the-art FID scores on the ImageNet-2012 data set. Our code is open-sourced at [URL blinded for peer review].
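
As a rough illustration of the regularizer described above, the sketch below shows how the extra discriminator loss term could be computed. This is a minimal PyTorch sketch, not the paper's released implementation: the `features` method (returning penultimate-layer activations), the `augment` callable, and the `lam` weight are assumptions introduced here for illustration.

```python
import torch
import torch.nn.functional as F

def consistency_loss(discriminator, real_images, augment, lam=10.0):
    """Consistency regularization term for the GAN discriminator.

    Assumptions (not from the paper's code): `discriminator.features(x)`
    returns the penultimate-layer activations, and `augment` is any
    semantics-preserving image augmentation (e.g. random flip + shift).
    """
    feats_real = discriminator.features(real_images)
    feats_aug = discriminator.features(augment(real_images))
    # Penalize the sensitivity of the penultimate layer to the augmentation.
    return lam * F.mse_loss(feats_aug, feats_real)
```

In use, this term would simply be added to the usual discriminator loss computed on real samples; the generator objective is left unchanged.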