On Self-Modulation for Generative Adversarial Networks

Ting Chen
Sylvain Gelly
International Conference on Learning Representations (2019)

Abstract

Training Generative Adversarial Networks (GANs) is a notoriously challenging task. In this work we propose and study an architectural modification, termed self-modulation, which improves GAN performance across different datasets, architectures, losses, regularizers, and hyperparameter settings. Intuitively, self-modulation allows the generator's intermediate feature maps to change as a function of the input z. While reminiscent of other conditioning techniques, it requires no labeled data; nevertheless, this simple yet effective approach can be readily applied in the conditional setting if side information is available. In a large-scale empirical study we observe a relative decrease of 5%-35% in FID. Furthermore, everything else being equal, adding this modification to the generator leads to improved performance in ~86% of the studied settings, which suggests that it can be applied without extensive hyperparameter optimization.
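The abstract's description of self-modulation, letting intermediate feature maps depend on the input z, maps naturally onto a normalization layer whose per-channel scale and shift are predicted from z by small MLPs. Below is a minimal PyTorch sketch of that idea; the class name `SelfModulatedBatchNorm2d`, the hidden layer size, and the `(1 + gamma)` residual parameterization are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn


class SelfModulatedBatchNorm2d(nn.Module):
    """Batch norm whose scale and shift are functions of the latent z.

    Sketch of the idea in the abstract: the per-channel gamma and beta of
    batch normalization are produced by small MLPs of the generator input z,
    so intermediate feature maps change as a function of z. No labels needed.
    """

    def __init__(self, num_features: int, z_dim: int, hidden_dim: int = 128):
        super().__init__()
        # Normalize without learnable affine parameters; those come from z.
        self.bn = nn.BatchNorm2d(num_features, affine=False)
        self.gamma_mlp = nn.Sequential(
            nn.Linear(z_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, num_features),
        )
        self.beta_mlp = nn.Sequential(
            nn.Linear(z_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, num_features),
        )

    def forward(self, h: torch.Tensor, z: torch.Tensor) -> torch.Tensor:
        h = self.bn(h)
        # Per-sample, per-channel scale and shift predicted from z,
        # broadcast over the spatial dimensions: (B, C) -> (B, C, 1, 1).
        gamma = self.gamma_mlp(z).unsqueeze(-1).unsqueeze(-1)
        beta = self.beta_mlp(z).unsqueeze(-1).unsqueeze(-1)
        # (1 + gamma) keeps the initial scale near identity; an assumed
        # stabilization choice, not necessarily the paper's parameterization.
        return (1.0 + gamma) * h + beta
```

In a generator built from such blocks, each layer would receive both the feature maps from the previous layer and the original z, replacing the usual unconditional batch norm.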