Clustering Meets Implicit Generative Models
Abstract
Clustering is a cornerstone of unsupervised learning that can be thought of as disentangling the multiple generative mechanisms underlying the data. In this paper we introduce an algorithmic framework to train mixtures of implicit generative models, which we particularize for variational autoencoders. Relying on an additional set of
discriminators, we propose a competitive procedure in which the models only need
to approximate the portion of the data distribution from which they can produce
realistic samples. As a byproduct, each model is simpler to train, and a clustering
interpretation arises naturally from the partitioning of the training points among
the models. We empirically show that our approach splits the training distribution
in a reasonable way and increases the quality of the generated samples.
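For intuition, the following is a minimal sketch (our own illustration, not code from the paper) of the competitive partitioning idea described above: given per-model discriminator realism scores for a batch of points, each point is assigned to the model that scores it highest, and only that model would then take a training step (e.g. a VAE update) on the point. The names `disc_scores` and `competitive_assignment` are hypothetical.

```python
import numpy as np

def competitive_assignment(disc_scores):
    """Assign each training point to the model whose discriminator
    rates it as most realistic (illustrative scoring, not the paper's exact rule).

    disc_scores: array of shape (n_points, n_models), higher = more realistic.
    Returns: one index array per model, partitioning the points."""
    winners = disc_scores.argmax(axis=1)           # winning model per point
    n_models = disc_scores.shape[1]
    return [np.where(winners == k)[0] for k in range(n_models)]

# Toy usage: 8 points, 3 models. Each model would then be updated only on
# its own partition, which is what yields the clustering interpretation.
rng = np.random.default_rng(0)
scores = rng.random((8, 3))
partition = competitive_assignment(scores)
for k, idx in enumerate(partition):
    print(f"model {k}: points {idx.tolist()}")
```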