Ben Poole

I'm a research scientist at Google Brain, where I work on deep generative models and understanding neural networks.

I did my PhD at Stanford University advised by Surya Ganguli in the Neural Dynamics and Computation lab. My thesis was on computational tools to develop a better understanding of both biological and artificial neural networks. I did my undergrad at Carnegie Mellon University, where I was advised by Tai Sing Lee. I've worked at DeepMind, Google Research, Intel Research Pittsburgh, and the NYU Center for Neural Science.

Check out my website.
Authored Publications
    Abstract: This paper introduces a new method for learning variational approximations to Bayesian posterior predictive distributions that does not require (1) the posterior predictive distribution itself, (2) the posterior distribution, (3) exact samples from the posterior, or (4) any test-time marginalization.
    Learning Energy-Based Models by Diffusion Recovery Likelihood
    Ruiqi Gao
    Yang Song
    Ying Nian Wu
    Diederik P. Kingma
    Proceedings of ICLR'21 (2021)
    Abstract: While energy-based models (EBMs) exhibit a number of desirable properties, training and sampling on high-dimensional datasets remains challenging. Inspired by recent progress on diffusion probabilistic models, we present a diffusion recovery likelihood method to tractably learn and sample from a sequence of EBMs trained on increasingly noisy versions of a dataset. Each EBM is trained with recovery likelihood, which maximizes the conditional probability of the data at a certain noise level given their noisy versions at a higher noise level. Optimizing recovery likelihood is more tractable than marginal likelihood, as sampling from the conditional distributions is much easier than sampling from the marginal distributions. After training, synthesized images can be generated by the sampling process that initializes from Gaussian white noise and progressively samples the conditional distributions at decreasing noise levels. Our method generates high-fidelity samples on various image datasets. On unconditional CIFAR-10 our method achieves FID 9.58 and inception score 8.30, superior to the majority of GANs. Moreover, we demonstrate that unlike previous work on EBMs, our long-run MCMC samples from the conditional distributions do not diverge and still represent realistic images, allowing us to accurately estimate the normalized density of data even for high-dimensional datasets.
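As a rough sketch of why the conditionals are easier to work with, here is the recovery-likelihood conditional under the usual Gaussian-noising assumption; the notation (f_\theta, \sigma, \tilde{x}) is illustrative rather than quoted from the paper:

```latex
% Sketch of the recovery-likelihood conditional for energy f_theta and a noisy
% observation \tilde{x} = x + \epsilon with \epsilon ~ N(0, sigma^2 I).
\[
  p_\theta(x \mid \tilde{x}) \;\propto\; \exp\!\Big( f_\theta(x) \;-\; \tfrac{1}{2\sigma^2}\,\lVert \tilde{x} - x \rVert^2 \Big).
\]
```

The quadratic term concentrates the conditional around \tilde{x}, which is why sampling from it is easier than sampling from the marginal proportional to \exp(f_\theta(x)).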
    Variational Diffusion Models
    Diederik P. Kingma
    Jonathan Ho
    Advances in Neural Information Processing Systems 34 (NeurIPS 2021) (2021)
    Abstract: Diffusion-based generative models have demonstrated a capacity for perceptually impressive synthesis, but can they also be great likelihood-based models? We answer this in the affirmative, and introduce a family of diffusion-based generative models that obtain state-of-the-art likelihoods on standard image density estimation benchmarks. Unlike other diffusion-based models, our method allows for efficient optimization of the noise schedule jointly with the rest of the model. We show that the variational lower bound (VLB) simplifies to a remarkably short expression in terms of the signal-to-noise ratio of the diffused data, thereby improving our theoretical understanding of this model class. Using this insight, we prove an equivalence between several models proposed in the literature. In addition, we show that the continuous-time VLB is invariant to the noise schedule, except for the signal-to-noise ratio at its endpoints. This enables us to learn a noise schedule that minimizes the variance of the resulting VLB estimator, leading to faster optimization. Combining these advances with architectural improvements, we obtain state-of-the-art likelihoods on image density estimation benchmarks, outperforming autoregressive models that have dominated these benchmarks for many years, with often significantly faster optimization. In addition, we show how to use the model as part of a bits-back compression scheme, and demonstrate lossless compression rates close to the theoretical optimum.
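For readers less familiar with the signal-to-noise-ratio view, a minimal sketch in common diffusion notation (z_t, \alpha_t, \sigma_t are assumed notation here, not quoted from the paper):

```latex
% Forward process and signal-to-noise ratio in standard diffusion notation.
\[
  z_t = \alpha_t\, x + \sigma_t\, \epsilon, \quad \epsilon \sim \mathcal{N}(0, I),
  \qquad \mathrm{SNR}(t) = \frac{\alpha_t^2}{\sigma_t^2}.
\]
```

The invariance claim in the abstract can then be read as: the continuous-time VLB depends on the noise schedule only through the SNR at the endpoints t = 0 and t = 1, so the schedule in between is free to be learned to reduce the variance of the VLB estimator.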
    Score-based generative modeling through stochastic differential equations
    Yang Song
    Jascha Sohl-Dickstein
    Diederik P. Kingma
    Abhishek Kumar
    Stefano Ermon
    ICLR 2021 (2021)
    Abstract: Creating noise from data is easy; creating data from noise is generative modeling. We present a stochastic differential equation (SDE) that smoothly transforms a complex data distribution to a known prior distribution by slowly injecting noise, and a corresponding reverse-time SDE that transforms the prior distribution back into the data distribution by slowly removing the noise. Crucially, the reverse-time SDE depends only on the time-dependent gradient field (a.k.a. score) of the perturbed data distribution. By leveraging advances in score-based generative modeling, we can accurately estimate these scores with neural networks, and use numerical SDE solvers to generate samples. We show that this framework encapsulates previous approaches in score-based generative modeling and diffusion probabilistic modeling, allowing for new sampling procedures and new modeling capabilities. In particular, we introduce a predictor-corrector framework to correct errors in the evolution of the discretized reverse-time SDE. We also derive an equivalent neural ODE that samples from the same distribution as the SDE, but additionally enables exact likelihood computation and improved sampling efficiency. In addition, we provide a new way to solve inverse problems with score-based models, as demonstrated with experiments on class-conditional generation, image inpainting, and colorization. Combined with multiple architectural improvements, we achieve record-breaking performance for unconditional image generation on CIFAR-10 with an Inception score of 9.89 and FID of 2.20, a competitive likelihood of 2.99 bits/dim, and demonstrate high fidelity generation of 1024x1024 images for the first time from a score-based generative model.
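As an illustration of the reverse-time sampling idea, here is a minimal, self-contained Euler-Maruyama sketch for a 1-D toy problem where the score of the perturbed marginals is known in closed form. Everything here (the constant-g forward SDE, the Gaussian toy data, and the names mu, s0, score_fn) is an assumption for illustration, not the paper's setup.

```python
import numpy as np

# Toy reverse-time SDE sampling via Euler-Maruyama.
# Assumed forward SDE: dx = g dw, so p_t = N(mu, s0^2 + g^2 t) for Gaussian toy data,
# and the score grad_x log p_t(x) is analytic.
rng = np.random.default_rng(0)
mu, s0, g = 2.0, 0.1, 1.0

def score_fn(x, t):
    # Exact score of p_t = N(mu, s0^2 + g^2 * t) for this toy problem.
    return -(x - mu) / (s0**2 + g**2 * t)

n_steps = 1000
dt = 1.0 / n_steps
x = rng.normal(mu, np.sqrt(s0**2 + g**2), size=5000)  # start from samples of p_1
for i in range(n_steps):
    t = 1.0 - i * dt
    # Reverse-time drift is f - g^2 * score with f = 0; integrate backward in time.
    x = x + g**2 * score_fn(x, t) * dt + g * np.sqrt(dt) * rng.normal(size=x.shape)

print(x.mean(), x.std())  # should be close to the data distribution N(2, 0.1^2)
```

Swapping the analytic score for a learned score network, and adding corrector steps, gives the predictor-corrector sampler described in the abstract.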
    On Implicit Regularization in β-VAE
    Abhishek Kumar
    Proceedings of the 37th International Conference on Machine Learning (ICML), 2020
    Abstract: While the impact of variational inference (VI) on posterior inference in a fixed generative model is well-characterized, its role in regularizing a learned generative model when used in variational autoencoders (VAEs) is poorly understood. We study the regularizing effects of variational distributions on learning in generative models from two perspectives. First, we analyze the role that the choice of variational family plays in imparting uniqueness to the learned model by restricting the set of optimal generative models. Second, we study the regularization effect of the variational family on the local geometry of the decoding model. This analysis uncovers the regularizer implicit in the β-VAE objective, and leads to an approximation consisting of a deterministic autoencoding objective plus analytic regularizers that depend on the Hessian or Jacobian of the decoding model, unifying VAEs with recent heuristics proposed for training regularized autoencoders. We empirically verify these findings, observing that the proposed deterministic objective exhibits similar behavior to the β-VAE in terms of objective value and sample quality.
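Schematically, the decomposition described above has the following shape; this is a paraphrase of the abstract with R a placeholder for the analytic regularizers derived in the paper, whose exact expressions are not reproduced here:

```latex
% Schematic only: deterministic autoencoding term plus an analytic regularizer
% depending on derivatives of the decoder g_theta at the encoder mean mu_phi(x).
\[
  \mathcal{L}_{\beta\text{-VAE}}(x) \;\approx\;
  \big\lVert x - g_\theta(\mu_\phi(x)) \big\rVert^2
  \;+\; \beta\, R\!\big(\nabla_z g_\theta,\ \nabla_z^2 g_\theta\big).
\]
```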
    Abstract: Contrastive learning between multiple views of the data has recently dominated the field of self-supervised representation learning. Despite its success, the influence of different views is less studied. In this paper, we step towards understanding the importance of view selection with empirical analysis, and argue that we should reduce the mutual information (MI) between contrasted views while keeping the information that is relevant to the downstream task. To verify this, we devise an unsupervised and a semi-supervised framework to learn good views from the perspective of color space. We also view data augmentation as a way to reduce MI, and show that increasing data augmentation leads to decreasing MI but improved downstream classification accuracy. As a by-product, a new state-of-the-art accuracy is achieved on the ImageNet linear readout benchmark with ResNet-50.
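For concreteness, here is a minimal numpy sketch of a two-view contrastive (InfoNCE-style) objective of the kind analyzed through the mutual-information lens above; the encoder outputs are random placeholders and the temperature value is arbitrary, not taken from the paper.

```python
import numpy as np

# Two-view contrastive (InfoNCE-style) loss sketch.
# z1, z2 stand in for encoder outputs on two augmented views of the same batch.
rng = np.random.default_rng(0)
B, D = 8, 16                                   # batch size, embedding dimension
z1 = rng.normal(size=(B, D))
z2 = z1 + 0.1 * rng.normal(size=(B, D))        # second "view": perturbed copy

def l2_normalize(z):
    return z / np.linalg.norm(z, axis=1, keepdims=True)

z1, z2 = l2_normalize(z1), l2_normalize(z2)
tau = 0.1                                      # temperature (arbitrary here)
logits = z1 @ z2.T / tau                       # [B, B]; diagonal entries are positives
log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
loss = -np.mean(np.diag(log_probs))            # cross-entropy with the matching index as label
print(loss)
```

Stronger augmentations would change z2 more, which, as the abstract argues, lowers the mutual information between the two views while ideally preserving the task-relevant bits.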
    Discrete Flows: Invertible Generative Models of Discrete Data
    Keyon Vafa
    Kumar Krishna Agrawal
    Laurent Dinh
    NeurIPS (2019)
    Abstract: While normalizing flows have led to significant advances in modeling high-dimensional continuous distributions, their applicability to discrete distributions remains unknown. In this paper, we show that flows can in fact be extended to discrete events---and under a simple change-of-variables formula not requiring log-determinant-Jacobian computations. Discrete flows have numerous applications. We consider two flow architectures: discrete autoregressive flows that enable bidirectionality, allowing, for example, tokens in text to depend on both left-to-right and right-to-left contexts in an exact language model; and discrete bipartite flows that enable efficient non-autoregressive generation as in RealNVP. Empirically, we find that discrete autoregressive flows outperform autoregressive baselines on synthetic discrete distributions, an addition task, and Potts models; and bipartite flows can obtain competitive performance with autoregressive baselines on character-level language modeling for Penn Tree Bank and text8.
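The key observation written out: an invertible map between discrete sets simply permutes probability mass, so the change of variables has no Jacobian term. The notation below is assumed for illustration, not quoted from the paper:

```latex
% Discrete change of variables: no log-det-Jacobian needed.
\[
  p_Y(y) \;=\; p_X\big(f^{-1}(y)\big).
\]
% For K-ary data, one invertible per-dimension transform of this flavor is
% modular-affine, y_d = (\mu_d + \sigma_d x_d) \bmod K, with \sigma_d chosen
% coprime to K so that the map is a bijection.
```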
    Abstract: Estimating and optimizing Mutual Information (MI) is core to many problems in machine learning; however, bounding MI in high dimensions is challenging. To establish tractable and scalable objectives, recent work has turned to variational bounds parameterized by neural networks, but the relationships and tradeoffs between these bounds remain unclear. In this work, we unify these recent developments in a single framework. We find that the existing variational lower bounds degrade when the MI is large, exhibiting either high bias or high variance. To address this problem, we introduce a continuum of lower bounds that encompasses previous bounds and flexibly trades off bias and variance. On high-dimensional, controlled problems, we empirically characterize the bias and variance of the bounds and their gradients and demonstrate the effectiveness of our new bounds for estimation and representation learning.
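One representative member of this family of bounds is the InfoNCE lower bound; writing it out makes the bias issue mentioned above concrete (K paired samples and a critic f are assumed notation):

```latex
% InfoNCE-style lower bound on mutual information with K paired samples and critic f.
\[
  I(X;Y) \;\ge\;
  \mathbb{E}\!\left[ \frac{1}{K} \sum_{i=1}^{K}
    \log \frac{e^{f(x_i, y_i)}}{\frac{1}{K}\sum_{j=1}^{K} e^{f(x_i, y_j)}} \right]
  \;\le\; \log K.
\]
```

Because the estimate saturates at log K, it is heavily biased whenever the true MI is large; low-bias alternatives pay for it with high variance, which is the tradeoff the continuum of bounds is designed to interpolate.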
    Abstract: Due to the phenomenon of "posterior collapse," current latent variable generative models pose a challenging design choice that either weakens the capacity of the decoder or requires augmenting the objective so it does not only maximize the likelihood of the data. In this paper, we propose an alternative that utilizes the most powerful generative models as decoders while optimizing the variational lower bound and ensuring that the latent variables preserve and encode useful information. Our proposed δ-VAEs achieve this by constraining the variational family for the posterior to have a minimum distance to the prior. For sequential latent variable models, our approach resembles the classic representation learning approach of slow feature analysis. We demonstrate the efficacy of our approach at modeling text on LM1B and modeling images: learning representations, improving sample quality, and achieving state of the art log-likelihood on CIFAR-10 and ImageNet 32×32.
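The constraint described above can be written as a floor on the rate term of the usual variational objective; the explicit constrained form below is for illustration, whereas the paper enforces it through the choice of variational family:

```latex
% ELBO with a minimum-rate constraint of delta nats (illustrative form).
\[
  \max_{\theta,\phi}\;
  \mathbb{E}_{q_\phi(z\mid x)}\big[\log p_\theta(x\mid z)\big]
  - D_{\mathrm{KL}}\big(q_\phi(z\mid x)\,\Vert\,p(z)\big)
  \quad \text{s.t.}\quad
  D_{\mathrm{KL}}\big(q_\phi(z\mid x)\,\Vert\,p(z)\big) \;\ge\; \delta.
\]
```

Because the KL (rate) can never drop below δ, the latent code always carries some information about x, which is what rules out posterior collapse even with a powerful decoder.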
    On the expressive power of deep neural networks
    Maithra Raghu
    Jon Kleinberg
    Surya Ganguli
    Jascha Sohl-Dickstein
    ICML (2017)
    Abstract: We propose a novel approach to the problem of neural network expressivity, which seeks to characterize how structural properties of a neural network family affect the functions it is able to compute. Understanding expressivity is a classical issue in the study of neural networks, but it has remained challenging at both a conceptual and a practical level. Our approach is based on an interrelated set of measures of expressivity, unified by the novel notion of trajectory length, which measures how the output of a network changes as the input sweeps along a one-dimensional path. We show how our framework provides insight both into randomly initialized networks (the starting point for most standard optimization methods) and for trained networks. Our findings can be summarized as follows: (1) The complexity of the computed function grows exponentially with depth. We design measures of expressivity that capture the non-linearity of the computed function. These measures grow exponentially with the depth of the network architecture, due to the way the network transforms its input. (2) All weights are not equal (initial layers matter more). We find that trained networks are far more sensitive to their lower (initial) layer weights: they are much less robust to noise in these layer weights, and also perform better when these weights are optimized well.
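The trajectory-length measure is simple enough to compute directly; below is a small numpy sketch that sweeps the input of a randomly initialized deep ReLU network along a circle and reports the arc length of the hidden representation at each depth. The widths and weight scales are illustrative choices, not those from the paper.

```python
import numpy as np

# Trajectory length of a random deep ReLU network along a 1-D input path.
rng = np.random.default_rng(0)
width, depth, n_points, sigma_w = 100, 10, 500, 2.0

theta = np.linspace(0.0, 2.0 * np.pi, n_points)
x = np.stack([np.cos(theta), np.sin(theta)], axis=1)        # circle in 2-D input space
h = x @ rng.normal(scale=sigma_w / np.sqrt(2), size=(2, width))

for layer in range(depth):
    h = np.maximum(h, 0.0)                                   # ReLU nonlinearity
    h = h @ rng.normal(scale=sigma_w / np.sqrt(width), size=(width, width))
    length = np.sum(np.linalg.norm(np.diff(h, axis=0), axis=1))
    print(f"layer {layer + 1}: trajectory length = {length:.1f}")
```

With a sufficiently large weight scale, the measured length grows rapidly with depth, which is the phenomenon the first finding above formalizes.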