- Daniel Levy
- Jascha Sohl-Dickstein
- Matt Hoffman
Abstract
We present a general-purpose method to train Markov Chain Monte Carlo kernels (parameterized by deep neural networks) that converge and mix quickly to their target distribution. Our method generalizes Hamiltonian Monte Carlo and is trained to maximize expected squared jump distance, a proxy for mixing speed. We demonstrate significant empirical gains (up to $124\times$ greater effective sample size) on a collection of simple but challenging distributions. Finally, we show quantitative and qualitative gains on a real-world task: latent-variable generative modeling. Python source code is included as supplemental material, and will be open-sourced with the camera-ready paper.
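To make the training objective concrete, below is a minimal sketch (not the paper's implementation) of an expected squared jump distance estimate for a Metropolis-Hastings-style kernel. The names `proposal_fn` and `accept_prob_fn` are hypothetical stand-ins for a parameterized proposal and its acceptance probability.

```python
import numpy as np

def expected_squared_jump_distance(x, proposal_fn, accept_prob_fn):
    """Estimate E[||x' - x||^2 * a(x' | x)] over a batch of chain states.

    x: array of shape (batch, dim), current states of parallel chains.
    proposal_fn: maps x -> proposed states x' of shape (batch, dim).
    accept_prob_fn: maps (x, x') -> acceptance probabilities of shape (batch,).
    """
    x_proposed = proposal_fn(x)                       # candidate next states
    accept_prob = accept_prob_fn(x, x_proposed)       # acceptance probability per chain
    sq_jump = np.sum((x_proposed - x) ** 2, axis=-1)  # squared jump distance per chain
    return np.mean(accept_prob * sq_jump)             # batch average of accepted movement
```

Maximizing a quantity of this form with respect to the proposal's parameters rewards proposals that move far while still being accepted, which is why it serves as a tractable proxy for mixing speed.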