Srinivas Vasudevan
Authored Publications
NeuTra-lizing Bad Geometry in Hamiltonian Monte Carlo Using Neural Transport
Josh Dillon
arXiv preprint (2019)
Hamiltonian Monte Carlo is a powerful algorithm for sampling from difficult-to-normalize posterior distributions. However, when the geometry of the posterior is unfavorable, it may take many expensive evaluations of the target distribution and its gradient to converge and mix. We propose neural transport (NeuTra) HMC, a technique for learning to correct this sort of unfavorable geometry using inverse autoregressive flows (IAF), a powerful neural variational inference technique. The IAF is trained to minimize the KL divergence from an isotropic Gaussian to the warped posterior, and then HMC sampling is performed in the warped space. We evaluate NeuTra HMC on a variety of synthetic and real problems, and find that it significantly outperforms vanilla HMC both in time to reach the stationary distribution and asymptotic effective-sample-size rates.
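To make the mechanics concrete, here is a minimal sketch of the NeuTra idea using TensorFlow Probability: fit an IAF so that an isotropic Gaussian pushed through it approximates the posterior, then run HMC through that same bijector. The target distribution, network sizes, step sizes, and training schedule below are illustrative assumptions, not the paper's experimental setup.

```python
# Minimal NeuTra-style sketch with TensorFlow Probability. The target (an
# ill-conditioned 2-D Gaussian), flow sizes, and hyperparameters are
# illustrative assumptions.
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions
tfb = tfp.bijectors

# Assumed target: a Gaussian with badly scaled coordinates.
target = tfd.MultivariateNormalDiag(loc=[0., 0.], scale_diag=[1., 0.01])

# Inverse autoregressive flow: invert a masked autoregressive flow so the
# sampling direction (the one HMC needs) is a single parallel pass.
made = tfb.AutoregressiveNetwork(params=2, hidden_units=[32, 32])
flow = tfb.Invert(tfb.MaskedAutoregressiveFlow(shift_and_log_scale_fn=made))

# Variational distribution: isotropic Gaussian pushed through the flow.
base = tfd.MultivariateNormalDiag(loc=tf.zeros(2), scale_diag=tf.ones(2))
q = tfd.TransformedDistribution(distribution=base, bijector=flow)

# Fit the flow by minimizing KL(q || target) with reparameterized samples.
optimizer = tf.keras.optimizers.Adam(1e-2)
for _ in range(500):
  with tf.GradientTape() as tape:
    z = q.sample(128)
    loss = tf.reduce_mean(q.log_prob(z) - target.log_prob(z))
  grads = tape.gradient(loss, made.trainable_variables)
  optimizer.apply_gradients(zip(grads, made.trainable_variables))

# Run HMC in the warped (roughly isotropic) space; states are mapped back
# through the flow by the transformed kernel.
kernel = tfp.mcmc.TransformedTransitionKernel(
    inner_kernel=tfp.mcmc.HamiltonianMonteCarlo(
        target_log_prob_fn=target.log_prob,
        step_size=0.1,
        num_leapfrog_steps=8),
    bijector=flow)
samples = tfp.mcmc.sample_chain(
    num_results=1000,
    current_state=tf.zeros(2),
    kernel=kernel,
    num_burnin_steps=500,
    trace_fn=None)
```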
Simple, Distributed, and Accelerated Probabilistic Programming
Dave Moore
Christopher Gordon Suter
Alexey Radul
Matthew Johnson
NeurIPS (2018)
We describe Edward2, a low-level probabilistic programming language. Edward2 distills the core of probabilistic programming down to a single abstraction—the random variable. By blurring the line between model and computation, Edward2 enables numerous applications not shown before: a model-parallel variational auto-encoder (VAE) with tensor processing units (TPUs); a data-parallel autoregressive model (Image Transformer) with TPUs; and multi-GPU No-U-Turn Sampler (NUTS). Edward2 achieves an optimal linear speedup from 4 to 256 TPUs. With VAEs, Edward2 sees up to a 20x speedup on TPUs over Pyro and Edward on GPUs; with Bayesian neural networks, Edward2 sees up to a 51x speedup. With NUTS, Edward2 sees a 20x speedup on GPUs over Stan and 7x over PyMC3.
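As a concrete illustration of the random-variable abstraction, here is a minimal sketch in the Edward2 style, assuming the tfp.edward2 module as it shipped around the time of the paper (it later moved to the standalone edward2 package); the model, its priors, and the data shapes are illustrative.

```python
# Minimal Edward2-style sketch: a model is an ordinary Python function built
# from random variables, which can be run forward to simulate data or turned
# into a joint log-density for inference. Shapes and priors are illustrative.
import tensorflow as tf
from tensorflow_probability import edward2 as ed

def linear_regression(features):
  """Generative model: priors on weights and bias, then a likelihood."""
  w = ed.Normal(loc=tf.zeros(features.shape[1]), scale=1., name="w")
  b = ed.Normal(loc=0., scale=1., name="b")
  y = ed.Normal(loc=tf.tensordot(features, w, axes=1) + b,
                scale=0.1, name="y")
  return y

# Random variables behave like tensors: calling the model simulates data.
features = tf.random.normal([50, 3])
y_sim = linear_regression(features)

# The same program yields a joint log-density over its named random
# variables, which is what MCMC or variational inference consumes.
log_joint = ed.make_log_joint_fn(linear_regression)
lp = log_joint(features, w=tf.zeros(3), b=0., y=tf.zeros(50))
```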
TensorFlow Distributions
Josh Dillon
Dustin Tran
Eugene Brevdo
Dave Moore
Workshop on Probabilistic Programming Languages, Semantics, and Systems (PPS 2018)
The TensorFlow Distributions library implements a vision of probability theory adapted to the modern deep-learning paradigm of end-to-end differentiable computation. Building on two basic abstractions, it offers flexible building blocks for probabilistic computation. Distributions provide fast, numerically stable methods for generating samples and computing statistics, e.g., log density. Bijectors provide composable volume-tracking transformations with automatic caching. Together these enable modular construction of high dimensional distributions and transformations not possible with previous libraries (e.g., pixelCNNs, autoregressive flows, and reversible residual networks). They are the workhorse behind deep probabilistic programming systems like Edward and empower fast black-box inference in probabilistic models built on deep-network components. TensorFlow Distributions has proven an important part of the TensorFlow toolkit within Google and in the broader deep learning community.
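A minimal sketch of the two abstractions, written against today's tensorflow_probability packaging rather than the namespaces current when the paper was written; the particular distribution and bijector chain are illustrative.

```python
# Minimal sketch of the Distribution and Bijector abstractions; the specific
# distribution and bijector chain are illustrative choices.
import tensorflow as tf
import tensorflow_probability as tfp

tfd = tfp.distributions
tfb = tfp.bijectors

# Distributions: fast sampling and numerically stable statistics.
normal = tfd.Normal(loc=0., scale=1.)
x = normal.sample(5)
log_density = normal.log_prob(x)

# Bijectors: composable, volume-tracking transformations. Chaining a scale
# with a softplus turns the Gaussian into a positive-valued distribution,
# and log_prob automatically accounts for the Jacobian of the transform.
chain = tfb.Chain([tfb.Softplus(), tfb.Scale(2.)])
transformed = tfd.TransformedDistribution(distribution=normal, bijector=chain)
y = transformed.sample(5)
log_density_y = transformed.log_prob(y)
```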