Brian Patton
Authored Publications
Sequential Monte Carlo Learning for Time Series Structure Discovery
Feras Saad
Matthew D. Hoffman
Vikash Mansinghka
Proceedings of the 40th International Conference on Machine Learning (2023), pp. 29473-29489
This paper presents a new approach to automatically discovering accurate
models of complex time series data. Working within a Bayesian nonparametric
prior over a symbolic space of Gaussian process time series models, we
present a novel structure learning algorithm that integrates sequential
Monte Carlo (SMC) and involutive MCMC for highly effective posterior
inference. Our method can be used both in "online" settings, where new
data is incorporated sequentially in time, and in "offline" settings, by
using nested subsets of historical data to anneal the posterior. Empirical
measurements on a variety of real-world time series show that our method
can deliver 10x-100x runtime speedups over previous MCMC and greedy-search
structure learning algorithms for the same model family. We use our method
to perform the first large-scale evaluation of Gaussian process time series
structure learning on a widely used benchmark of 1,428 monthly econometric
datasets, showing that our method discovers sensible models that deliver
more accurate point forecasts and interval forecasts over multiple horizons
as compared to prominent statistical and neural baselines that struggle on
this challenging data.
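To make the data-annealing idea concrete, the following is a minimal NumPy sketch of SMC over a symbolic kernel space. It is an illustration under strong simplifying assumptions, not the paper's algorithm: the three-kernel grammar, the random_move proposal, and the plain Metropolis rejuvenation step stand in for the paper's richer Gaussian process model space, structure prior, and involutive MCMC moves.

```python
import numpy as np

# Illustrative kernel grammar: a structure is a nested tuple such as
# ("+", ("SE",), ("PER",)). All hyperparameters are fixed for brevity.
BASE = [("SE",), ("LIN",), ("PER",)]

def kernel_matrix(expr, x):
    d = x[:, None] - x[None, :]
    if expr[0] == "SE":
        return np.exp(-0.5 * d**2)
    if expr[0] == "LIN":
        return np.outer(x, x)
    if expr[0] == "PER":
        return np.exp(-2.0 * np.sin(np.pi * d)**2)
    left, right = kernel_matrix(expr[1], x), kernel_matrix(expr[2], x)
    return left + right if expr[0] == "+" else left * right

def log_marginal(expr, x, y, noise=0.1):
    # GP log marginal likelihood: log N(y | 0, K + noise * I).
    L = np.linalg.cholesky(kernel_matrix(expr, x) + noise * np.eye(len(x)))
    a = np.linalg.solve(L, y)
    return -0.5 * a @ a - np.log(np.diag(L)).sum() - 0.5 * len(x) * np.log(2 * np.pi)

def random_move(expr, rng):
    # Toy structure proposal: combine with a base kernel, or restart.
    if rng.random() < 0.5:
        op = "+" if rng.random() < 0.5 else "*"
        return (op, expr, BASE[rng.integers(len(BASE))])
    return BASE[rng.integers(len(BASE))]

def smc_structure_search(x, y, n_particles=50, n_steps=5, seed=0):
    rng = np.random.default_rng(seed)
    particles = [BASE[rng.integers(len(BASE))] for _ in range(n_particles)]
    prev_logml = np.zeros(n_particles)
    # "Offline" mode: anneal the posterior via nested subsets of the data.
    for t in np.linspace(len(x) // n_steps, len(x), n_steps).astype(int):
        xs, ys = x[:t], y[:t]
        logml = np.array([log_marginal(p, xs, ys) for p in particles])
        logw = logml - prev_logml                 # incremental SMC weights
        w = np.exp(logw - logw.max()); w /= w.sum()
        idx = rng.choice(n_particles, size=n_particles, p=w)  # resample
        particles, logml = [particles[i] for i in idx], logml[idx]
        # Toy Metropolis rejuvenation over structures (ignores proposal
        # asymmetry and the structure prior; illustration only).
        for i in range(n_particles):
            q = random_move(particles[i], rng)
            lq = log_marginal(q, xs, ys)
            if np.log(rng.random()) < lq - logml[i]:
                particles[i], logml[i] = q, lq
        prev_logml = logml
    return particles[int(np.argmax(logml))]

x = np.linspace(0.0, 4.0, 80)
y = np.sin(2 * np.pi * x) + 0.1 * x
print(smc_structure_search(x, y))
```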
Automatically batching control-intensive programs for modern accelerators
Alexey Radul
Dougal Maclaurin
Matthew D. Hoffman
Third Conference on Systems and Machine Learning, Austin, TX (2020)
We present a general approach to batching arbitrary computations for
GPU and TPU accelerators. We demonstrate the effectiveness of our
method with orders-of-magnitude speedups on the No U-Turn Sampler
(NUTS), a workhorse algorithm in Bayesian statistics. The central
challenge of batching NUTS and other Markov chain Monte Carlo
algorithms is data-dependent control flow and recursion. We overcome
this by mechanically transforming a single-example implementation into
a form that explicitly tracks the current program point for each batch
member, and only steps forward those in the same place. We present
two different batching algorithms: a simpler, previously published one
that inherits recursion from the host Python, and a more complex,
novel one that implements recursion directly and can batch across it.
We implement these batching methods as a general program
transformation on Python source. Both the batching system and the
NUTS implementation presented here are available as part of the
popular TensorFlow Probability software package.
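The core difficulty and the core trick are easiest to see on a toy while loop. The sketch below is a hand-written illustration of the idea, not the paper's source transformation (which performs this rewrite mechanically on Python programs): each batch member carries an active/done flag playing the role of its program counter, and each iteration advances only the members still at the loop head.

```python
import numpy as np

# Single-example program with data-dependent control flow.
def single(x, threshold):
    steps = 0
    while x < threshold:
        x = x * 2.0
        steps += 1
    return steps

# Batched form: step only the members whose "program counter" is still
# at the loop head; the others pass through unchanged.
def batched(x, threshold):
    x = np.asarray(x, dtype=float).copy()
    steps = np.zeros(x.shape, dtype=int)
    active = x < threshold
    while active.any():
        x = np.where(active, x * 2.0, x)
        steps = np.where(active, steps + 1, steps)
        active = x < threshold       # recompute per-member program points
    return steps

xs = np.array([1.0, 3.0, 100.0])
ths = np.array([10.0, 10.0, 10.0])
assert (batched(xs, ths) == [single(a, b) for a, b in zip(xs, ths)]).all()
```

Each batched iteration does wasted work for already-finished members, but on SIMD hardware such as GPUs and TPUs that cost is far smaller than running each example separately.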
Universal Sound Separation
Ilya Kavalerov
Jonathan Le Roux
IEEE Workshop on Applications of Signal Processing to Audio and Acoustics (WASPAA) (2019)
Recent deep learning approaches have achieved impressive performance on speech enhancement and separation tasks. However, these approaches have not been investigated for separating mixtures of arbitrary sounds of different types, a task we refer to as universal sound separation, and it is unknown whether performance on speech tasks carries over to non-speech tasks. To study this question, we develop a universal dataset of mixtures containing arbitrary sounds, and use it to investigate the space of mask-based separation architectures, varying both the overall network architecture and the framewise analysis-synthesis basis for signal transformations. These network architectures include convolutional long short-term memory networks and time-dilated convolution stacks inspired by the recent success of time-domain enhancement networks like ConvTasNet. For the latter architecture, we also propose novel modifications that further improve separation performance. In terms of the framewise analysis-synthesis basis, we explore using either a short-time Fourier transform (STFT) or a learnable basis, as used in ConvTasNet, and for both of these bases, we examine the effect of window size. In particular, for STFTs, we find that longer windows (25-50 ms) work best for speech/non-speech separation, while shorter windows (2.5 ms) work best for arbitrary sounds. For learnable bases, shorter windows (2.5 ms) work best on all tasks. Surprisingly, for universal sound separation, STFTs outperform learnable bases. Our best methods produce an improvement in scale-invariant signal-to-distortion ratio of over 13 dB for speech/non-speech separation and close to 10 dB for universal sound separation.
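As a rough sketch of the mask-based, analysis-synthesis setup these experiments share, the following TensorFlow snippet applies network-predicted masks in an STFT basis and resynthesizes per-source waveforms. Here mask_net is a placeholder for any of the mask-estimation networks compared above, and the 25 ms / 6.25 ms framing at an assumed 16 kHz sample rate is illustrative rather than the paper's exact configuration.

```python
import tensorflow as tf

FRAME_LEN, HOP = 400, 100   # 25 ms window, 6.25 ms hop at 16 kHz (assumed)
NUM_SOURCES = 2

def separate(mixture, mask_net):
    # mixture: [batch, samples] waveform.
    stft = tf.signal.stft(mixture, FRAME_LEN, HOP)        # [b, frames, bins]
    masks = mask_net(tf.abs(stft))    # [b, frames, bins, NUM_SOURCES] in [0, 1]
    masked = tf.cast(masks, tf.complex64) * stft[..., tf.newaxis]
    return tf.signal.inverse_stft(
        tf.transpose(masked, [0, 3, 1, 2]),               # [b, src, frames, bins]
        FRAME_LEN, HOP,
        window_fn=tf.signal.inverse_stft_window_fn(HOP))  # [b, src, samples]
```

Swapping the STFT pair for a learnable basis amounts to replacing the two tf.signal calls with a trainable encoder/decoder convolution pair, as in ConvTasNet.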
Differentiable Consistency Constraints for Improved Deep Speech Enhancement
Jeremy Thorpe
Michael Chinen
IEEE International Conference on Acoustics, Speech, and Signal Processing (2019)
In recent years, deep networks have led to dramatic improvements in speech enhancement by framing it as a data-driven pattern recognition problem. In many modern enhancement systems, large amounts of data are used to train a deep network to estimate masks for complex-valued short-time Fourier transforms (STFTs) to suppress noise and preserve speech. However, current masking approaches often neglect two important constraints: STFT consistency and mixture consistency. Without STFT consistency, the system’s output is not necessarily the STFT of a time-domain signal, and without mixture consistency, the sum of the estimated sources does not necessarily equal the input mixture. Furthermore, the only previous approaches that apply mixture consistency use real-valued masks; mixture consistency has been ignored for complex-valued masks. In this paper, we show that STFT consistency and mixture consistency can be jointly imposed by adding simple differentiable projection layers to the enhancement network. These layers are compatible with real or complex-valued masks. Using both of these constraints with complex-valued masks provides a 0.7 dB increase in scale-invariant signal-to-distortion ratio (SI-SDR) on a large dataset of speech corrupted by a wide variety of nonstationary noise across a range of input SNRs.
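Both projections are simple enough to sketch directly. The minimal TensorFlow versions below assume [batch, sources, samples] estimates and redistribute the mixture residual equally among sources (weighted variants are also possible); since both are ordinary tensor ops, gradients flow through them during training.

```python
import tensorflow as tf

def mixture_consistency(estimates, mixture):
    # estimates: [batch, sources, samples]; mixture: [batch, samples].
    # Add an equal share of the residual to each source so the estimates
    # sum exactly to the input mixture.
    residual = mixture - tf.reduce_sum(estimates, axis=1)
    n = tf.cast(tf.shape(estimates)[1], estimates.dtype)
    return estimates + residual[:, tf.newaxis, :] / n

def stft_consistency(stft_est, frame_length=400, frame_step=100):
    # Project an arbitrary complex T-F estimate onto the set of consistent
    # STFTs: resynthesize a waveform, then re-analyze it.
    wav = tf.signal.inverse_stft(
        stft_est, frame_length, frame_step,
        window_fn=tf.signal.inverse_stft_window_fn(frame_step))
    return tf.signal.stft(wav, frame_length, frame_step)
```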
Estimating the Spectral Density of Large Implicit Matrices
Ryan Adams
Jeffrey Pennington
Matthew Johnson
Jamie Smith
Yaniv Ovadia
James Saunderson
arXiv (2018)
Many important problems are characterized by the eigenvalues of a large matrix. For example, the difficulty of many optimization problems, such as those arising from the fitting of large models in statistics and machine learning, can be investigated via the spectrum of the Hessian of the empirical loss function. Network data can be understood via the eigenstructure of a graph Laplacian matrix using spectral graph theory. Quantum simulations and other many-body problems are often characterized via the eigenvalues of the solution space, as are various dynamic systems. However, naive eigenvalue estimation is computationally expensive even when the matrix can be represented; in many of these situations the matrix is so large as to only be available implicitly via products with vectors. Even worse, one may only have noisy estimates of such matrix vector products. In this work, we combine several different techniques for randomized estimation and show that it is possible to construct unbiased estimators to answer a broad class of questions about the spectra of such implicit matrices, even in the presence of noise. We validate these methods on large-scale problems in which graph theory and random matrix theory provide ground truth.
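The entry point to such estimators is randomized probing: given only a matrix-vector product oracle, Hutchinson-style estimates of the spectral moments tr(A^k) are unbiased, and moments in turn constrain the spectral density. The NumPy sketch below shows just this first ingredient; the paper's estimators go considerably further (handling noisy products and producing full density estimates).

```python
import numpy as np

def moment_estimates(matvec, dim, order, num_probes=32, seed=0):
    """Unbiased estimates of tr(A^k), k = 0..order, from matvecs alone."""
    rng = np.random.default_rng(seed)
    moments = np.zeros(order + 1)
    for _ in range(num_probes):
        v = rng.choice([-1.0, 1.0], size=dim)    # Rademacher probe vector
        w = v.copy()
        for k in range(1, order + 1):
            w = matvec(w)                        # w = A^k v after k applications
            moments[k] += v @ w                  # E[v^T A^k v] = tr(A^k)
    moments /= num_probes
    moments[0] = dim                             # tr(I) is known exactly
    return moments

# Usage: compare against exact traces for a small explicit symmetric matrix.
A = np.random.default_rng(1).normal(size=(200, 200)); A = (A + A.T) / 2
est = moment_estimates(lambda u: A @ u, dim=200, order=4)
exact = [np.trace(np.linalg.matrix_power(A, k)) for k in range(5)]
```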
Exploring Tradeoffs in Models for Low-Latency Speech Enhancement
Jeremy Thorpe
Michael Chinen
Proceedings of the 16th International Workshop on Acoustic Signal Enhancement (2018)
We explore a variety of configurations of neural networks for one- and
two-channel spectrogram-mask-based speech enhancement. Our best model improves on
state-of-the-art performance on the CHiME2 speech enhancement task.
We examine trade-offs among non-causal lookahead, compute work, and parameter count versus enhancement performance, and find that zero-lookahead models perform, on average, only 0.5 dB worse than our best bidirectional model. Further, 200 milliseconds of lookahead is sufficient to come within about 0.2 dB of that model.
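One simple way to realize a fixed lookahead in an otherwise causal model is to delay the output: the mask for frame t is read from the network's output at frame t + L, so each prediction sees L frames of future input. The Keras sketch below is hypothetical; the layer sizes and 20-frame delay are invented and do not reproduce the paper's models or sweep.

```python
import tensorflow as tf

def lookahead_mask_net(num_bins=257, units=256, lookahead=20):
    spec = tf.keras.Input(shape=(None, num_bins))        # [b, frames, bins]
    h = tf.keras.layers.LSTM(units, return_sequences=True)(spec)  # causal
    mask = tf.keras.layers.Dense(num_bins, activation="sigmoid")(h)
    # Shift outputs back by `lookahead` frames (zero-padding the tail), so
    # the emitted mask for frame t depends on inputs up to frame t + lookahead.
    delayed = tf.keras.layers.Lambda(
        lambda m: tf.pad(m[:, lookahead:, :],
                         [[0, 0], [0, lookahead], [0, 0]]))(mask)
    return tf.keras.Model(spec, delayed)
```

Setting lookahead to zero recovers a fully causal model, and replacing the LSTM with a bidirectional one gives the unbounded-lookahead reference point.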
TensorFlow Distributions
Josh Dillon
Dustin Tran
Eugene Brevdo
Srinivas Vasudevan
Dave Moore
Alex Alemi
Matt Hoffman
Workshop on Probabilistic Programming Languages, Semantics, and Systems (PPS 2018) (2017)
The TensorFlow Distributions library implements a vision of probability theory adapted to the modern deep-learning paradigm of end-to-end differentiable computation. Building on two basic abstractions, it offers flexible building blocks for probabilistic computation. Distributions provide fast, numerically stable methods for generating samples and computing statistics, e.g., log density. Bijectors provide composable volume-tracking transformations with automatic caching. Together these enable modular construction of high dimensional distributions and transformations not possible with previous libraries (e.g., pixelCNNs, autoregressive flows, and reversible residual networks). They are the workhorse behind deep probabilistic programming systems like Edward and empower fast black-box inference in probabilistic models built on deep-network components. TensorFlow Distributions has proven an important part of the TensorFlow toolkit within Google and in the broader deep learning community.
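The two abstractions compose directly in code. The snippet below uses the current tfp API (tfb.Shift and tfb.Scale come from later TFP releases than this paper) to push a base Distribution through a Chain of Bijectors, yielding a new Distribution with exact sampling and log densities.

```python
import tensorflow as tf
import tensorflow_probability as tfp

tfd, tfb = tfp.distributions, tfp.bijectors

# A Distribution pushed through a Chain of Bijectors is itself a
# Distribution: sampling uses the forward transform, and log_prob uses
# the inverse plus its log-determinant-Jacobian (tracked automatically).
base = tfd.MultivariateNormalDiag(loc=tf.zeros(2))
flow = tfb.Chain([tfb.Shift([1.0, -1.0]), tfb.Scale([2.0, 0.5])])  # y = s*x + t
dist = tfd.TransformedDistribution(distribution=base, bijector=flow)

x = dist.sample(5)         # [5, 2] samples
lp = dist.log_prob(x)      # [5] exact log densities
```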
AutoMOS: Learning a non-intrusive assessor of naturalness-of-speech
Yannis Agiomyrgiannakis
NIPS 2016 End-to-end Learning for Speech and Audio Processing Workshop (to appear)
Developers of text-to-speech synthesizers (TTS) often make use of
human raters to assess the quality of synthesized speech. We
demonstrate that we can model human raters' mean opinion scores
(MOS) of synthesized speech using a deep recurrent neural network
whose inputs consist solely of a raw waveform. Our best models
provide utterance-level estimates of MOS only moderately inferior to
sampled human ratings, as shown by Pearson and Spearman
correlations. When multiple utterances are scored and averaged,
a scenario common in synthesizer quality assessment,
we achieve correlations comparable to those of human raters.
This model has a number of applications, such as the
ability to automatically explore the parameter space of a speech
synthesizer without requiring a human-in-the-loop.
We explore a method of probing what the models have learned.
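As a rough illustration of the setup, here is a hypothetical Keras skeleton that maps a raw waveform to a scalar MOS estimate via framing, a recurrent layer, and utterance-level pooling. The framing parameters and layer sizes are invented; this is not the AutoMOS architecture.

```python
import tensorflow as tf

def mos_model(frame_len=400, hop=160):
    wav = tf.keras.Input(shape=(None,))                     # [b, samples]
    frames = tf.keras.layers.Lambda(
        lambda w: tf.signal.frame(w, frame_len, hop))(wav)  # [b, T, frame_len]
    h = tf.keras.layers.LSTM(128, return_sequences=True)(frames)
    h = tf.keras.layers.GlobalAveragePooling1D()(h)         # utterance pooling
    mos = tf.keras.layers.Dense(1)(h)                       # predicted MOS
    return tf.keras.Model(wav, mos)
```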