Ananya Misra
Authored Publications
Large-scale ASR Domain Adaptation by Self- and Semi-supervised Learning
David Qiu
ICASSP (2022) (to appear)
Abstract
Self- and semi-supervised learning methods have been actively investigated to reduce the amount of labeled training data or to enhance model performance. However, these approaches have mostly focused on in-domain performance on public datasets. In this study, we combine self- and semi-supervised learning methods to solve the unseen-domain adaptation problem in a large-scale production setting for an online ASR model. We demonstrate that using the source-domain data together with a small fraction (3%) of the target-domain data can close the performance gap relative to a full-data baseline, yielding a relative 13.5% WER improvement on target-domain data.
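As a rough illustration of the semi-supervised half of this recipe, the sketch below mixes source-domain utterances with a small pool of teacher-labeled target-domain utterances in each training batch; the corpus names, batch size, and the treatment of the 3% figure as a per-batch mixing ratio are assumptions, not the paper's exact setup.

```python
import random

# Hypothetical corpora: supervised source-domain pairs and a small pool of
# target-domain audio labeled by a teacher model (pseudo-labels).
source_utts = [(f"src_audio_{i}", f"src_transcript_{i}") for i in range(1000)]
target_utts = [(f"tgt_audio_{i}", f"teacher_label_{i}") for i in range(30)]

def mixed_batch(batch_size: int = 32, target_fraction: float = 0.03):
    """Sample a batch mixing source-domain data with a small fraction of
    pseudo-labeled target-domain data (illustrative ratio only)."""
    n_target = max(1, round(batch_size * target_fraction))
    batch = random.sample(source_utts, batch_size - n_target)
    batch += random.choices(target_utts, k=n_target)
    random.shuffle(batch)
    return batch

for audio, transcript in mixed_batch():
    pass  # each (audio, transcript) pair would feed one ASR training step
```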
Improving Streaming ASR with Non-streaming Model Distillation on Unsupervised Data
Chung-Cheng Chiu
Liangliang Cao
Ruoming Pang
Thibault Doutre
Wei Han
Yu Zhang
Zhiyun Lu
ICASSP 2021 (to appear)
Abstract
Streaming end-to-end Automatic Speech Recognition (ASR) models are widely used on smart speakers and in on-device applications. Since these models are expected to transcribe speech with minimal latency, they are constrained to be causal, with no future context, unlike their non-streaming counterparts. As a result, streaming models almost always perform worse than non-streaming models.
We propose a novel and effective learning method that leverages a non-streaming ASR model as a teacher to generate transcripts on an arbitrarily large data set, which are then used to distill knowledge into streaming ASR models. This way, we are able to scale the training of streaming models to 3 million hours of YouTube audio. Experiments show that our approach can significantly reduce the Word Error Rate (WER) of RNN-T models in four languages trained on YouTube data.
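A minimal sketch of the distillation loop described here, assuming hypothetical NonStreamingTeacher and StreamingStudent interfaces (the real systems are RNN-T models trained at far larger scale):

```python
from typing import Iterable

class NonStreamingTeacher:
    """Hypothetical full-context model used only to generate transcripts."""
    def transcribe(self, audio) -> str:
        raise NotImplementedError  # decode with full (non-causal) context

class StreamingStudent:
    """Hypothetical causal model trained on the teacher's transcripts."""
    def train_step(self, audio, transcript: str) -> float:
        raise NotImplementedError  # e.g. one loss update on (audio, transcript)

def distill(teacher: NonStreamingTeacher,
            student: StreamingStudent,
            unlabeled_audio: Iterable) -> None:
    """Teacher transcripts on unlabeled audio become the student's targets."""
    for audio in unlabeled_audio:
        pseudo_transcript = teacher.transcribe(audio)
        student.train_step(audio, pseudo_transcript)
```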
Spectral distortion model for training phase-sensitive deep-neural networks for far-field speech recognition
Chanwoo Kim
Rajeev Nongpiur
ICASSP 2018 (2018)
Abstract
In this paper, we present an algorithm which introduces phase perturbation to the training database when training phase-sensitive deep neural-network models. Traditional features such as log-mel or cepstral features do not have any phase-relevant information. However, more recent features such as raw-waveform or complex spectra features contain phase-relevant information. Phase-sensitive features have the advantage of being able to detect differences in time of arrival across different microphone channels or frequency bands. However, compared to magnitude-based features, phase information is more sensitive to various kinds of distortions such as variations in microphone characteristics, reverberation, and so on. For traditional magnitude-based features, it is widely known that adding noise or reverberation, often called Multistyle TRaining (MTR), improves robustness. In a similar spirit, we propose an algorithm which introduces spectral distortion to make the deep-learning model more robust against phase distortion. We call these approaches Spectral-Distortion TRaining (SDTR) and Phase-Distortion TRaining (PDTR). In our experiments using a training set consisting of 22 million utterances, this approach has proved quite successful in reducing Word Error Rates on test sets recorded with real microphones on Google Home.
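As a loose illustration of phase-distortion-style augmentation, the snippet below jitters the phase of each utterance's complex spectrum before feature extraction; the uniform jitter range and STFT settings are assumptions rather than the paper's actual distortion model.

```python
import numpy as np
from scipy.signal import stft, istft

def phase_perturb(waveform: np.ndarray, sample_rate: int = 16000,
                  max_phase_jitter: float = 0.2) -> np.ndarray:
    """Add random phase jitter to a training utterance's complex spectrum
    (a rough sketch; the jitter distribution is an assumption)."""
    _, _, spec = stft(waveform, fs=sample_rate, nperseg=512)
    jitter = np.random.uniform(-max_phase_jitter, max_phase_jitter, spec.shape)
    perturbed = spec * np.exp(1j * jitter)
    _, distorted = istft(perturbed, fs=sample_rate, nperseg=512)
    return distorted[:len(waveform)]

# Example: perturb one second of synthetic audio before feature extraction.
clean = np.random.randn(16000)
augmented = phase_perturb(clean)
```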
Domain Adaptation Using Factorized Hidden Layer for Robust Automatic Speech Recognition
Interspeech (2018), pp. 892-896
Abstract
Domain robustness is a challenging problem for automatic speech recognition (ASR). In this paper, we consider speech data collected for different applications as separate domains and investigate the robustness of acoustic models trained on multi-domain data when evaluated on unseen domains. Specifically, we use the Factorized Hidden Layer (FHL) as a compact low-rank representation to adapt a multi-domain ASR system to unseen domains. Experimental results on two unseen domains show that FHL is a more effective adaptation method than selectively fine-tuning part of the network, without dramatically increasing the number of model parameters. Furthermore, we find that using singular value decomposition to initialize the low-rank bases of an FHL model leads to faster convergence and improved performance.
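A minimal sketch of the FHL idea: the effective layer weight is a shared weight plus a domain-weighted sum of low-rank bases. Shapes, names, and the random initialization below are assumptions (the paper initializes the bases via SVD).

```python
import numpy as np

def fhl_weight(w_shared: np.ndarray,
               bases_u: np.ndarray,    # (k, out_dim, r) low-rank left factors
               bases_v: np.ndarray,    # (k, r, in_dim) low-rank right factors
               domain_weights: np.ndarray) -> np.ndarray:
    """Compose an effective layer weight from a shared weight plus a
    domain-weighted sum of low-rank bases (shapes are illustrative)."""
    adaptation = sum(d * (u @ v)
                     for d, u, v in zip(domain_weights, bases_u, bases_v))
    return w_shared + adaptation

# Tiny example: 3 rank-2 bases adapting a 64x32 layer for one domain.
out_dim, in_dim, rank, k = 64, 32, 2, 3
w = fhl_weight(np.zeros((out_dim, in_dim)),
               np.random.randn(k, out_dim, rank),
               np.random.randn(k, rank, in_dim),
               np.array([0.5, -0.1, 0.3]))
```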
Toward Domain-Invariant Speech Recognition via Large Scale Training
Mohamed (Mo) Elfeky
SLT, IEEE (2018)
Abstract
Current state-of-the-art automatic speech recognition systems are trained to work in specific ‘domains’, defined based on factors like application, sampling rate and codec. When such recognizers are used in conditions that do not match the training domain, performance significantly drops. In this paper, we explore the idea of building a single domain-invariant model that works well for varied use-cases. We do this by combining large scale training data from multiple application domains. Our final system is trained using 162,000 hours of speech. Additionally, each utterance is artificially distorted during training to simulate effects like background noise, codec distortion, and sampling rates. Our results show that, even at such a scale, a model thus trained works almost as well as those fine-tuned to specific subsets: A single model can be trained to be robust to multiple application domains, and other variations like codecs and noise. Such models also generalize better to unseen conditions and allow for rapid adaptation to new domains – we show that by using as little as 10 hours of data for adapting a domain-invariant model to a new domain, we can match performance of a domain-specific model trained from scratch using roughly 70 times as much data. We also highlight some of the limitations of such models and areas that need addressing in future work.
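The sketch below applies a randomly sampled distortion to each utterance, standing in for the paper's pipeline of background noise, codec distortion, and sampling-rate variation; the SNR range and the 8 kHz round trip are illustrative assumptions.

```python
import numpy as np
from scipy.signal import resample

def random_distort(waveform: np.ndarray, sample_rate: int = 16000) -> np.ndarray:
    """Apply a randomly sampled distortion to one training utterance
    (an illustrative stand-in for the paper's distortion pipeline)."""
    # Add background noise at a random SNR.
    snr_db = np.random.uniform(5.0, 25.0)
    noise = np.random.randn(len(waveform))
    noise *= np.std(waveform) / (np.std(noise) * 10 ** (snr_db / 20.0))
    distorted = waveform + noise
    # Randomly simulate a lower sampling rate (e.g. telephony-style 8 kHz).
    if np.random.rand() < 0.5:
        low = resample(distorted, len(distorted) // 2)
        distorted = resample(low, len(distorted))
    return distorted

augmented = random_distort(np.random.randn(16000))  # 1 s of synthetic audio
```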
Acoustic Modeling for Google Home
Joe Caroselli
Kean Chin
Chanwoo Kim
Mitchel Weintraub
Erik McDermott
Interspeech 2017 (2017)
Abstract
This paper describes the technical and system-building advances made to the Google Home multichannel speech recognition system, which was launched in November 2016. Technical advances include an adaptive dereverberation frontend, the use of neural network models that do multichannel processing jointly with acoustic modeling, and grid LSTMs to model frequency variations. On the system level, improvements include adapting the model using Google Home specific data. We present results on a variety of multichannel sets. The combination of technical and system advances results in a WER reduction of over 18% relative compared to the current production system.
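At a very high level, the recognizer composes a dereverberation frontend with a network that performs multichannel processing jointly with acoustic modeling; the stub below only sketches that composition, with all stage internals omitted and all names hypothetical.

```python
import numpy as np

def dereverberate(multichannel_audio: np.ndarray) -> np.ndarray:
    """Placeholder for the adaptive dereverberation frontend."""
    return multichannel_audio

def joint_multichannel_am(multichannel_audio: np.ndarray) -> np.ndarray:
    """Placeholder for the network that does multichannel processing jointly
    with acoustic modeling (returns fake per-frame posteriors)."""
    num_frames, num_outputs = 100, 42  # illustrative sizes only
    return np.zeros((num_frames, num_outputs))

def recognize(multichannel_audio: np.ndarray) -> np.ndarray:
    return joint_multichannel_am(dereverberate(multichannel_audio))

posteriors = recognize(np.random.randn(2, 16000))  # 2 mics, 1 s of audio
```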
Generation of large-scale simulated utterances in virtual rooms to train deep-neural networks for far-field speech recognition in Google Home
Chanwoo Kim
Kean Chin
Thad Hughes
Interspeech 2017 (2017), pp. 379-383
Abstract
We describe the structure and application of an acoustic room simulator to generate large-scale simulated data for training deep neural networks for far-field speech recognition. The system simulates millions of different room dimensions, a wide distribution of reverberation times and signal-to-noise ratios, and a range of microphone and sound source locations. We start with a relatively clean training set as the source and artificially create simulated data by randomly sampling a noise configuration for every new training example. As a result, the acoustic model is trained using examples that are virtually never repeated. We evaluate performance of this approach based on room simulation using a factored complex Fast Fourier Transform (CFFT) acoustic model introduced in our earlier work, which uses CFFT layers and LSTM AMs for joint multichannel processing and acoustic modeling. Results show that the simulator-driven approach is quite effective in obtaining large improvements not only in simulated test conditions, but also in real/rerecorded conditions. This room simulation system has been employed in training acoustic models including the ones for the recently released Google Home.
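A toy version of the per-example sampling such a simulator performs is sketched below; the numeric ranges and placement rule are illustrative assumptions, not the system's actual distributions.

```python
import numpy as np

def sample_room_config(rng: np.random.Generator) -> dict:
    """Randomly sample one simulated-room configuration per training example
    (illustrative ranges only)."""
    room = rng.uniform(low=[3.0, 3.0, 2.5], high=[10.0, 8.0, 4.0])  # W, L, H in m
    rt60 = rng.uniform(0.0, 0.9)                                    # reverberation time, s
    snr_db = rng.uniform(0.0, 30.0)
    mic_pos = rng.uniform(0.1, 0.9, size=3) * room                  # microphone location
    src_pos = rng.uniform(0.1, 0.9, size=3) * room                  # sound-source location
    return {"room": room, "rt60": rt60, "snr_db": snr_db,
            "mic": mic_pos, "source": src_pos}

rng = np.random.default_rng(0)
config = sample_room_config(rng)
# A room impulse response generated from `config` would then be convolved with
# a clean utterance, and noise added at `snr_db`, to create one training example.
```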
Multichannel Signal Processing with Deep Neural Networks for Automatic Speech Recognition
Kean Chin
Chanwoo Kim
IEEE/ACM Transactions on Audio, Speech, and Language Processing, 25 (2017), pp. 965-979
Abstract
Multichannel ASR systems commonly separate speech enhancement, including localization, beamforming and postfiltering, from acoustic modeling. In this paper, we perform multichannel enhancement jointly with acoustic modeling in a deep neural network framework. Inspired by beamforming, which leverages differences in the fine time structure of the signal at different microphones to filter energy arriving from different directions, we explore modeling the raw time-domain waveform directly. We introduce a neural network architecture which performs multichannel filtering in the first layer of the network and show that this network learns to be robust to varying target speaker direction of arrival, performing as well as a model that is given oracle knowledge of the true target speaker direction. Next, we show how performance can be improved by factoring the first layer to separate the multichannel spatial filtering operation from a single channel filterbank which computes a frequency decomposition. We also introduce an adaptive variant, which updates the spatial filter coefficients at each time frame based on the previous inputs. Finally, we demonstrate that these approaches can be implemented more efficiently in the frequency domain. Overall, we find that such multichannel neural networks give a relative word error rate improvement of more than 5% compared to a traditional beamforming-based multichannel ASR system and more than 10% compared to a single channel waveform model.
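A bare-bones PyTorch sketch of the first-layer idea: a single convolution whose filters span all microphone channels, so spatial and spectral filtering are learned together. The filter count, length, and hop are illustrative, and the factored and adaptive variants are not shown.

```python
import torch
import torch.nn as nn

class MultichannelFrontend(nn.Module):
    """First-layer multichannel filtering on raw waveforms (a sketch of the
    general idea; sizes are illustrative)."""
    def __init__(self, num_channels: int = 2, num_filters: int = 128,
                 filter_len: int = 400, hop: int = 160):
        super().__init__()
        # Each output filter spans all input channels, so spatial and
        # spectral filtering are learned jointly.
        self.filters = nn.Conv1d(num_channels, num_filters,
                                 kernel_size=filter_len, stride=hop)

    def forward(self, waveforms: torch.Tensor) -> torch.Tensor:
        # waveforms: (batch, channels, samples) -> (batch, filters, frames)
        return torch.relu(self.filters(waveforms))

frontend = MultichannelFrontend()
features = frontend(torch.randn(4, 2, 16000))  # 4 utterances, 2 mics, 1 s each
```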
Raw Multichannel Processing Using Deep Neural Networks
Kean Chin
Chanwoo Kim
New Era for Robust Speech Recognition: Exploiting Deep Learning, Springer (2017)
Abstract
Multichannel ASR systems commonly separate speech enhancement, including localization, beamforming and postfiltering, from acoustic modeling. In this chapter, we perform multi-channel enhancement jointly with acoustic modeling in a deep neural network framework. Inspired by beamforming, which leverages differences in the fine time structure of the signal at different microphones to filter energy arriving from different directions, we explore modeling the raw time-domain waveform directly. We introduce a neural network architecture which performs multichannel filtering in the first layer of the network and show that this network learns to be robust to varying target speaker direction of arrival, performing as well as a model that is given oracle knowledge of the true target speaker direction. Next, we show how performance can be improved by factoring the first layer to separate the multichannel spatial filtering operation from a single channel filterbank which computes a frequency decomposition. We also introduce an adaptive variant, which updates the spatial filter coefficients at each time frame based on the previous inputs. Finally we demonstrate that these approaches can be implemented more efficiently in the frequency domain. Overall, we find that such multichannel neural networks give a relative word error rate improvement of more than 5% compared to a traditional beamforming-based multichannel ASR system and more than 10% compared to a single channel waveform model.
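For the frequency-domain formulation mentioned at the end of this chapter, the sketch below applies per-channel, per-frequency complex weights to a multichannel STFT and sums across channels; in the chapter these weights are learned (and, in the adaptive variant, updated every frame), whereas here they are fixed averaging weights for illustration.

```python
import numpy as np
from scipy.signal import stft

def filter_and_sum_freq(multichannel_audio: np.ndarray,
                        weights: np.ndarray,
                        sample_rate: int = 16000) -> np.ndarray:
    """Frequency-domain filter-and-sum: apply per-channel, per-frequency
    complex weights and sum across channels (weights given, not learned)."""
    _, _, spec = stft(multichannel_audio, fs=sample_rate, nperseg=512)
    # spec: (channels, freq_bins, frames); weights: (channels, freq_bins)
    return np.sum(weights[:, :, None] * spec, axis=0)

audio = np.random.randn(2, 16000)               # 2 microphones, 1 s of audio
w = np.ones((2, 257), dtype=complex) / 2.0      # trivial averaging weights
enhanced_spec = filter_and_sum_freq(audio, w)   # (257, frames) enhanced STFT
```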
Abstract
Recently, it was shown that the performance of supervised time-frequency masking based robust automatic speech recognition techniques can be improved by training them jointly with the acoustic model [1]. The system in [1], termed deep neural network based joint adaptive training, used fully-connected feed-forward deep neural networks for estimating time-frequency masks and for acoustic modeling; stacked log mel spectra were used as features and training minimized a cross-entropy loss. In this work, we extend such jointly trained systems in several ways. First, we use recurrent neural networks based on long short-term memory (LSTM) units, which allows the use of unstacked features and simplifies joint optimization. Next, we use a sequence-discriminative training criterion for optimizing parameters. Finally, we conduct experiments on large-scale data and show that joint adaptive training can provide gains over a strong baseline. Systematic evaluations on noisy voice-search data show relative improvements ranging from 2% at 15 dB to 5.4% at -5 dB over a sequence-discriminative, multi-condition trained LSTM acoustic model.
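A compact sketch of the joint setup: an LSTM estimates a time-frequency mask, the masked features feed an LSTM acoustic model, and a single loss backpropagates through both. Layer sizes, the output inventory, and the frame-level cross-entropy used here (instead of the sequence-discriminative criterion) are all assumptions.

```python
import torch
import torch.nn as nn

class JointMaskAndAcousticModel(nn.Module):
    """Mask-estimation LSTM feeding a masked-feature acoustic model, trained
    jointly with one loss (sizes and criterion are illustrative)."""
    def __init__(self, feat_dim: int = 80, hidden: int = 256, num_states: int = 42):
        super().__init__()
        self.mask_lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.mask_out = nn.Linear(hidden, feat_dim)
        self.am_lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.am_out = nn.Linear(hidden, num_states)

    def forward(self, log_mel: torch.Tensor) -> torch.Tensor:
        # log_mel: (batch, frames, feat_dim)
        h, _ = self.mask_lstm(log_mel)
        mask = torch.sigmoid(self.mask_out(h))   # time-frequency mask in [0, 1]
        enhanced = mask * log_mel                # masked (enhanced) features
        g, _ = self.am_lstm(enhanced)
        return self.am_out(g)                    # per-frame state logits

model = JointMaskAndAcousticModel()
logits = model(torch.randn(4, 200, 80))
loss = nn.functional.cross_entropy(logits.reshape(-1, 42),
                                   torch.randint(42, (4 * 200,)))
loss.backward()  # gradients flow through both the AM and the mask estimator
```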