- Yanzhang He
- Tara Sainath
- Rohit Prabhavalkar
- Ian McGraw
- Raziel Alvarez
- Ding Zhao
- David Rybach
- Anjuli Kannan
- Yonghui Wu
- Ruoming Pang
- Qiao Liang
- Deepti Bhatia
- Yuan Shangguan
- Bo Li
- Golan Pundak
- Khe Chai Sim
- Tom Bagby
- Shuo-yiin Chang
- Kanishka Rao
- Alex Gruenstein
Abstract
End-to-end (E2E) models, which directly predict output character sequences given input speech, are good candidates for on-device speech recognition. E2E models, however, present numerous challenges: In order to be truly useful, such models must decode speech utterances in a streaming fashion, in real time; they must be robust to the long tail of use cases; they must be able to leverage user-specific context (e.g., contact lists); and above all, they must be extremely accurate. In this work, we describe our efforts at building an E2E speech recognizer using a recurrent neural network transducer. In experimental evaluations, we find that the proposed approach can outperform a conventional CTC-based model in terms of both latency and accuracy in a number of evaluation categories.
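The streaming property of the recurrent neural network transducer (RNN-T) comes from its frame-synchronous decoding: at each encoder frame the model emits labels until it predicts a blank symbol, then advances to the next frame, so output is produced as audio arrives. The sketch below illustrates that greedy decoding loop with hypothetical random weights standing in for trained encoder, prediction, and joint networks; it is not the paper's implementation, only the control flow.

```python
import numpy as np

# Illustrative sketch of RNN-T greedy decoding. All weights and sizes
# here are hypothetical stand-ins for trained network parameters.
np.random.seed(0)

V = 5   # vocabulary size (index 0 = blank symbol)
D = 8   # hidden size shared by encoder and prediction network

W_enc = np.random.randn(D, D) * 0.1   # joint-network encoder projection
W_pred = np.random.randn(V, D) * 0.1  # embedding of the last emitted label
W_out = np.random.randn(D, V) * 0.1   # output projection to vocabulary logits

def joint(enc_t, pred_u):
    """Joint network: combine one encoder frame with the prediction-network
    state and return logits over the vocabulary."""
    h = np.tanh(enc_t @ W_enc + pred_u)
    return h @ W_out

def greedy_decode(encoder_frames, max_symbols_per_frame=3):
    """Frame-synchronous greedy decoding: at each audio frame, emit labels
    until blank (0) is predicted, then move to the next frame. Because no
    future frames are consulted, decoding can run as audio streams in."""
    hyp = []
    pred_state = W_pred[0]  # prediction state before any label is emitted
    for enc_t in encoder_frames:
        for _ in range(max_symbols_per_frame):
            logits = joint(enc_t, pred_state)
            k = int(np.argmax(logits))
            if k == 0:              # blank: advance to the next frame
                break
            hyp.append(k)
            pred_state = W_pred[k]  # condition on the newly emitted label
    return hyp

# Fake 4-frame encoder output for a short utterance.
frames = np.random.randn(4, D)
hyp = greedy_decode(frames)
```

In a real system the random projections would be replaced by trained LSTM stacks, and greedy search by beam search, but the per-frame emit-until-blank loop is what allows the recognizer to produce words with low latency rather than waiting for the full utterance.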