- Joris Pelemans
- Noam M. Shazeer
- Ciprian Chelba
Proceedings of Interspeech 2015, ISCA, pp. 1433-1437
In this paper we present a pruning algorithm and experimental results for our recently proposed Sparse Non-negative Matrix (SNM) family of language models (LMs).
We also illustrate a method for converting an SNM LM to ARPA back-off format, which can be readily used in a single-pass decoder for Automatic Speech Recognition.
Note: we have uncovered a bug in the experimental setup for SNM pruning; see the Errata section for correct results.
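The ARPA back-off format referred to in the abstract is a plain-text table of n-gram log10 probabilities and back-off weights. The sketch below is only an illustration of that target layout, not the paper's conversion method: the function name write_arpa and the toy model contents are hypothetical, and how the probabilities and back-off weights are actually derived from an SNM model is exactly what the paper describes.

# Illustrative sketch only: writes pre-computed n-gram log10 probabilities and
# back-off weights in ARPA back-off format. Deriving these values from an SNM
# model is the paper's contribution and is not reproduced here.

def write_arpa(ngrams, path):
    """ngrams: dict mapping order -> dict mapping an n-gram tuple of words
    to (log10_prob, log10_backoff or None)."""
    with open(path, "w", encoding="utf-8") as f:
        # Header: counts per n-gram order.
        f.write("\\data\\\n")
        for order in sorted(ngrams):
            f.write(f"ngram {order}={len(ngrams[order])}\n")
        f.write("\n")
        # One section per order: log10 prob, the n-gram, optional back-off weight.
        for order in sorted(ngrams):
            f.write(f"\\{order}-grams:\n")
            for words, (logp, bow) in sorted(ngrams[order].items()):
                line = f"{logp:.6f}\t{' '.join(words)}"
                if bow is not None:
                    line += f"\t{bow:.6f}"
                f.write(line + "\n")
            f.write("\n")
        f.write("\\end\\\n")

if __name__ == "__main__":
    # Hypothetical toy model: unigrams carry back-off weights, bigrams do not.
    toy = {
        1: {("<s>",): (-99.0, -0.30103),
            ("</s>",): (-1.0, None),
            ("hello",): (-1.0, -0.30103)},
        2: {("<s>", "hello"): (-0.30103, None),
            ("hello", "</s>"): (-0.30103, None)},
    }
    write_arpa(toy, "toy.arpa")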