Google Research

Pruning Sparse Non-negative Matrix N-gram Language Models

Proceedings of Interspeech 2015, ISCA, pp. 1433-1437

Abstract

In this paper we present a pruning algorithm and experimental results for our recently proposed Sparse Non-negative Matrix (SNM) family of language models (LMs).

Erratum: we have uncovered a bug in the experimental setup for SNM pruning; see the Errata section for corrected results.

We also illustrate a method for converting an SNM LM to ARPA back-off format, which can be readily used in a single-pass decoder for Automatic Speech Recognition.
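For context only: the ARPA back-off format referenced above is a standard plain-text format that lists, for each n-gram, a base-10 log-probability and (for lower orders) a back-off weight. The Python sketch below writes a toy bigram model in this format; the values are placeholders and write_arpa is a hypothetical helper, not the conversion procedure described in the paper.

# Toy unigram/bigram model with placeholder base-10 log-probabilities and
# back-off weights, as used in the standard ARPA back-off format.
unigrams = {            # word -> (log10 P(word), log10 back-off weight)
    "<s>":  (-99.0, -0.30),
    "</s>": (-1.10, 0.00),
    "the":  (-0.80, -0.25),
    "cat":  (-1.40, -0.20),
}
bigrams = {             # (w1, w2) -> log10 P(w2 | w1)
    ("<s>", "the"):  -0.50,
    ("the", "cat"):  -0.70,
    ("cat", "</s>"): -0.60,
}

def write_arpa(path, unigrams, bigrams):
    # Hypothetical helper: writes the toy model as a plain-text ARPA file.
    with open(path, "w") as f:
        f.write("\\data\\\n")
        f.write("ngram 1=%d\n" % len(unigrams))
        f.write("ngram 2=%d\n\n" % len(bigrams))
        f.write("\\1-grams:\n")
        for w, (logp, bow) in unigrams.items():
            f.write("%.4f\t%s\t%.4f\n" % (logp, w, bow))
        f.write("\n\\2-grams:\n")
        for (w1, w2), logp in bigrams.items():
            f.write("%.4f\t%s %s\n" % (logp, w1, w2))
        f.write("\n\\end\\\n")

write_arpa("toy.arpa", unigrams, bigrams)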
