
Entropy Based Pruning of Backoff MaxEnt Language Models with Contextual Features

  • Diamantino A. Caseiro
  • Pat Rondon
  • Tongzhou Chen
Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, Canada (2018)

Abstract

In this paper, we present a pruning technique for maximum entropy (MaxEnt) language models. It is based on computing the exact entropy loss when removing each feature from the model, and it explicitly supports backoff features by replacing each removed feature with its backoff. The algorithm computes the loss on the training data, so it is not restricted to models with n-gram-like features, allowing models with arbitrary features, including long-range skips, triggers, and contextual features such as device location.
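
As a rough illustration of the criterion, the following self-contained Python sketch ranks candidate features by the exact training-data log-likelihood loss incurred when each is pruned and scoring falls through to its backoff feature. The toy `features` function, `backoff` map, vocabulary, and weights are all hypothetical stand-ins, not the paper's implementation:

```python
import math

# Toy vocabulary and training events: (history, next_word) pairs.
VOCAB = ["a", "b", "c"]
TRAIN = [("a", "b"), ("a", "b"), ("b", "c"), ("c", "a"), ("a", "c")]

def features(history, word):
    """Active features for an event: a bigram plus its unigram backoff."""
    return [("bigram", history, word), ("unigram", word)]

def backoff(feature):
    """Hypothetical backoff map: bigram features back off to unigrams."""
    if feature[0] == "bigram":
        return ("unigram", feature[2])
    return None

def log_prob(weights, history, word):
    """log p(word | history) under the MaxEnt model."""
    scores = {w: sum(weights.get(f, 0.0) for f in features(history, w))
              for w in VOCAB}
    m = max(scores.values())
    log_z = m + math.log(sum(math.exp(s - m) for s in scores.values()))
    return scores[word] - log_z

def train_ll(weights):
    """Total log-likelihood of the training events."""
    return sum(log_prob(weights, h, w) for h, w in TRAIN)

def removal_loss(weights, feature):
    """Exact training-data log-likelihood loss from pruning `feature`;
    its backoff feature stays in the model and takes over scoring."""
    pruned = {f: v for f, v in weights.items() if f != feature}
    return train_ll(weights) - train_ll(pruned)

# Example weights (in practice these come from MaxEnt training).
weights = {("bigram", "a", "b"): 0.9, ("bigram", "b", "c"): 0.4,
           ("unigram", "b"): 0.3, ("unigram", "c"): 0.1}

# Only features with a backoff are prunable; drop the cheapest first
# until the model reaches the desired size.
candidates = [f for f in weights if backoff(f) is not None]
for f in sorted(candidates, key=lambda f: removal_loss(weights, f)):
    print(f, round(removal_loss(weights, f), 4))
```

Because the loss is evaluated directly on training events rather than on an n-gram lattice, the same loop applies unchanged to models whose features include skips, triggers, or contextual signals.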

Results on the 1-billion word corpus show large perplexity improvements relative to frequency-pruned models of comparable size. Automatic speech recognition (ASR) experiments show up to 0.2% absolute WER improvements in a large-scale cloud-based mobile ASR system for Italian.
