Launch and Iterate: Reducing Prediction Churn

Quentin Cormier
Mahdi Milani Fard
Maya Gupta
NIPS (2016)

Abstract

Practical applications of machine learning often involve successive training iterations with ever-improving features and increasing numbers of training examples. Ideally, changes in the output of any new model should only be improvements (wins) over the previous iteration, but in practice the predictions may change neutrally for many examples, resulting in extra net-zero wins and losses, which we refer to as churn. These changes in the predictions are problematic for the usability of some applications, and make it harder to measure whether a change is statistically significantly positive. In this paper, we formulate the problem and present a stabilization operator that regularizes a classifier towards a previous classifier. We use a Markov chain Monte Carlo stabilization operator to produce a model with more consistent predictions without degrading accuracy. We investigate the properties of the proposal with theoretical analysis. Experiments on benchmark datasets for three different classification algorithms demonstrate the method and the range of churn reduction it can provide.
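
The two quantities the abstract names, churn between successive model versions and a stabilization operator that pulls a new model toward its predecessor, can be made concrete with a short sketch. Below is a minimal NumPy illustration; the function names and the simple label-blending rule are assumptions for exposition, not the paper's exact RCP or Markov chain Monte Carlo operators.

```python
import numpy as np

def churn(preds_old, preds_new):
    """Fraction of examples on which two model versions disagree.

    This is the abstract's informal notion of churn: predictions that
    change between successive iterations, whether or not each change
    is an improvement.
    """
    preds_old = np.asarray(preds_old)
    preds_new = np.asarray(preds_new)
    return float(np.mean(preds_old != preds_new))

def stabilized_labels(y, preds_old, alpha=0.3, rng=None):
    """Hypothetical stabilization step (illustrative only): with
    probability alpha, replace a training label by the previous
    model's prediction, regularizing the retrained classifier toward
    the old one at the cost of fitting the true labels less tightly.
    """
    rng = np.random.default_rng() if rng is None else rng
    y = np.asarray(y).copy()
    mask = rng.random(len(y)) < alpha
    y[mask] = np.asarray(preds_old)[mask]
    return y

# Example: 1 of 5 predictions changed between versions -> churn = 0.2.
print(churn([0, 1, 1, 0, 1], [0, 1, 0, 0, 1]))
```

In a sketch like this, a larger alpha trades fit to the true labels for lower churn against the previous model; the paper analyzes this trade-off theoretically and measures it empirically.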
