- Kevin Canini
- Andy Cotter
- Mahdi Milani Fard
- Maya Gupta
- Jan Pfeifer
Abstract
In many real-world machine learning problems, some inputs are known to be positively (or negatively) related to the output, and in such cases constraining the trained model to respect that monotonic relationship can provide regularization and make the model more interpretable. However, flexible monotonic functions are computationally challenging to learn beyond a few features. We break through this barrier by learning ensembles of monotonic calibrated look-up tables (lattices). A key contribution is an automated algorithm for selecting feature subsets for the ensemble's base models. We demonstrate that, compared to random forests, these ensembles achieve similar or better accuracy while providing guaranteed monotonicity consistent with prior knowledge, smaller model size, and faster evaluation.
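To make the base model concrete, the following is a minimal sketch (not the authors' implementation) of a two-feature monotonic lattice: a look-up table evaluated by multilinear interpolation, with monotonicity in both inputs guaranteed by construction via cumulative sums of nonnegative increments. All function and variable names here are hypothetical illustrations.

```python
import numpy as np

def multilinear_interp_2d(theta, x):
    """Evaluate a 2-feature lattice (look-up table) at a point x in [0, 1]^2.

    theta: (m, n) array of lattice vertex values on a regular grid.
    x: (2,) point; coordinates are mapped to a grid cell and the four
       surrounding vertices are blended with multilinear weights.
    """
    m, n = theta.shape
    # Map x from [0, 1] into continuous grid coordinates.
    u, v = x[0] * (m - 1), x[1] * (n - 1)
    i, j = min(int(u), m - 2), min(int(v), n - 2)
    du, dv = u - i, v - j
    return ((1 - du) * (1 - dv) * theta[i, j]
            + du * (1 - dv) * theta[i + 1, j]
            + (1 - du) * dv * theta[i, j + 1]
            + du * dv * theta[i + 1, j + 1])

def monotone_lattice(deltas):
    """Build lattice vertex values that are nondecreasing in both features.

    deltas: (m, n) array of free parameters. Squaring makes each increment
    nonnegative, and cumulative sums along both axes then guarantee that the
    interpolated function is monotonically increasing in both inputs.
    """
    increments = deltas ** 2
    return np.cumsum(np.cumsum(increments, axis=0), axis=1)

# Tiny demo: a 3x3 monotonic lattice; outputs increase as feature 0 increases.
rng = np.random.default_rng(0)
theta = monotone_lattice(rng.normal(size=(3, 3)))
for x in ([0.1, 0.2], [0.5, 0.2], [0.9, 0.2]):
    print(x, multilinear_interp_2d(theta, np.array(x)))
```

In the paper's setting, each base model couples such a lattice with learned per-feature calibrators over a small feature subset, and the ensemble averages many such models; the sketch above shows only the monotonic lattice itself.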