Diminishing Returns Shape Constraints for Interpretability and Regularization
Abstract
We investigate machine learning models that can provide diminishing returns
and accelerating returns guarantees to capture prior knowledge or policies
about how outputs should depend on inputs. We show that one can build
flexible, nonlinear, multi-dimensional models using lattice functions with any
combination of concavity/convexity and monotonicity constraints on any
subsets of features, and compare to new shape-constrained neural networks.
We demonstrate on real-world examples that these shape-constrained models
can provide tuning-free regularization and improve model understandability.
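To make the kind of model the abstract describes concrete, below is a minimal sketch of a calibrated lattice with mixed shape constraints, written with the open-source TensorFlow Lattice library. The library, the feature names ("ad_spend", "quality"), and the keypoint ranges are illustrative assumptions, not details taken from this excerpt; the paper's own implementation may differ.

```python
import numpy as np
import tensorflow as tf
import tensorflow_lattice as tfl

# Hypothetical inputs: "ad_spend" should show diminishing returns,
# "quality" should only be monotonically increasing.
ad_spend = tf.keras.Input(shape=(1,), name='ad_spend')
quality = tf.keras.Input(shape=(1,), name='quality')

# Per-feature piecewise-linear calibrators. The first is constrained to be
# increasing and concave (diminishing returns); the second increasing only.
cal_spend = tfl.layers.PWLCalibration(
    input_keypoints=np.linspace(0.0, 1.0, 10),
    output_min=0.0, output_max=1.0,
    monotonicity='increasing',
    convexity='concave')(ad_spend)
cal_quality = tfl.layers.PWLCalibration(
    input_keypoints=np.linspace(0.0, 1.0, 10),
    output_min=0.0, output_max=1.0,
    monotonicity='increasing')(quality)

# A 2-D lattice that is monotonically increasing in both calibrated features.
output = tfl.layers.Lattice(
    lattice_sizes=[2, 2],
    monotonicities=['increasing', 'increasing'],
    output_min=0.0, output_max=1.0)([cal_spend, cal_quality])

model = tf.keras.Model(inputs=[ad_spend, quality], outputs=output)
model.compile(loss='mse', optimizer=tf.keras.optimizers.Adam(0.01))
```

In this sketch the concavity constraint lives in the per-feature calibrator while monotonicity is enforced in both the calibrators and the lattice, which is one common way to combine the per-feature shape constraints the abstract refers to.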