Deep Lattice Networks and Partial Monotonic Functions
Abstract
We propose learning deep models that are monotonic with respect to a user-specified
set of inputs by alternating layers of linear embeddings, ensembles of
lattices, and calibrators (piecewise linear functions), with appropriate constraints
for monotonicity, and jointly training the resulting network. We implement the
layers and projections with new computational graph nodes in TensorFlow and use
the ADAM optimizer and batched stochastic gradients. Experiments on benchmark
and real-world datasets show that six-layer monotonic deep lattice networks achieve
state-of-the-art performance for classification and regression with monotonicity
guarantees.
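
To make the layer types concrete, the following is a minimal NumPy sketch of the three building blocks the abstract names (piecewise-linear calibrator, linear embedding, lattice) and how non-negativity and ordering constraints on their parameters yield monotonicity. The function names, keypoints, and parameter values are illustrative assumptions, not the paper's implementation, which uses TensorFlow graph nodes and projected stochastic gradient training.

```python
import numpy as np

def pwl_calibrator(x, input_keypoints, deltas):
    """1-D piecewise-linear calibrator. Output keypoints are a cumulative sum
    of non-negative deltas, so the calibrator is non-decreasing in x."""
    output_keypoints = np.cumsum(np.abs(deltas))
    return np.interp(x, input_keypoints, output_keypoints)

def monotonic_linear(z, weights):
    """Linear embedding; constraining weights to be non-negative preserves
    monotonicity of the inputs being embedded."""
    return z @ np.abs(weights)

def lattice_2x2(u, theta):
    """2x2 lattice: bilinear interpolation of 4 vertex parameters over [0,1]^2.
    Monotonic in each input when the vertex parameters are non-decreasing
    along that axis (theta[1,:] >= theta[0,:] and theta[:,1] >= theta[:,0])."""
    a = np.clip(u[:, 0], 0.0, 1.0)
    b = np.clip(u[:, 1], 0.0, 1.0)
    return ((1 - a) * (1 - b) * theta[0, 0] + (1 - a) * b * theta[0, 1]
            + a * (1 - b) * theta[1, 0] + a * b * theta[1, 1])

# Toy forward pass with hand-set (hypothetical) parameters.
rng = np.random.default_rng(0)
x = rng.uniform(size=(8, 2))                     # two monotonic raw features
kp = np.linspace(0.0, 1.0, 5)                    # calibrator input keypoints
deltas = np.array([0.1, 0.3, 0.3, 0.2, 0.1])     # free params, projected to be >= 0
z = np.stack([pwl_calibrator(x[:, j], kp, deltas) for j in range(2)], axis=1)
u = monotonic_linear(z, np.array([[0.7, 0.2],
                                  [0.3, 0.8]]))  # non-negative mixing weights
theta = np.array([[0.0, 0.4],                    # vertex params, non-decreasing
                  [0.5, 1.0]])                   # along both lattice axes
y = lattice_2x2(u, theta)
print(y)                                         # non-decreasing in each raw feature
```

In the paper's deep lattice network these layers are stacked and trained jointly; the sketch above only illustrates why the composition of a monotone calibrator, a non-negative linear embedding, and a monotone lattice remains monotone in the selected inputs.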