Contextual Explanation Networks
Abstract
Modern learning algorithms excel at producing accurate but complex models of the data. However, deploying such models in the real world requires extra care: we must ensure their reliability, robustness, and absence of undesired biases. This motivates the development of models that are equally accurate but can also be easily inspected and assessed beyond their predictive performance. To this end, we introduce \emph{contextual explanation networks} ({\CENs})---a class of architectures that learn to predict by generating and utilizing intermediate, simplified probabilistic models. Specifically, {\CENs} generate parameters for intermediate graphical models, which are further used for prediction and play the role of explanations. Contrary to existing \emph{post-hoc} model-explanation tools, {\CENs} learn to predict and to explain simultaneously. Our approach offers two major advantages: (i) for each prediction, a valid, instance-specific explanation is generated with no computational overhead, and (ii) prediction via explanation acts as a regularizer and boosts performance in data-scarce settings. We analyze the proposed framework theoretically and experimentally. Our results on image and text classification and survival analysis tasks demonstrate that {\CENs} are not only competitive with state-of-the-art methods but also offer additional insights behind each prediction that can be valuable for decision support. We also show that while post-hoc methods may produce misleading explanations in certain cases, {\CENs} are consistent and allow us to detect such cases systematically.