Training Confidence-calibrated Classifiers for Detecting Out-of-Distribution Samples

Kimin Lee
Honglak Lee
Kibok Lee
Jinwoo Shin
ICLR (2018)

Abstract

The problem of detecting whether a test sample comes from the in-distribution (i.e., the training distribution of a classifier) or from an out-of-distribution sufficiently different from it arises in many real-world machine learning applications. However, state-of-the-art deep neural networks are known to be highly overconfident in their predictions, i.e., they do not distinguish in- from out-of-distribution inputs. Recently, several threshold-based detectors have been proposed to handle this issue given pre-trained neural classifiers. However, the performance of these prior works depends heavily on how the classifiers are trained, since they focus only on improving the inference procedure. In this paper, we develop a novel training method for classifiers so that such inference algorithms can work better. In particular, we suggest two additional terms added to the original loss (e.g., cross entropy). The first term forces the classifier to be less confident on samples from out-of-distribution, and the second (implicitly) generates the most effective training samples for the first. In essence, our method jointly trains both classification and generative neural networks for out-of-distribution detection. We demonstrate its effectiveness using deep convolutional neural networks on various popular image datasets.
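To make the two additional loss terms concrete, the sketch below illustrates one way the joint objective could be assembled in PyTorch. It is a minimal sketch under stated assumptions, not the authors' released implementation: the function name joint_losses, the weight beta, and the non-saturating GAN formulation are illustrative choices that do not appear in the abstract.

import torch
import torch.nn.functional as F


def joint_losses(classifier, generator, discriminator, x_in, y_in, z, beta=1.0):
    # Hypothetical sketch of the joint objective described in the abstract.
    # `classifier`, `generator`, and `discriminator` are user-supplied
    # nn.Modules; `beta` is an assumed weight on the confidence term.

    # Standard cross-entropy on in-distribution samples.
    logits_in = classifier(x_in)
    num_classes = logits_in.size(1)
    ce_loss = F.cross_entropy(logits_in, y_in)

    # Generated samples intended to act as out-of-distribution examples.
    x_gen = generator(z)

    # Confidence term: push the classifier's predictive distribution on the
    # generated samples toward the uniform distribution, i.e. KL(U || p(y|x)).
    log_probs_gen = F.log_softmax(classifier(x_gen), dim=1)
    uniform = torch.full_like(log_probs_gen, 1.0 / num_classes)
    conf_loss = F.kl_div(log_probs_gen, uniform, reduction='batchmean')

    # GAN terms keep the generated samples close to the in-distribution
    # boundary (a standard non-saturating formulation is assumed here).
    d_real = discriminator(x_in)
    d_fake = discriminator(x_gen)
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) +
              F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    g_loss = F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))

    classifier_loss = ce_loss + beta * conf_loss
    generator_loss = g_loss + beta * conf_loss
    return classifier_loss, generator_loss, d_loss

In an actual training loop the classifier, generator, and discriminator would typically be updated alternately, with x_gen detached for the discriminator step; the sketch only shows how the confidence term couples the classifier and the generator.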