Distance-Based Learning from Errors for Confidence Calibration

Chen Xing
ICLR (2020)
Abstract

Deep neural networks (DNNs) yield poorly-calibrated confidence estimates when their raw predicted posteriors are used as confidence scores. Towards obtaining well-calibrated confidence estimates, we propose a novel framework, named 'Distance-Based Learning from Errors' (DBLE). DBLE is based on two fundamental principles: (i) learning a representation space where distances correspond to the relatedness of samples, and (ii) efficient feedback from training errors to accurately model distances to ground-truth class centers. For (i), we adapt prototypical learning such that pairwise distances determine the predicted posteriors during training, grouping related samples, ideally from the same class, together. For (ii), we propose a simple yet effective solution: updating the confidence model only on the samples that yielded inaccurate decisions during training, with the goal of efficiently fitting a model that represents the variance of prediction in the decision manifold. On four datasets, we demonstrate that DBLE significantly outperforms alternative single-DNN approaches in confidence calibration. DBLE is on par with ensemble approaches that contain multiple DNNs, without even doubling the training time and with only a negligible increase in the number of parameters.
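To make the two principles concrete, below is a minimal, self-contained PyTorch sketch, not the authors' implementation. The names prototypical_posteriors, error_driven_confidence_step, and confidence_net are hypothetical, and the second function is a simplified stand-in: the paper fits a model of prediction variance, whereas this sketch regresses the distance to the true-class center directly for illustration.

    import torch
    import torch.nn.functional as F

    def prototypical_posteriors(z, prototypes):
        """Principle (i): class posteriors from negative squared Euclidean
        distances to per-class prototypes (centroids), softmax-normalized.
        z: [batch, dim] embeddings; prototypes: [num_classes, dim]."""
        sq_dist = torch.cdist(z, prototypes) ** 2      # [batch, num_classes]
        return F.softmax(-sq_dist, dim=-1)

    def error_driven_confidence_step(z, labels, prototypes,
                                     confidence_net, optimizer):
        """Principle (ii), simplified: update a confidence model only on the
        samples the classifier currently misclassifies, regressing the
        distance of each embedding to its ground-truth class center."""
        with torch.no_grad():
            posteriors = prototypical_posteriors(z, prototypes)
            wrong = posteriors.argmax(dim=-1) != labels  # misclassified mask
        if not wrong.any():
            return None                                  # no errors in batch
        z_err = z[wrong].detach()
        # Distance to the ground-truth centroid is the regression target.
        target = (z_err - prototypes[labels[wrong]]).norm(dim=-1)
        pred = confidence_net(z_err).squeeze(-1)         # predicted distance
        loss = F.mse_loss(pred, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

In this sketch, the output of confidence_net at test time would serve as the confidence score for the classifier's prediction: larger predicted distances to the true-class center indicate lower confidence.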
