Learning and Evaluating Representations for Deep One-Class Classification
Abstract
We present a two-stage framework for deep one-class classification: in the first stage, we learn self-supervised deep representations from one-class data, and in the second stage, we build a classifier on the learned representations using generative or discriminative models. In particular, we present a novel distribution-augmented contrastive learning that extends training distributions via data augmentation to obstruct the uniformity of vanilla contrastive representations, yielding representations better suited to one-class classification. Moreover, we argue that classifiers inspired by the statistical perspective, whether generative or discriminative, are more effective than existing approaches such as averaging normality scores from a surrogate classifier. In experiments, we demonstrate state-of-the-art performance on visual-domain one-class classification benchmarks. Beyond learning better representations, the proposed framework permits building one-class classifiers that are more faithful to the target task. Finally, we present visual explanations confirming that the decision-making process of our deep one-class classifier is intuitive to humans.
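To make the two-stage pipeline concrete, the following is a minimal sketch, not the paper's implementation: it assumes a hypothetical stage-1 encoder `encode` (standing in for a network trained with distribution-augmented contrastive learning on the one-class data) and uses a scikit-learn kernel density estimator as one example of a stage-2 generative detector on the frozen representations.

```python
# Minimal sketch of the two-stage framework (illustrative only, not the paper's code).
# Assumption: `encode` is a placeholder for a stage-1 self-supervised encoder.
import numpy as np
from sklearn.neighbors import KernelDensity

def encode(images: np.ndarray) -> np.ndarray:
    """Placeholder for a stage-1 encoder trained with (distribution-augmented)
    contrastive learning; returns L2-normalized feature vectors."""
    feats = images.reshape(len(images), -1).astype(np.float64)
    return feats / np.linalg.norm(feats, axis=1, keepdims=True)

# Stage 1: obtain fixed representations of the one-class (normal) training set.
train_images = np.random.rand(256, 32, 32, 3)   # stand-in for real normal data
train_feats = encode(train_images)

# Stage 2: fit a shallow generative model (here, Gaussian KDE) on the representations.
kde = KernelDensity(kernel="gaussian", bandwidth=0.2).fit(train_feats)

# Score new samples: lower log-density under the one-class model => more anomalous.
test_images = np.random.rand(8, 32, 32, 3)
anomaly_scores = -kde.score_samples(encode(test_images))
print(anomaly_scores)
```

Other shallow detectors (e.g., nearest-neighbor distance or a one-class SVM) can be substituted in stage 2 without retraining the representation.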