ProtoAttend: Attention-based prototypical learning

Journal of Machine Learning Research (2020)

Abstract

We propose a novel, inherently interpretable machine learning method that bases decisions on a few relevant examples that we call prototypes. Our method, ProtoAttend, can be integrated into a wide range of neural network architectures, including pre-trained models. It utilizes an attention mechanism that relates the encoded representations to samples in order to determine prototypes. Without sacrificing the accuracy of the original model, ProtoAttend yields superior results in sample-based interpretability, confidence estimation, and distribution mismatch detection.
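To make the core idea concrete, the sketch below illustrates attention-based prototype selection in a highly simplified form: an encoded query attends over a database of encoded candidate samples, and the resulting attention weights serve as per-sample relevance scores whose largest entries pick out the prototypes. This is a minimal illustration, not the authors' implementation; all function names, shapes, and the use of scaled dot-product attention here are assumptions for exposition.

```python
# Minimal sketch (assumption-based, not the ProtoAttend implementation) of
# attention over a candidate database for prototype selection.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def prototype_attention(query_enc, candidate_encs, candidate_values):
    """Attend from one encoded query over a set of encoded candidates.

    query_enc:        (d,)   encoding of the input sample
    candidate_encs:   (n, d) encodings of the candidate (database) samples
    candidate_values: (n, k) candidate representations used for prediction

    Returns the attention weights (interpretable as prototype relevance)
    and the weighted combination of candidate values.
    """
    d = query_enc.shape[-1]
    scores = candidate_encs @ query_enc / np.sqrt(d)  # scaled dot-product scores
    weights = softmax(scores)                         # convex weights, sum to 1
    combined = weights @ candidate_values             # weighted mixture of candidates
    return weights, combined

# Toy usage: the candidates with the largest weights act as prototypes.
rng = np.random.default_rng(0)
q = rng.normal(size=16)
db_enc = rng.normal(size=(100, 16))
db_val = rng.normal(size=(100, 10))
w, out = prototype_attention(q, db_enc, db_val)
top_prototypes = np.argsort(w)[::-1][:3]  # indices of the most relevant samples
```

Because the weights form a convex combination, the magnitude of each weight directly indicates how much a given database sample contributed to the decision, which is what makes the explanation sample-based rather than feature-based.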
