Transfer and Marginalize: Explaining Away Label Noise with Privileged Information

ICML 2021 Workshop on Uncertainty & Robustness in Deep Learning (to appear)

Abstract

Supervised learning datasets often have privileged information: features that are available at training time but not at test time, e.g., the ID of the annotator who provided the label. We argue that privileged information is useful for explaining away label noise, thereby reducing the harmful impact of noisy labels. We develop a simple and efficient method for supervised neural networks: it transfers the knowledge learned with privileged information via weight sharing and approximately marginalizes over privileged information at test time. Our method, TRAM (TRansfer and Marginalize), has the same test-time computational cost as a model that does not use privileged information, and performs strongly on the CIFAR-10H and ImageNet benchmarks.
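To make the transfer-and-marginalize idea concrete, below is a minimal PyTorch sketch, not the authors' code: a shared backbone feeds two heads, one that additionally sees an embedding of the privileged information (annotator ID) during training, and a PI-free head used at test time. The module names, sizes, and the stop-gradient placement are assumptions inferred from the abstract.

```python
import torch
import torch.nn as nn

class TramSketch(nn.Module):
    """Illustrative TRAM-style model (assumed structure, not the paper's code):
    a shared backbone, a training-time head that also consumes privileged
    information (PI), and a PI-free head used at test time."""

    def __init__(self, num_classes: int, num_annotators: int,
                 feat_dim: int = 128, pi_dim: int = 16):
        super().__init__()
        # Shared backbone; a CNN/ResNet in practice, an MLP here for brevity
        # (input size assumes CIFAR-sized 32x32x3 images).
        self.backbone = nn.Sequential(
            nn.Flatten(), nn.Linear(32 * 32 * 3, feat_dim), nn.ReLU())
        # PI embedding, e.g. annotator ID -> dense vector.
        self.pi_embed = nn.Embedding(num_annotators, pi_dim)
        # Head that sees features + PI (training only).
        self.pi_head = nn.Linear(feat_dim + pi_dim, num_classes)
        # PI-free head, used at test time.
        self.no_pi_head = nn.Linear(feat_dim, num_classes)

    def forward(self, x, annotator_id=None):
        feats = self.backbone(x)
        if annotator_id is not None:  # training: PI is available
            pi = self.pi_embed(annotator_id)
            logits_pi = self.pi_head(torch.cat([feats, pi], dim=-1))
            # Assumption: detach so only the PI head's loss updates the
            # backbone; knowledge "transfers" through the shared weights,
            # while the PI head can attribute label noise to the annotator.
            logits_no_pi = self.no_pi_head(feats.detach())
            return logits_pi, logits_no_pi
        # Test time: PI is unavailable; the PI-free head serves as an
        # approximate marginalization over PI, at the same cost as a
        # standard network.
        return self.no_pi_head(feats)
```

In this sketch both heads would be trained with cross-entropy against the (possibly noisy) labels; at test time only the PI-free path runs, which matches the abstract's claim that the method adds no test-time computational cost over a model without privileged information.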
