Google Research

Deep Learning Through the Lens of Example Difficulty

NeurIPS 2021 (to appear)

Abstract

Existing work on understanding deep learning often employs measures that compress all data-dependent information into a few numbers. In this work, we argue that a perspective based on the role of individual data points can aid understanding of multiple aspects of deep learning. We introduce the concept of the layer at which a data point is learned in a deep neural network, building upon k-nearest neighbor probes in the hidden representations of the network. Our investigation reveals the relationship between layer learned and other known notions of example difficulty, such as iteration learned and consistency score. Our study further connects phenomena reported separately in the literature: early layers generalize while later layers memorize; networks converge from the input layer towards the output layer; and networks learn simple patterns first.
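The k-NN probe idea in the abstract can be illustrated with a minimal sketch. This is not the paper's exact definition (the paper's construction has additional details, e.g. how probe neighbors and agreement across layers are handled); here we simply use synthetic stand-ins for per-layer hidden features and define "layer learned" as the earliest layer whose leave-one-out k-NN probe predicts an example's label. All names (`layer_feats`, `knn_probe_predictions`, `layer_learned`) are illustrative assumptions, not from the paper.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-layer hidden representations of a network:
# layer_feats[l] has shape (n_examples, dim). In practice these would be
# collected via forward hooks on a trained model.
n, dim, n_layers = 200, 16, 5
labels = rng.integers(0, 2, size=n)
# Classes become more separated in deeper "layers".
layer_feats = [rng.normal(size=(n, dim)) + l * labels[:, None]
               for l in range(n_layers)]

def knn_probe_predictions(feats, labels, k=5):
    """Leave-one-out k-NN probe predictions in one representation space."""
    # Ask for k+1 neighbors so each point's own entry can be dropped.
    knn = KNeighborsClassifier(n_neighbors=k + 1).fit(feats, labels)
    neigh = knn.kneighbors(feats, return_distance=False)[:, 1:]
    votes = labels[neigh]                       # neighbor labels, shape (n, k)
    return (votes.mean(axis=1) > 0.5).astype(int)

# "Layer learned" (simplified): earliest layer whose probe already predicts
# the example's label; sentinel n_layers if no probe ever gets it right.
correct = np.stack([knn_probe_predictions(f, labels) == labels
                    for f in layer_feats])      # shape (n_layers, n)
layer_learned = np.where(correct.any(axis=0), correct.argmax(axis=0), n_layers)
```

Under this toy setup, examples whose class structure emerges only in deeper representations get a larger `layer_learned`, which is the intuition behind relating this quantity to example difficulty.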
