On Robustness and Transferability of Convolutional Neural Networks

Josip Djolonga
Jessica Yung
Michael Tschannen
Rob Romijnders
Lucas Beyer
Alexander Kolesnikov
Matthias Minderer
Dan Moldovan
Sylvain Gelly
Neil Houlsby
Xiaohua Zhai
Conference on Computer Vision and Pattern Recognition (2021)

Abstract

Modern deep convolutional neural networks (CNNs) are often criticized for their failure to generalize under distribution shifts. However, several recent breakthroughs in transfer learning suggest that these networks can cope with severe distribution shifts and successfully adapt to new tasks from a few training examples. In this work we revisit the out-of-distribution and transfer performance of modern image classification CNNs and investigate the impact of the pre-training data scale, the model scale, and the data preprocessing pipeline. We find that increasing both the training set and model sizes significantly improves robustness to distribution shifts. Furthermore, we show that, perhaps surprisingly, simple changes in the preprocessing, such as modifying the image resolution, can significantly mitigate robustness issues in some cases. Finally, we outline the shortcomings of existing robustness evaluation datasets and introduce a synthetic dataset for fine-grained robustness analysis.
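The abstract notes that changing the evaluation image resolution can mitigate robustness issues. As a minimal illustration of what such a preprocessing change involves (this is not the paper's code; the function name and nested-list image representation are assumptions for the sketch, and real pipelines would use a library such as PIL or tf.image), a nearest-neighbour resize can be written as:

```python
# Illustrative sketch: resizing an input "image" so a classifier can be
# evaluated at a different resolution than it was trained at.
# The image is a plain nested list of pixel values (hypothetical
# representation chosen to keep the example dependency-free).

def resize_nearest(image, new_h, new_w):
    """Resize a 2-D grid of pixel values with nearest-neighbour sampling."""
    old_h, old_w = len(image), len(image[0])
    return [
        [image[int(y * old_h / new_h)][int(x * old_w / new_w)]
         for x in range(new_w)]
        for y in range(new_h)
    ]

# A 2x2 "image" upsampled to 4x4, as one might do to evaluate a model
# at a higher test-time resolution.
img = [[1, 2],
       [3, 4]]
resized = resize_nearest(img, 4, 4)
# resized == [[1, 1, 2, 2], [1, 1, 2, 2], [3, 3, 4, 4], [3, 3, 4, 4]]
```

In practice the same idea is applied inside the input pipeline, where the choice of resize resolution and interpolation method becomes part of the preprocessing configuration being varied.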