
Supervised Transfer Learning at Scale for Medical Imaging

Aaron Loh
Basil Mustafa
Jan Freyberg
Patricia MacWilliams
Megan Wilson
Scott Mayer McKinney
Peggy Bui
Umesh Telang
arXiv (2021)

Abstract

Transfer learning is a standard building block of successful medical imaging models, yet previous work suggests that, at limited scales of pre-training data and model capacity, the benefits of transfer learning to medical imaging are insubstantial. In this work, we explore whether scaling up pre-training can improve transfer to medical tasks. In particular, we show that when using the Big Transfer recipe to further scale up pre-training, we can considerably improve transfer performance across three popular yet diverse medical imaging tasks: interpretation of chest radiographs, breast cancer detection from mammograms, and skin condition detection from smartphone images. Despite pre-training on unrelated source domains, we show that scaling up the model capacity and pre-training data yields performance improvements regardless of how much downstream medical data is available. Notably, we observe surprisingly large improvements in zero-shot generalisation under distribution shift. Probing and quantifying other aspects of model performance relevant to medical imaging and healthcare, we demonstrate that these gains do not come at the expense of model calibration or fairness.
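
To make the transfer setup concrete, the sketch below shows how a publicly released Big Transfer (BiT) backbone can be fine-tuned on a downstream medical classification task. It is illustrative only and is not the authors' exact pipeline: the TF-Hub module handle, the number of classes, the input resolution, and the optimiser settings are assumptions, and the exact module names and the BiT-HyperRule should be taken from the BiT release.

```python
# Minimal sketch of fine-tuning a BiT backbone on a downstream classification
# task. All specifics below (hub handle, class count, resolution, learning
# rate) are illustrative assumptions, not the paper's configuration.
import tensorflow as tf
import tensorflow_hub as hub

NUM_CLASSES = 5          # hypothetical number of target conditions
IMAGE_SIZE = (384, 384)  # fine-tuning typically uses a higher resolution than pre-training

# Load a BiT-M feature-vector backbone pre-trained on a large natural-image corpus.
backbone = hub.KerasLayer(
    "https://tfhub.dev/google/bit/m-r50x1/1",  # assumed module handle
    trainable=True)  # the whole network is fine-tuned, following the BiT recipe

model = tf.keras.Sequential([
    tf.keras.Input(shape=IMAGE_SIZE + (3,)),
    backbone,
    # New task head, zero-initialised as in the BiT recipe.
    tf.keras.layers.Dense(NUM_CLASSES, kernel_initializer="zeros"),
])

# SGD with momentum and a small learning rate is the usual BiT fine-tuning choice.
model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=3e-3, momentum=0.9),
    loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"])

# model.fit(train_dataset, validation_data=val_dataset, epochs=...)
```

In this setup, the pre-trained backbone and the freshly initialised head are trained jointly on the downstream medical data; scaling up the backbone and its pre-training corpus is what the paper varies when measuring transfer performance.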