This paper argues that the technical landscape of clinical machine learning is shifting in ways that destabilize these pervasive assumptions about the nature and causes of algorithmic bias. On the one hand, the dominant paradigm in clinical machine learning is specialist, in the sense that models are trained on biomedical datasets for particular clinical tasks such as diagnosis and treatment recommendation. On the other hand, the emerging paradigm is generalist, in the sense that general-purpose language models such as Google's BERT and Meta's OPT are increasingly being adapted for clinical use cases via fine-tuning on biomedical datasets. Many of these next-generation models deliver substantial performance gains over prior clinical models, but at the same time introduce novel kinds of algorithmic bias and complicate the explanatory relationship between algorithmic biases and biases in training data. This paper articulates in what respects biases in generalist models differ from biases in prior clinical models, and draws out practical recommendations for mitigating algorithmic bias in medical machine learning technologies built on generalist language models.