Beyond Predictions: Explainability and Learning from Machine Learning

Chih-Ying Deng
Akinori Mitani
Christina Chen
Lily Peng
Digital Eye Care and Teleophthalmology, Springer (2023)

Abstract

The intense interest in developing machine learning (ML) models for applications in ophthalmology has produced many potentially useful tools for disease detection, grading, and prognostication. Although many of these efforts have produced well-validated models, the inner workings of these methods may not be easily understood by clinicians, patients, or even ML practitioners. In this chapter, we focus on ML model explainability. We begin by highlighting the utility and importance of explainability, and then present a clinician-accessible explanation of commonly used methods and the types of insights they provide. Next, we present several case studies of ML research that incorporates explainability and describe the strengths as well as the limitations of these studies. Finally, we discuss the important work that lies ahead, and how explainability may eventually help push the frontiers of scientific knowledge by enabling human experts to learn from what the machine has learned.