Second Opinion Needed: Communicating Uncertainty in Medical Artificial Intelligence

Andrew Beam
Ben Kompa
NPJ Digital Medicine, 4 (2021)

Abstract

Artificial intelligence (AI) based on deep learning has made enormous progress on a wide variety of medical applications [1]. As these advances translate into real-world clinical decision tools, many are taking stock of the capabilities these systems still lack, especially in light of mixed results from prospective validation efforts [2]. Several promising extensions to the traditional deep learning framework have been proposed to improve the clinical utility and safety of medical AI. For example, many have argued [3,4] that medical AI must be imbued with notions of cause and effect to protect it from learning predictive rules based on spurious correlations rather than true disease etiology. While it seems clear that causal models have a key role to play in the future of medical AI, this viewpoint highlights a more basic deficiency that is readily addressable today. The missing ability is easily stated and easily understood: medical AI algorithms should be able to say “I don’t know” and abstain from providing a diagnosis when there is a large amount of uncertainty for a given patient. With this ability, additional human expertise can be sought or additional data can be collected to reduce the uncertainty and make a better diagnosis.
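To make the abstention idea concrete, the sketch below shows one minimal way a classifier could decline to diagnose: compute the predictive entropy of the model's class probabilities and return "I don't know" when it exceeds a threshold. This is an illustrative example, not the method proposed in the paper; the `diagnose_or_abstain` function, the entropy-based uncertainty measure, and the `threshold` value are all assumptions chosen for clarity.

```python
import numpy as np


def predictive_entropy(probs: np.ndarray) -> float:
    """Shannon entropy (in nats) of a predicted class distribution."""
    probs = np.clip(probs, 1e-12, 1.0)
    return float(-np.sum(probs * np.log(probs)))


def diagnose_or_abstain(probs: np.ndarray, threshold: float = 0.5):
    """Return the predicted class index, or None ("I don't know")
    when predictive entropy exceeds the abstention threshold."""
    if predictive_entropy(probs) > threshold:
        return None  # abstain: defer to a clinician or collect more data
    return int(np.argmax(probs))


# Example: a confident prediction versus an uncertain one.
confident = np.array([0.95, 0.03, 0.02])
uncertain = np.array([0.40, 0.35, 0.25])
print(diagnose_or_abstain(confident))  # -> 0 (diagnosis returned)
print(diagnose_or_abstain(uncertain))  # -> None (abstain)
```

In practice, the class probabilities would come from a trained model (e.g., averaged over ensemble members or Monte Carlo dropout samples), and the threshold would be tuned against the clinical cost of deferring versus misdiagnosing.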
