Marcin Sieniek
Marcin is a senior software engineer at Google Health. He currently leads a mammography engineering team tasked with furthering the breast cancer AI effort via research studies of the investigational AI model. He has published 10 peer-reviewed publications, including co-lead authorship of a paper in Nature. He received his PhD in Computer Science from AGH University of Science and Technology in Krakow, Poland, while also completing a second major in Business Administration at Cracow University of Economics. Prior to Google, he co-founded a VC-backed startup, MailGrupowy.pl.
Authored Publications
General Geospatial Inference with a Population Dynamics Foundation Model
Chaitanya Kamath
Prithul Sarker
Joydeep Paul
Yael Mayer
Sheila de Guia
Jamie McPike
Adam Boulanger
David Schottlander
Yao Xiao
Manjit Chakravarthy Manukonda
Monica Bharel
Von Nguyen
Luke Barrington
Niv Efron
Krish Eswaran
Shravya Shetty
(2024) (to appear)
Supporting the health and well-being of dynamic populations around the world requires governmental agencies, organizations, and researchers to understand and reason over complex relationships between human behavior and local contexts. This support includes identifying populations at elevated risk and gauging where to target limited aid resources. Traditional approaches to these classes of problems often entail developing manually curated, task-specific features and models to represent human behavior and the natural and built environment, which can be challenging to adapt to new, or even related tasks. To address this, we introduce the Population Dynamics Foundation Model (PDFM), which aims to capture the relationships between diverse data modalities and is applicable to a broad range of geospatial tasks. We first construct a geo-indexed dataset for postal codes and counties across the United States, capturing rich aggregated information on human behavior from maps, busyness, and aggregated search trends, and environmental factors such as weather and air quality. We then model this data and the complex relationships between locations using a graph neural network, producing embeddings that can be adapted to a wide range of downstream tasks using relatively simple models. We evaluate the effectiveness of our approach by benchmarking it on 27 downstream tasks spanning three distinct domains: health indicators, socioeconomic factors, and environmental measurements. The approach achieves state-of-the-art performance on geospatial interpolation across all tasks, surpassing existing satellite and geotagged image based location encoders. In addition, it achieves state-of-the-art performance in extrapolation and super-resolution for 25 of the 27 tasks. We also show that the PDFM can be combined with a state-of-the-art forecasting foundation model, TimesFM, to predict unemployment and poverty, achieving performance that surpasses fully supervised forecasting. 
The full set of embeddings and sample code are publicly available for researchers. In conclusion, we have demonstrated a general purpose approach to geospatial modeling tasks critical to understanding population dynamics by leveraging a rich set of complementary globally available datasets that can be readily adapted to previously unseen machine learning tasks.
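The abstract's key design point is that the frozen PDFM embeddings can be adapted to downstream tasks "using relatively simple models." A minimal sketch of that workflow, using random stand-in vectors for the embeddings and a synthetic target (the real released embeddings and health-indicator labels would be substituted in; the ridge-regression head is one plausible "simple model", not necessarily the authors' exact choice):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for per-county PDFM embeddings. A real use would
# load the publicly released embeddings instead of sampling random vectors.
n_counties, dim = 500, 32
embeddings = rng.normal(size=(n_counties, dim))

# Synthetic "health indicator" target: a linear function of the embedding
# plus noise, purely so the sketch has something to fit.
true_w = rng.normal(size=dim)
target = embeddings @ true_w + 0.1 * rng.normal(size=n_counties)

# Geospatial interpolation setup: labels are known for some counties and
# predicted for the held-out rest with a simple ridge-regression head.
train, test = np.arange(0, 400), np.arange(400, 500)
lam = 1e-2
X, y = embeddings[train], target[train]
w = np.linalg.solve(X.T @ X + lam * np.eye(dim), X.T @ y)

pred = embeddings[test] @ w
r2 = 1 - np.sum((target[test] - pred) ** 2) / np.sum(
    (target[test] - target[test].mean()) ** 2
)
```

Because the heavy lifting (graph neural network training over maps, search-trend, and environmental data) is done once upstream, each new downstream task only needs a small head like this fit on its own labels.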
Enhancing diagnostic accuracy of medical AI systems via selective deferral to clinicians
Dj Dvijotham
Jim Winkens
Melih Barsbey
Sumedh Ghaisas
Robert Stanforth
Nick Pawlowski
Patricia Strachan
Zahra Ahmed
Yoram Bachrach
Laura Culp
Jan Freyberg
Christopher Kelly
Atilla Kiraly
Timo Kohlberger
Scott Mayer McKinney
Basil Mustafa
Krzysztof Geras
Jan Witowski
Zhi Zhen Qin
Jacob Creswell
Shravya Shetty
Terry Spitz
Taylan Cemgil
Nature Medicine (2023)
AI systems trained using deep learning have been shown to achieve expert-level identification of diseases in multiple medical imaging settings [1,2]. While these results are impressive, they don't accurately reflect the impact of deployment of such systems in a clinical context. Due to the safety-critical nature of this domain and the fact that AI systems are not perfect and can make inaccurate assessments, they are predominantly deployed as assistive tools for clinical experts [3]. Although clinicians routinely discuss the diagnostic nuances of medical images with each other, weighing human diagnostic confidence against that of an AI system remains a major unsolved barrier to collaborative decision-making [4]. Furthermore, it has been observed that diagnostic AI models have complementary strengths and weaknesses compared to clinical experts. Yet, complementarity and the assessment of relative confidence between the members of a diagnostic team has remained largely unexploited in how AI systems are currently used in medical settings [5].
In this paper, we study the behavior of a team composed of diagnostic AI model(s) and clinician(s) in diagnosing disease. To go beyond the performance level of a standalone AI system, we develop a novel selective deferral algorithm that can learn to decide when to rely on a diagnostic AI model and when to defer to a clinical expert. Using this algorithm, we demonstrate that the composite AI+human system has enhanced accuracy (both sensitivity and specificity) relative to a human-only or an AI-only baseline. We decouple the development of the deferral AI model from training of the underlying diagnostic AI model(s). Development of the deferral AI model only requires i) the predictions of a model(s) on a tuning set of medical images (separate from the diagnostic AI models’ training data), ii) the diagnoses made by clinicians on these images and iii) the ground truth disease labels corresponding to those images.
Our extensive analysis shows that the selective deferral (SD) system exceeds the performance of either clinicians or AI alone in multiple clinical settings: breast and lung cancer screening. For breast cancer screening, double-reading with arbitration (two readers interpreting each mammogram, invoking an arbitrator if needed) is a “gold standard” for performance, never previously exceeded using AI [6]. The SD system exceeds the accuracy of double-reading with arbitration in a large representative UK screening program (25% reduction in false positives despite equivalent true-positive detection and 66% reduction in the requirement for clinicians to read an image), as well as exceeding the performance of a standalone state-of-the-art AI system (40% reduction in false positives with equivalent detection of true positives). In a large US dataset the SD system exceeds the accuracy of single-reading by board-certified radiologists and a standalone state-of-the-art AI system (32% reduction in false positives despite equivalent detection of true positives and 55% reduction in the clinician workload required). The SD system further outperforms both clinical experts alone and AI alone for the detection of lung cancer in low-dose computed tomography images from a large national screening study, with 11% reduction in false positives while maintaining sensitivity given 93% reduction in clinician workload required. Furthermore, the SD system allows controllable trade-offs between sensitivity and specificity and can be tuned to target either specificity or sensitivity as desired for a particular clinical application, or a combination of both.
The system generalizes to multiple distribution shifts, retaining superiority to both the AI system alone and human experts alone. We demonstrate that the SD system retains performance gains even on clinicians not present in the training data for the deferral AI. Furthermore, we test the SD system on a new population where the standalone AI system’s performance significantly degrades. We showcase the few-shot adaptation capability of the SD system by demonstrating that the SD system can obtain superiority to both the standalone AI system and the clinician on the new population after being trained on only 40 cases from the new population.
Our comprehensive assessment demonstrates that a selective deferral system could significantly improve clinical outcomes in multiple medical imaging applications, paving the way for higher performance clinical AI systems that can leverage the complementarity between clinical experts and medical AI tools.
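The core mechanic described above, deciding per case whether to trust the diagnostic AI or defer to a clinician, can be sketched with a toy confidence-threshold rule. This is a deliberately simplified stand-in for the paper's learned deferral model: the synthetic data, the hard/easy case split, and the fixed threshold `tau` are all assumptions made for illustration, chosen so that the AI and the clinician have complementary strengths.

```python
import random

random.seed(0)

# Hypothetical tuning set: for each case, the diagnostic AI's probability
# of disease, the clinician's binary read, and the ground-truth label.
# "Hard" cases are ones the AI finds ambiguous but clinicians read well.
cases = []
for _ in range(5000):
    label = random.random() < 0.3
    hard = random.random() < 0.4
    if hard:
        ai_prob = random.uniform(0.35, 0.65)   # AI near-guessing
        clin = label if random.random() < 0.85 else not label
    else:
        ai_prob = random.uniform(0.8, 1.0) if label else random.uniform(0.0, 0.2)
        clin = label if random.random() < 0.75 else not label
    cases.append((ai_prob, clin, label))

def composite(cases, tau):
    """Defer to the clinician whenever the AI's confidence is below tau."""
    correct = deferred = 0
    for ai_prob, clin, label in cases:
        confident = max(ai_prob, 1 - ai_prob) >= tau
        decision = (ai_prob >= 0.5) if confident else clin
        correct += decision == label
        deferred += not confident
    return correct / len(cases), deferred / len(cases)

ai_only = sum((p >= 0.5) == y for p, _, y in cases) / len(cases)
clin_only = sum(c == y for _, c, y in cases) / len(cases)
sd_acc, defer_rate = composite(cases, tau=0.7)
```

On this synthetic population the composite system beats both baselines while sending only the ambiguous minority of cases to a human, which mirrors the accuracy-plus-workload-reduction pattern the abstract reports; the real system learns the deferral decision from tuning-set predictions, clinician diagnoses, and ground-truth labels rather than thresholding a single score.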
ELIXR: Towards a general purpose X-ray artificial intelligence system through alignment of large language models and radiology vision encoders
Shawn Xu
Lin Yang
Christopher Kelly
Timo Kohlberger
Martin Ma
Atilla Kiraly
Sahar Kazemzadeh
Zakkai Melamed
Jungyeon Park
Patricia MacWilliams
Chuck Lau
Christina Chen
Mozziyar Etemadi
Sreenivasa Raju Kalidindi
Kat Chou
Shravya Shetty
Daniel Golden
Rory Pilgrim
Krish Eswaran
arXiv (2023)
Our approach, which we call Embeddings for Language/Image-aligned X-Rays, or ELIXR, leverages a language-aligned image encoder combined or grafted onto a fixed LLM, PaLM 2, to perform a broad range of tasks. We train this lightweight adapter architecture using images paired with corresponding free-text radiology reports from the MIMIC-CXR dataset. ELIXR achieved state-of-the-art performance on zero-shot chest X-ray (CXR) classification (mean AUC of 0.850 across 13 findings), data-efficient CXR classification (mean AUCs of 0.893 and 0.898 across five findings (atelectasis, cardiomegaly, consolidation, pleural effusion, and pulmonary edema) for 1% (~2,200 images) and 10% (~22,000 images) training data), and semantic search (0.76 normalized discounted cumulative gain (NDCG) across nineteen queries, including perfect retrieval on twelve of them). Compared to existing data-efficient methods including supervised contrastive learning (SupCon), ELIXR required two orders of magnitude less data to reach similar performance. ELIXR also showed promise on CXR vision-language tasks, demonstrating overall accuracies of 58.7% and 62.5% on visual question answering and report quality assurance tasks, respectively. These results suggest that ELIXR is a robust and versatile approach to CXR AI.
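Zero-shot classification with a language-aligned image encoder like the one described typically scores a finding by comparing the image embedding against embeddings of a positive and a negative text prompt in the shared space. A minimal sketch, with hard-coded vectors standing in for real encoder outputs (the three-dimensional embeddings and the prompt wordings are illustrative assumptions; ELIXR's actual scoring procedure may differ in detail):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical embeddings in a shared language/image space; a real system
# would obtain these from the aligned encoders, not hard-code them.
text_pos = [0.9, 0.1, 0.3]   # prompt: "atelectasis is present"
text_neg = [0.1, 0.9, 0.2]   # prompt: "no atelectasis"
image_emb = [0.8, 0.2, 0.4]  # embedding of one chest X-ray

def zero_shot_score(image_emb, text_pos, text_neg):
    """Finding score: similarity to the positive prompt minus the negative.
    A positive score suggests the finding; thresholding gives a label."""
    return cosine(image_emb, text_pos) - cosine(image_emb, text_neg)

score = zero_shot_score(image_emb, text_pos, text_neg)
```

Because no task-specific classifier is trained, adding a new finding only requires writing new prompt pairs, which is what makes the zero-shot setting data-free.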
Development of a Machine Learning Model for Sonographic Assessment of Gestational Age
Chace Lee
Angelica Willis
Christina Chen
Amber Watters
Bethany Stetson
Akib Uddin
Jonny Wong
Rory Pilgrim
Kat Chou
Shravya Ramesh Shetty
Ryan Gomes
JAMA Network Open (2023)
Importance: Fetal ultrasonography is essential for confirmation of gestational age (GA), and accurate GA assessment is important for providing appropriate care throughout pregnancy and for identifying complications, including fetal growth disorders. Derivation of GA from manual fetal biometry measurements (ie, head, abdomen, and femur) is operator dependent and time-consuming.
Objective: To develop artificial intelligence (AI) models to estimate GA with higher accuracy and reliability, leveraging standard biometry images and fly-to ultrasonography videos.
Design, Setting, and Participants: To improve GA estimates, this diagnostic study used AI to interpret standard plane ultrasonography images and fly-to ultrasonography videos, which are 5- to 10-second videos that can be automatically recorded as part of the standard of care before the still image is captured. Three AI models were developed and validated: (1) an image model using standard plane images, (2) a video model using fly-to videos, and (3) an ensemble model (combining both image and video models). The models were trained and evaluated on data from the Fetal Age Machine Learning Initiative (FAMLI) cohort, which included participants from 2 study sites at Chapel Hill, North Carolina (US), and Lusaka, Zambia. Participants were eligible to be part of this study if they received routine antenatal care at 1 of these sites, were aged 18 years or older, had a viable intrauterine singleton pregnancy, and could provide written consent. They were not eligible if they had known uterine or fetal abnormality, or had any other conditions that would make participation unsafe or complicate interpretation. Data analysis was performed from January to July 2022.
Main Outcomes and Measures: The primary analysis outcome for GA was the mean difference in absolute error between the GA model estimate and the clinical standard estimate, with the ground truth GA extrapolated from the initial GA estimated at an initial examination.
Results: Of the total cohort of 3842 participants, data were calculated for a test set of 404 participants with a mean (SD) age of 28.8 (5.6) years at enrollment. All models were statistically superior to standard fetal biometry–based GA estimates derived from images captured by expert sonographers. The ensemble model had the lowest mean absolute error compared with the clinical standard fetal biometry (mean [SD] difference, −1.51 [3.96] days; 95% CI, −1.90 to −1.10 days). All 3 models outperformed standard biometry by a more substantial margin on fetuses that were predicted to be small for their GA.
Conclusions and Relevance: These findings suggest that AI models have the potential to empower trained operators to estimate GA with higher accuracy.
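The abstract's third model combines the image and video models' gestational-age estimates. The paper does not spell out the combination rule here, so the inverse-variance weighting below is purely an illustrative assumption of how two per-modality estimates with different reliabilities might be merged:

```python
def ensemble_ga(image_est, video_est, image_var, video_var):
    """Combine two gestational-age estimates (in days) by inverse-variance
    weighting, so the more reliable modality dominates. Illustrative only:
    the paper's actual ensembling method may differ."""
    w_img = 1.0 / image_var
    w_vid = 1.0 / video_var
    return (w_img * image_est + w_vid * video_est) / (w_img + w_vid)

# Hypothetical case: image model says 196 days, video model says 200 days,
# with the video model assumed more precise on this exam.
ga = ensemble_ga(image_est=196.0, video_est=200.0, image_var=16.0, video_var=9.0)
```

The combined estimate lands between the two inputs, pulled toward the lower-variance one, which is the standard rationale for ensembling complementary modalities.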
A mobile-optimized artificial intelligence system for gestational age and fetal malpresentation assessment
Ryan Gomes
Bellington Vwalika
Chace Lee
Angelica Willis
Joan T. Price
Christina Chen
Margaret P. Kasaro
James A. Taylor
Elizabeth M. Stringer
Scott Mayer McKinney
Ntazana Sindano
William Goodnight, III
Justin Gilmer
Benjamin H. Chi
Charles Lau
Terry Spitz
Kris Liu
Jonny Wong
Rory Pilgrim
Akib Uddin
Lily Hao Yi Peng
Kat Chou
Jeffrey S. A. Stringer
Shravya Ramesh Shetty
Communications Medicine (2022)
Background
Fetal ultrasound is an important component of antenatal care, but shortage of adequately trained healthcare workers has limited its adoption in low-to-middle-income countries. This study investigated the use of artificial intelligence for fetal ultrasound in under-resourced settings.
Methods
Blind sweep ultrasounds, consisting of six freehand ultrasound sweeps, were collected by sonographers in the USA and Zambia, and novice operators in Zambia. We developed artificial intelligence (AI) models that used blind sweeps to predict gestational age (GA) and fetal malpresentation. AI GA estimates and standard fetal biometry estimates were compared to a previously established ground truth, and evaluated for difference in absolute error. Fetal malpresentation (non-cephalic vs cephalic) was compared to sonographer assessment. On-device AI model run-times were benchmarked on Android mobile phones.
Results
Here we show that GA estimation accuracy of the AI model is non-inferior to standard fetal biometry estimates (error difference -1.4 ± 4.5 days, 95% CI -1.8, -0.9, n=406). Non-inferiority is maintained when blind sweeps are acquired by novice operators performing only two of six sweep motion types. Fetal malpresentation AUC-ROC is 0.977 (95% CI, 0.949, 1.00, n=613); sonographers and novices have similar AUC-ROCs. Software run-times on mobile phones for both diagnostic models are less than 3 seconds after completion of a sweep.
Conclusions
The gestational age model is non-inferior to the clinical standard and the fetal malpresentation model has high AUC-ROCs across operators and devices. Our AI models are able to run on-device, without internet connectivity, and provide feedback scores to assist in upleveling the capabilities of lightly trained ultrasound operators in low resource settings.
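The headline claim in the results is a non-inferiority comparison: the upper confidence bound of the mean error difference (AI absolute error minus biometry absolute error) must fall below a pre-specified margin. A sketch of that check on toy data; the two-value sample and the 1.0-day margin are hypothetical choices, and the paper's exact statistical procedure may differ:

```python
import math

def noninferior(diffs, margin, z=1.96):
    """Return (is_noninferior, (mean, upper)) for paired error differences.
    Non-inferiority holds when the upper bound of the ~95% confidence
    interval for the mean difference sits below the margin."""
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    upper = mean + z * sd / math.sqrt(n)
    return upper < margin, (mean, upper)

# Toy sample of error differences in days (negative = AI more accurate),
# loosely echoing the abstract's negative mean difference; margin of 1.0
# day is a hypothetical choice, not one stated in the paper.
diffs = [-2.0, -1.0] * 200
ok, (mean_diff, upper_bound) = noninferior(diffs, margin=1.0)
```

A negative upper bound, as here, would indicate not just non-inferiority but superiority of the AI estimates on this toy sample.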
Supervised Transfer Learning at Scale for Medical Imaging
Aaron Loh
Basil Mustafa
Jan Freyberg
Neil Houlsby
Patricia MacWilliams
Megan Wilson
Scott Mayer McKinney
Jim Winkens
Peggy Bui
Umesh Telang
arXiv (2021)
Transfer learning is a standard building block of successful medical imaging models, yet previous efforts suggest that at limited scale of pre-training data and model capacity, benefits of transfer learning to medical imaging are insubstantial. In this work, we explore whether scaling up pre-training can help improve transfer to medical tasks. In particular, we show that when using the Big Transfer recipe to further scale up pre-training, we can indeed considerably improve transfer performance across three popular yet diverse medical imaging tasks - interpretation of chest radiographs, breast cancer detection from mammograms and skin condition detection from smartphone images. Despite pre-training on unrelated source domains, we show that scaling up the model capacity and pre-training data yields performance improvements regardless of how much downstream medical data is available. In particular, we show surprisingly large improvements to zero-shot generalisation under distribution shift. Probing and quantifying other aspects of model performance relevant to medical imaging and healthcare, we demonstrate that these gains do not come at the expense of model calibration or fairness.
International evaluation of an AI system for breast cancer screening
Scott Mayer McKinney
Varun Yatindra Godbole
Jonathan Godwin
Natasha Antropova
Hutan Ashrafian
Trevor John Back
Mary Chesus
Ara Darzi
Mozziyar Etemadi
Florencia Garcia-Vicente
Fiona J Gilbert
Mark D Halling-Brown
Demis Hassabis
Sunny Jansen
Christopher Kelly
Dominic King
David Melnick
Hormuz Mostofi
Lily Hao Yi Peng
Joshua Reicher
Bernardino Romera Paredes
Richard Sidebottom
Mustafa Suleyman
Kenneth C. Young
Jeffrey De Fauw
Shravya Ramesh Shetty
Nature (2020)
Screening mammography aims to identify breast cancer at earlier stages of the disease, when treatment can be more successful. Despite the existence of screening programmes worldwide, the interpretation of mammograms is affected by high rates of false positives and false negatives. Here we present an artificial intelligence (AI) system that is capable of surpassing human experts in breast cancer prediction. To assess its performance in the clinical setting, we curated a large representative dataset from the UK and a large enriched dataset from the USA. We show an absolute reduction of 5.7% and 1.2% (USA and UK) in false positives and 9.4% and 2.7% in false negatives. We provide evidence of the ability of the system to generalize from the UK to the USA. In an independent study of six radiologists, the AI system outperformed all of the human readers: the area under the receiver operating characteristic curve (AUC-ROC) for the AI system was greater than the AUC-ROC for the average radiologist by an absolute margin of 11.5%. We ran a simulation in which the AI system participated in the double-reading process that is used in the UK, and found that the AI system maintained non-inferior performance and reduced the workload of the second reader by 88%. This robust assessment of the AI system paves the way for clinical trials to improve the accuracy and efficiency of breast cancer screening.
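The double-reading simulation described above can be sketched as a simple workflow: the AI acts as the second reader, cases where it agrees with the first human reader are resolved immediately, and only disagreements are escalated to a human arbiter. This is an illustrative reconstruction of that workflow, not the paper's exact protocol; the threshold and the case tuples are assumptions.

```python
def simulate_double_reading(cases, threshold=0.5):
    """Simulate UK-style double reading with the AI as second reader.
    Each case is (first_reader_recall, ai_score, arbiter_recall).
    Agreement between AI and first reader resolves the case; any
    disagreement costs one extra human read (the arbitration)."""
    decisions, human_second_reads = [], 0
    for first, ai_score, arbiter in cases:
        ai_recall = ai_score >= threshold
        if ai_recall == first:
            decisions.append(first)        # consensus, no extra human work
        else:
            human_second_reads += 1        # arbitration requires a human
            decisions.append(arbiter)
    return decisions, human_second_reads

# Hypothetical mini-batch of screening cases.
cases = [
    (True, 0.9, True),    # both recall -> recalled, no arbitration
    (False, 0.1, False),  # both clear -> cleared, no arbitration
    (True, 0.2, False),   # disagreement -> arbiter clears
    (False, 0.6, True),   # disagreement -> arbiter recalls
]
decisions, extra_reads = simulate_double_reading(cases)
```

The workload saving the abstract reports (an 88% reduction for the second reader) corresponds to the fraction of cases resolved by agreement in a workflow of this shape, since only disagreements consume additional human reads.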