Publications
Our teams aspire to make discoveries that impact everyone, and core to our approach is sharing our research and tools to fuel progress in the field.
1 - 15 of 406 publications
Preview abstract
As artificial intelligence (AI) is rapidly integrated into healthcare, ensuring that this innovation helps to combat health inequities requires engaging marginalized communities in health AI futuring. However, little research has examined Black populations' perspectives on the use of AI in health contexts, despite the widespread health inequities they experience, inequities that AI already perpetuates. Addressing this research gap through qualitative workshops with 18 Black adults, we characterize participants' cautious optimism that health AI could address structural well-being barriers (e.g., by providing second opinions that introduce fairness into an unjust healthcare system), and their concerns that AI will worsen health inequities (e.g., through health AI biases they deemed inevitable and the problematic reality of having to trust healthcare providers to use AI equitably). We advance health AI research by articulating previously unreported health AI perspectives from a population experiencing significant health inequities, and by presenting key considerations for future work.
View details
Performance analysis of updated Sleep Tracking algorithms across Google and Fitbit wearable devices
Arno Charton
Linda Lei
Siddhant Swaroop
Marius Guerard
Michael Dixon
Logan Niehaus
Shao-Po Ma
Logan Schneider
Ross Wilkinson
Ryan Gillard
Conor Heneghan
Pramod Rudrapatna
Mark Malhotra
Shwetak Patel
Google, 1600 Amphitheatre Parkway, Mountain View, CA 94043 (2026) (to appear)
Preview abstract
Background: The general public has increasingly adopted consumer wearables for sleep tracking over the past 15 years, but comparisons against gold standards such as polysomnography (PSG), high-quality sleep diaries, and at-home portable EEG systems still show room for improved performance. Two aspects in particular are worthy of consideration: (a) improved recognition of sleep sessions (times when a person is in bed and has attempted to sleep), and (b) improved accuracy in recognizing sleep stages relative to an accepted standard such as PSG.
Aims: This study aimed to: 1) provide an update on the methodology and performance of a system for correctly recognizing valid sleep sessions, and 2) detail an updated description of how sleep stages are calculated using accelerometer signals and inter-beat intervals.
Methods: Novel machine learning algorithms were developed to recognize sleep sessions and sleep stages using accelerometer sensors and inter-beat intervals derived from the watch or tracker photoplethysmogram. Algorithms were developed on over 3000 nights of human-scored free-living sleep sessions from a representative population of 122 subjects, and then tested on an independent validation set of 47 users. Within sleep sessions, an algorithm was developed to recognize periods when the user was attempting to sleep (Time-Attempting-To-Sleep = TATS). For sleep stage estimation, an algorithm was trained on human expert-scored polysomnograms, and then tested on 50 withheld subject nights for its ability to recognize Wake, Light (N1/N2), Deep (N3) and REM sleep relative to expert scored labels.
Results: For sleep session estimation, the algorithm had at least 95% overlap on TATS with human consensus scoring for 94% of nights from healthy sleepers. For sleep stage estimation, compared with the current Fitbit algorithm, Cohen's kappa for four-class determination of sleep stage increased from an average of 0.56 (std 0.13) to 0.63 (std 0.12), and average accuracy increased from 71% (std 0.10) to 77% (std 0.078).
Conclusion: A set of new algorithms has been developed and tested on Fitbit and Pixel Watches and is capable of providing robust and accurate measurement of sleep in free-living environments.
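The staging improvement above is reported as Cohen's kappa. As an illustration of the metric only (this is not the paper's code), here is a minimal pure-Python computation of kappa for multi-class labels such as the four sleep stages:

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa between two raters over the same items (multi-class)."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    # Observed agreement: fraction of items with identical labels.
    p_o = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    # Expected chance agreement from each rater's label marginals.
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Toy example with the paper's four stages (illustrative labels).
ref  = ["wake", "light", "light", "deep", "rem", "light"]
pred = ["wake", "light", "deep",  "deep", "rem", "light"]
print(round(cohens_kappa(ref, pred), 3))  # → 0.769
```

Kappa discounts the agreement expected by chance, which is why it is a stricter summary than raw accuracy for imbalanced stage distributions.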
View details
Accurate human genome analysis with Element Avidity sequencing
Andrew Carroll
Daniel Cook
Lucas Brambrink
Kelly N. Wiseman
Sophie Billings
Semyon Kruglyak
Bryan R. Lajoie
Junhua Zhao
Shawn E. Levy
Kishwar Shafin
Maria Nattestad
BMC Bioinformatics (2025)
Preview abstract
We investigate the new sequencing technology Avidity from Element Biosciences. We show that Avidity whole-genome sequencing matches the mapping and variant calling accuracy of Illumina at high coverages (30x-50x) and is noticeably more accurate at lower coverages (20x-30x). We quantify base error rates of Element reads, finding lower error rates, especially in homopolymer and tandem repeat regions. We use Element's ability to generate paired-end sequencing with longer insert sizes than typical short-read sequencing. We show that longer insert sizes result in even higher accuracy, with long-insert Element sequencing giving noticeably more accurate genome analyses at all coverages.
View details
Triaging mammography with artificial intelligence: an implementation study
Sarah M. Friedewald
Sunny Jansen
Fereshteh Mahvar
Timo Kohlberger
David V. Schacht
Sonya Bhole
Dipti Gupta
Scott Mayer McKinney
Stacey Caron
David Melnick
Mozziyar Etemadi
Samantha Winter
Alejandra Maciel
Luca Speroni
Martha Sevenich
Arnav Agharwal
Rubin Zhang
Gavin Duggan
Shiro Kadowaki
Atilla Kiraly
Jie Yang
Basil Mustafa
Krish Eswaran
Shravya Shetty
Breast Cancer Research and Treatment (2025)
Preview abstract
Purpose
Many breast centers are unable to provide immediate results at the time of screening mammography, which delays patient care. Implementing artificial intelligence (AI) could identify patients who may have breast cancer and shorten the time to diagnostic imaging and biopsy diagnosis.
Methods
In this prospective, randomized, unblinded, controlled implementation study, we enrolled 1000 screening participants between March 2021 and May 2022. The experimental group used an AI system to prioritize a subset of cases for same-visit radiologist evaluation, and same-visit diagnostic workup if necessary. The control group followed the standard of care. The primary operational endpoints were time to additional imaging (TA) and time to biopsy diagnosis (TB).
Results
The final cohort included 463 experimental and 392 control participants. The one-sided Mann-Whitney U test was employed for analysis of TA and TB. In the control group, the TA was 25.6 days [95% CI 22.0–29.9] and TB was 55.9 days [95% CI 45.5–69.6]. In comparison, the experimental group's mean TA was reduced by 25% (6.4 fewer days [one-sided 95% CI > 0.3], p<0.001) and mean TB was reduced by 30% (16.8 fewer days [one-sided 95% CI > 5.1], p=0.003). The time reduction was more pronounced for AI-prioritized participants in the experimental group. All participants eventually diagnosed with breast cancer were prioritized by the AI.
Conclusions
Implementing AI prioritization can accelerate care timelines for patients requiring additional workup, while maintaining the efficiency of delayed interpretation for most participants. Reducing diagnostic delays could contribute to improved patient adherence, decreased anxiety and addressing disparities in access to timely care.
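The endpoint analysis above rests on a one-sided Mann-Whitney U test. As a sketch of that test only (simplified, without a tie correction in the variance; the data below are illustrative, not the study's), here is a pure-Python rank-sum implementation with a normal-approximation p-value:

```python
from math import sqrt, erf

def mann_whitney_u(x, y):
    """U statistic for sample x vs y, plus a one-sided normal-approximation
    p-value for the alternative 'x tends to be smaller than y'.
    No tie correction: fine as an illustration, not for real analyses."""
    combined = sorted((v, i) for i, v in enumerate(x + y))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg = (i + j) / 2 + 1  # average rank for a run of tied values (1-based)
        for k in range(i, j + 1):
            ranks[combined[k][1]] = avg
        i = j + 1
    nx, ny = len(x), len(y)
    r_x = sum(ranks[:nx])              # rank sum of the first sample
    u = r_x - nx * (nx + 1) / 2        # U statistic for sample x
    mu = nx * ny / 2
    sigma = sqrt(nx * ny * (nx + ny + 1) / 12)
    z = (u - mu) / sigma
    p_one_sided = 0.5 * (1 + erf(z / sqrt(2)))  # P(U <= u) under H0
    return u, p_one_sided

u, p = mann_whitney_u([4, 5, 6, 7], [20, 25, 30, 35])
print(u, round(p, 3))  # U = 0: every 'experimental' time beats every 'control' time
```

The test compares entire distributions of waiting times rather than means, which suits the skewed time-to-event data described above.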
View details
Towards Conversational AI for Disease Management
Khaled Saab
David Stutz
Kavita Kulkarni
Sara Mahdavi
Joelle Barral
James Manyika
Ryutaro Tanno
Adam Rodman
arXiv (2025)
Preview abstract
While large language models (LLMs) have shown promise in diagnostic dialogue, their capabilities for effective management reasoning - including disease progression, therapeutic response, and safe medication prescription - remain under-explored. We advance the previously demonstrated diagnostic capabilities of the Articulate Medical Intelligence Explorer (AMIE) through a new LLM-based agentic system optimised for clinical management and dialogue, incorporating reasoning over the evolution of disease and multiple patient visit encounters, response to therapy, and professional competence in medication prescription. To ground its reasoning in authoritative clinical knowledge, AMIE leverages Gemini's long-context capabilities, combining in-context retrieval with structured reasoning to align its output with relevant and up-to-date clinical practice guidelines and drug formularies. In a randomized, blinded virtual Objective Structured Clinical Examination (OSCE) study, AMIE was compared to 21 primary care physicians (PCPs) across 100 multi-visit case scenarios designed to reflect UK NICE Guidance and BMJ Best Practice guidelines. AMIE was non-inferior to PCPs in management reasoning as assessed by specialist physicians and scored better in both preciseness of treatments and investigations, and in its alignment with and grounding of management plans in clinical guidelines. To benchmark medication reasoning, we developed RxQA, a multiple-choice question benchmark derived from two national drug formularies (US, UK) and validated by board-certified pharmacists. While AMIE and PCPs both benefited from the ability to access external drug information, AMIE outperformed PCPs on higher difficulty questions. While further research would be needed before real-world translation, AMIE's strong performance across evaluations marks a significant step towards conversational AI as a tool in disease management.
View details
A Scalable Framework for Evaluating Health Language Models
Neil Mallinar
Tony Faranesh
Brent Winslow
Nova Hammerquist
Ben Graef
Cathy Speed
Mark Malhotra
Shwetak Patel
Xavi Prieto
Daniel McDuff
Ahmed Metwally
(2025)
Preview abstract
Large language models (LLMs) have emerged as powerful tools for analyzing complex datasets. Recent studies demonstrate their potential to generate useful, personalized responses when provided with patient-specific health information that encompasses lifestyle, biomarkers, and context. As LLM-driven health applications are increasingly adopted, rigorous and efficient evaluation methodologies are crucial to ensure response quality across multiple dimensions, including accuracy, personalization and safety. Current evaluation practices for open-ended text responses heavily rely on human experts. This approach is subject to human factors and is often cost-prohibitive, labor-intensive, and difficult to scale, especially in complex domains like healthcare where response assessment necessitates domain expertise and must consider multifaceted patient data. In this work, we introduce Adaptive Precise Boolean rubrics: an evaluation framework that streamlines human and automated evaluation of open-ended questions by identifying gaps in model responses using a minimal set of targeted rubrics questions. Our approach is based on recent work in more general evaluation settings that contrasts a smaller set of complex evaluation targets with a larger set of more precise, granular targets answerable with simple boolean responses. We validate this approach in metabolic health, a domain encompassing diabetes, cardiovascular disease, and obesity. Our results demonstrate that Adaptive Precise Boolean rubrics yield higher inter-rater agreement among expert and non-expert human evaluators, and in automated assessments, compared to traditional Likert scales, while requiring approximately half the evaluation time of Likert-based methods. This enhanced efficiency, particularly in automated evaluation and non-expert contributions, paves the way for more extensive and cost-effective evaluation of LLMs in health.
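The framework's core move is replacing a few coarse Likert ratings with many granular boolean rubric items. Here is a hypothetical sketch of the scoring step; the rubric items and keyword checks below are invented for illustration, and the actual rubrics and their adaptive selection are described in the paper:

```python
# Hypothetical rubric items for a metabolic-health response. Real rubrics
# would be written by clinicians and answered by an expert or an LLM judge,
# not by keyword matching; this only shows the boolean-scoring shape.
RUBRIC = [
    ("mentions_glucose_target", lambda r: "glucose" in r.lower()),
    ("advises_clinician_follow_up",
     lambda r: "doctor" in r.lower() or "clinician" in r.lower()),
    ("avoids_specific_dosage_advice", lambda r: "mg" not in r.lower()),
]

def score_response(response, rubric=RUBRIC):
    """Return per-item boolean results and the overall pass fraction."""
    results = {name: bool(check(response)) for name, check in rubric}
    return results, sum(results.values()) / len(results)

results, frac = score_response(
    "Your fasting glucose is elevated; please discuss next steps with your doctor."
)
print(frac)  # → 1.0 (all three illustrative items pass)
```

Because each item is a yes/no judgment, raters need far less calibration than with a 1-to-5 scale, which is consistent with the higher inter-rater agreement reported above.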
View details
Participatory AI Considerations for Advancing Racial Health Equity
Jatin Alla
Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (CHI) (2025)
Capturing Real-World Habitual Sleep Patterns with a Novel User-centric Algorithm to Pre-Process Fitbit Data in the All of Us Research Program: Retrospective observational longitudinal study
Hiral Master
Jeffrey Annis
Karla Gleichauf
Lide Han
Peyton Coleman
Kelsie Full
Neil Zheng
Doug Ruderfer
Logan Schneider
Evan Brittain
Journal of Medical Internet Research (2025)
Preview abstract
Background:
Commercial wearables such as Fitbit quantify sleep metrics using fixed calendar times as default measurement periods, which may not adequately account for individual variations in sleep patterns. To address this limitation, experts in sleep medicine and wearable technology developed a user-centric algorithm designed to more accurately reflect actual sleep behaviors and improve the validity of wearable-derived sleep metrics.
Objective:
This study aims to describe the development of a new user-centric algorithm, compare its performance with the default calendar-relative algorithm, and provide a practical guide for analyzing All of Us Fitbit sleep data on a cloud-based platform.
Methods:
The default and user-centric algorithms were implemented to preprocess and compute sleep metrics related to schedule, duration, and disturbances using high-resolution Fitbit sleep data from 8563 participants (median age 58.1 years; 6002/8341, 71.96% female) in the All of Us Research Program (version 7 Controlled Tier). Variations in typical sleep patterns were calculated by examining the differences in the mean number of primary sleep logs classified by each algorithm. Linear mixed-effects models were used to compare differences in sleep metrics across quartiles of variation in typical sleep patterns.
Results:
Out of 8,452,630 total sleep logs collected over a median of 4.2 years of Fitbit monitoring, 401,777 (4.75%) nonprimary sleep logs identified by the default algorithm were reclassified as primary sleep by the user-centric algorithm. Variation in typical sleep patterns ranged from –0.08 to 1. Among participants with the greatest variation in typical sleep patterns, the user-centric algorithm identified significantly more total sleep time (by 17.6 minutes; P<.001), more wake after sleep onset (by 13.9 minutes; P<.001), and lower sleep efficiency (by 2.0%; P<.001), on average. Differences in sleep stage metrics between the 2 algorithms were modest.
Conclusions:
The user-centric algorithm captures the natural variability in sleep schedules, providing an alternative approach to preprocess and evaluate sleep metrics related to schedule, duration, and disturbances. A publicly available R package facilitates the implementation of this algorithm for clinical and translational research.
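To illustrate the shift from fixed calendar windows to a person-specific reference, here is a loose sketch of classifying sleep logs against a habitual onset hour on a circular 24-hour clock. The median-based window and the 3-hour tolerance are assumptions made for illustration; the actual classification rules are implemented in the paper's publicly available R package.

```python
from statistics import median

def habitual_onset_hour(onset_hours):
    """Median habitual sleep-onset hour on a circular 24 h clock.
    Shifting by 12 h lets bedtimes straddling midnight (e.g. 23.5 and
    0.5) average sensibly. Simplified illustration only."""
    shifted = [(h + 12) % 24 for h in onset_hours]
    return (median(shifted) - 12) % 24

def is_primary(onset_hour, habitual, tolerance=3.0):
    """Classify a sleep log as primary if its onset falls within
    `tolerance` hours of the habitual onset, on the circular clock."""
    diff = abs(onset_hour - habitual) % 24
    return min(diff, 24 - diff) <= tolerance

habitual = habitual_onset_hour([23.0, 23.5, 0.5])  # → 23.5
print(is_primary(1.0, habitual))   # late night still primary
print(is_primary(14.0, habitual))  # a 2 pm nap is non-primary
```

The point of the sketch is the reference frame: a log at 1 am counts as primary for a habitual 11:30 pm sleeper, where a fixed calendar cutoff might discard it.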
View details
Mitigating Clinician Information Overload: Generative AI for Integrated EHR and RPM Data Analysis
Aman Raj
IEEE Compsac 2025 (2025)
Preview abstract
Generative AI (GenAI), particularly Large Language Models (LLMs), offers powerful capabilities for interpreting the complex data landscape in healthcare. In this paper, we present a comprehensive overview of the capabilities, requirements and applications of GenAI for deriving clinical insights and improving clinical efficiency. We first provide some background on the forms and sources of patient data, namely real-time Remote Patient Monitoring (RPM) streams and traditional Electronic Health Records (EHR). The sheer volume and heterogeneity of this combined data present significant challenges to clinicians and contribute to information overload.
In addition, we explore the potential of LLM-powered applications for improving clinical efficiency. These applications can enhance navigation of longitudinal patient data and provide actionable clinical decision support through natural language dialogue. We discuss the opportunities this presents for streamlining clinician workflows and personalizing care, alongside critical challenges such as data integration complexity, ensuring data quality and RPM data reliability, maintaining patient privacy, validating AI outputs for clinical safety, mitigating bias, and ensuring clinical acceptance. We believe this work represents the first summarization of GenAI techniques for managing clinician data overload due to combined RPM / EHR data complexities.
View details
Smartwatch-Based Walking Metrics Estimation
Amir Farjadian
Anupam Pathak
Alicia Kokoszka
Jonathan Hsu
Kyle DeHolton
Lawrence Cai
Shwetak Patel
Mark Malhotra
Jonathan Wang
Shun Liao
2025
Preview abstract
Gait parameters are important health indicators of neurological control, musculoskeletal health and fall risk, but traditional analysis requires specialized laboratory equipment. While smartphone inertial measurement units (IMUs) enable estimation of gait metrics, their real-world use may be limited by inconsistent placement and user burden. With a fixed on-wrist placement, smartwatches offer a stable, convenient and continuous monitoring potential, but wrist-based sensing presents inherent challenges due to the indirect coupling between arm swing and leg movement. This paper introduces a novel multi-head deep learning model leveraging IMU signals from a consumer smartwatch, along with user height information, to estimate a comprehensive suite of spatio-temporal walking metrics, including step length, gait speed, swing time, stance time, and double support time. Results from 250 participants across two countries demonstrate that the model achieves high validity (Pearson r > 0.7) and reliability (ICC > 0.7) for most gait metrics, comparable to or exceeding leading smartphone-based approaches. This work, the largest in-lab, smartwatch-based gait study to date, highlights the feasibility of gait monitoring using ubiquitous consumer smartwatches.
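Validity above is summarized as Pearson r between watch-estimated and reference gait metrics. For reference, the metric itself in pure Python (the paired values below would be, e.g., estimated vs lab-measured gait speed; none of this is the study's data):

```python
from math import sqrt

def pearson_r(xs, ys):
    """Sample Pearson correlation between paired measurements."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Perfectly linear toy pairing gives r = 1.
print(round(pearson_r([1.0, 2.0, 3.0], [2.0, 4.0, 6.0]), 6))  # → 1.0
```

Note that r measures linear association only; the reliability claim (ICC) additionally penalizes systematic offsets between watch and reference, which r alone does not.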
View details
Why all roads don't lead to Rome: Representation geometry varies across the human visual cortical hierarchy
Zahraa Chorghay
Arna Ghosh
Shahab Bakhtiari
Blake Richards
(2025) (to appear)
Preview abstract
Biological and artificial intelligence systems navigate the fundamental efficiency-robustness tradeoff for optimal encoding, i.e., they must efficiently encode numerous attributes of the input space while also being robust to noise. This challenge is particularly evident in hierarchical processing systems like the human brain. With a view towards understanding how systems navigate the efficiency-robustness tradeoff, we turned to a population geometry framework for analyzing representations in the human visual cortex alongside artificial neural networks (ANNs). In the ventral visual stream, we found general-purpose, scale-free representations characterized by a power law-decaying eigenspectrum in most but not all areas. Of note, certain higher-order visual areas did not have scale-free representations, indicating that scale-free geometry is not a universal property of the brain. In parallel, ANNs trained with a self-supervised learning objective also exhibited scale-free geometry, but not after fine-tuning on a specific task. Based on these empirical results and our analytical insights, we posit that a system's representation geometry is not a universal property and instead depends upon the computational objective.
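A "power law-decaying eigenspectrum" means the n-th eigenvalue of the representation's covariance falls off roughly as n^(-alpha). A minimal sketch of estimating alpha by least squares in log-log coordinates on a synthetic spectrum (the paper's actual estimator may differ, e.g. cross-validated eigenvalue estimation on neural data):

```python
from math import log

def fit_power_law_exponent(eigvals):
    """Fit lambda_n ~ c * n**(-alpha) by ordinary least squares in
    log-log space and return alpha. Rank n is 1-based."""
    pts = [(log(n + 1), log(v)) for n, v in enumerate(eigvals)]
    mx = sum(x for x, _ in pts) / len(pts)
    my = sum(y for _, y in pts) / len(pts)
    slope = (sum((x - mx) * (y - my) for x, y in pts)
             / sum((x - mx) ** 2 for x, _ in pts))
    return -slope  # decay exponent alpha

# Synthetic scale-free spectrum with alpha = 1, the critical 1/n decay.
spectrum = [1.0 / n for n in range(1, 101)]
print(round(fit_power_law_exponent(spectrum), 3))  # → 1.0
```

An exponent near 1 is the regime associated with representations that are high-dimensional yet smooth; departures from a clean power law are what flag the non-scale-free areas described above.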
View details
Scaling Large Language Models For Next-Generation Single-Cell Analysis
Syed Asad Rizvi
Daniel Levine
Aakash Patel
Shiyang Zhang
Eric Wang
Curtis Jamison Perry
Nicole Mayerli Constante
Sizhuang He
David Zhang
Cerise Tang
Zhuoyang Lyu
Rayyan Darji
Chang Li
Emily Sun
David Jeong
Lawrence Zhao
Jennifer Kwan
David Braun
Brian Hafler
Hattie Chung
Rahul M. Dhodapkar
Paul Jaeger
Jeffrey Ishizuka
David van Dijk
biorxiv (2025)
Preview abstract
Single-cell RNA sequencing has transformed our understanding of cellular diversity, yet current single-cell foundation models (scFMs) remain limited in their scalability, flexibility across diverse tasks, and ability to natively integrate textual information. In this work, we build upon the Cell2Sentence (C2S) framework, which represents scRNA-seq profiles as textual “cell sentences,” to train Large Language Models (LLMs) on a corpus comprising over one billion tokens of transcriptomic data, biological text, and metadata. Scaling the model to 27 billion parameters yields consistent improvements in predictive and generative capabilities and supports advanced downstream tasks that require synthesis of information across multi-cellular contexts. Targeted fine-tuning with modern reinforcement learning techniques produces strong performance in perturbation response prediction, natural language interpretation, and complex biological reasoning. This predictive strength directly enabled a dual-context virtual screen that uncovered a striking context split for the kinase inhibitor silmitasertib (CX-4945), suggesting its potential as a synergistic, interferon-conditional amplifier of antigen presentation. Experimental validation in human cell models unseen during training confirmed this hypothesis, demonstrating that C2S-Scale can generate biologically grounded, testable discoveries of context-conditioned biology. C2S-Scale unifies transcriptomic and textual data at unprecedented scales, surpassing both specialized single-cell models and general-purpose LLMs to provide a platform for next-generation single-cell analysis and the development of “virtual cells.”
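The Cell2Sentence transformation underlying C2S orders a cell's genes by decreasing expression and emits the gene names as text, which is what lets an ordinary LLM consume transcriptomic profiles. A minimal sketch of that step (the gene names, counts, and truncation length are illustrative):

```python
def cell_sentence(expression, top_k=5):
    """Convert a {gene: expression count} profile into a 'cell sentence':
    gene names ordered by decreasing expression, as in Cell2Sentence.
    Ties are broken alphabetically for determinism; zero-count genes
    are dropped."""
    ranked = sorted(expression.items(), key=lambda kv: (-kv[1], kv[0]))
    return " ".join(gene for gene, count in ranked[:top_k] if count > 0)

cell = {"CD3D": 40, "GAPDH": 120, "IL7R": 8, "ACTB": 95, "MS4A1": 0}
print(cell_sentence(cell))  # → GAPDH ACTB CD3D IL7R
```

Because only rank order survives, the encoding discards exact counts; the C2S papers argue the rank sequence retains enough signal for an LLM to model cell identity, and the scaled model here builds on that representation.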
View details
A Novel CI Coding Strategy Based on a Cochlear Model and Deep Neural Network
Maryam Hosseini
Tim Brochier
Zachary Smith
Brett Swanson
Andrew Vandali
Alan Kan
Fadwa Alnafjan
Kat Fernandez
Conference on Implantable Auditory Prostheses 2025
Preview abstract
Objective: Many CI recipients face difficulties in understanding speech in noisy environments and express frustration with the quality of music. This may be partly due to the simple filter banks used in current CI technology, which do not fully replicate the natural processes of the cochlea. This project aims to improve CI perception by more accurately mimicking the responses of the auditory nerve.
Method: Audio signals were applied to CARFAC (Cascade of Asymmetric Resonators with Fast-Acting Compression) [1] to produce a representation of the auditory nerve response, known as a normal hearing (NH) “neurogram”. The NH neurogram was down-sampled and applied to a deep neural network (DNN) to produce 22 electrode stimulation currents. These currents were applied to an electrical hearing (EH) model incorporating current spread, neural adaptation, and refractoriness, to produce a CI neurogram. The DNN was trained on sentences from the TIMIT database to minimise the difference between the NH and CI neurograms.
Results: The CI neurograms produced by the CARFAC-DNN strategy were more similar to the NH neurograms than the CI neurograms produced by the Nucleus ACE strategy. Similarity was quantified by the structural similarity index and mean squared error.
Conclusions: The CARFAC-DNN strategy may provide a more natural auditory nerve response than traditional CI sound coding strategies. A sound-booth study with CI recipients is planned.
This work was funded by Google through the Australian Future Hearing Initiative.
References:
[1] Lyon, R. F. (2017). Human and Machine Hearing. Cambridge University Press.
View details
Accurate somatic small variant discovery for multiple sequencing technologies with DeepSomatic
Jimin Park
Daniel E. Cook
Lucas Brambrink
Joshua Gardner
Brandy McNulty
Samuel Sacco
Ayse G. Keskus
Asher Bryant
Tanveer Ahmad
Jyoti Shetty
Yongmei Zhao
Bao Tran
Giuseppe Narzisi
Adrienne Helland
Byunggil Yoo
Irina Pushel
Lisa A. Lansdon
Chengpeng Bi
Adam Walter
Margaret Gibson
Tomi Pastinen
Rebecca Reiman
Sharvari Mankame
T. Rhyker Ranallo-Benavidez
Christine Brown
Nicolas Robine
Floris P. Barthel
Midhat S. Farooqi
Karen H. Miga
Andrew Carroll
Mikhail Kolmogorov
Benedict Paten
Kishwar Shafin
Nature Biotechnology (2025)
Preview abstract
Somatic variant detection is an integral part of cancer genomics analysis. While most methods have focused on short-read sequencing, long-read technologies offer potential advantages in repeat mapping and variant phasing. We present DeepSomatic, a deep-learning method for detecting somatic small nucleotide variations and insertions and deletions from both short-read and long-read data. The method has modes for whole-genome and whole-exome sequencing and can run on tumor–normal, tumor-only and formalin-fixed paraffin-embedded samples. To train DeepSomatic and help address the dearth of publicly available training and benchmarking data for somatic variant detection, we generated and make openly available the Cancer Standards Long-read Evaluation (CASTLE) dataset of six matched tumor–normal cell line pairs whole-genome sequenced with Illumina, PacBio HiFi and Oxford Nanopore Technologies, along with benchmark variant sets. Across samples, both cell line and patient-derived, and across short-read and long-read sequencing technologies, DeepSomatic consistently outperforms existing callers.
View details