Cory McLean

Cory is a senior staff software engineer in Google Research, where he leads the Genomics research team. His research interests broadly include applying machine learning to the analysis and interpretation of genomic data, and publishing tools and methods as open-source software. Prior to Google, Cory was at 23andMe, where he developed algorithms and tools to improve identity-by-descent detection, haplotype phasing, and genotype imputation, and to apply genetic association study results to drug development. Cory received a PhD in computer science from Stanford, where he developed computational methods to understand vertebrate gene regulation, and a BS in computer science from MIT.
Authored Publications
    Applying multimodal AI to physiological waveforms improves genetic prediction of cardiovascular traits
    Yuchen Zhou
    Mahantesh I. Biradar
    Jacqueline Shreibati
    Dongbing Lai
    Tae-Hwi Schwantes-An
    Robert Luben
    Zachary R. McCaw
    Jorgen Engmann
    Rui Providencia
    Amand Floriaan Schmidt
    Patricia B. Munroe
    Howard Yang
    Andrew Carroll
    Anthony Khawaja
    Babak Behsaz
    American Journal of Human Genetics, 112 (2025), pp. 1562–1579
    Electronic health records, biobanks, and wearable biosensors enable the collection of multiple health modalities from many individuals. Access to multimodal health data provides a unique opportunity for genetic studies of complex traits because different modalities relevant to a single physiological system (e.g., circulatory system) encode complementary and overlapping information. We propose a multimodal deep learning method, multimodal representation learning for genetic discovery on low-dimensional embeddings (M-REGLE), for discovering genetic associations from a joint representation of complementary electrophysiological waveform modalities. M-REGLE jointly learns a lower-dimensional representation (i.e., latent factors) of multimodal physiological waveforms using a convolutional variational autoencoder, performs genome-wide association studies (GWASs) on each latent factor, then combines the results to study the genetics of the underlying system. To validate the advantages of M-REGLE and multimodal learning, we apply it to common cardiovascular modalities (photoplethysmogram [PPG] and electrocardiogram [ECG]) and compare its results to unimodal learning methods in which representations are learned from each data modality separately but are statistically combined for downstream genetic comparison. M-REGLE identifies 19.3% more loci on the 12-lead ECG dataset, 13.0% more loci on the ECG lead I + PPG dataset, and its genetic risk score significantly outperforms the unimodal risk score at predicting cardiac phenotypes, such as atrial fibrillation (Afib), in multiple biobanks.
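
The recipe in the abstract above lends itself to a compact illustration. Below is a minimal sketch, not the authors' code, of how one might jointly encode two stacked waveform modalities with a convolutional VAE and expose latent factors for per-dimension GWAS; the channel count, sequence length, and latent size are hypothetical.

```python
# Illustrative sketch of the M-REGLE idea (not the published implementation):
# jointly encode ECG+PPG channels with a convolutional VAE, then treat each
# latent mean as a quantitative phenotype for a downstream GWAS.
import torch
import torch.nn as nn

class JointWaveformVAE(nn.Module):
    def __init__(self, n_channels=2, seq_len=512, n_latent=12):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, stride=2, padding=3),
            nn.ReLU(),
            nn.Flatten(),
        )
        enc_dim = 64 * (seq_len // 4)
        self.mu = nn.Linear(enc_dim, n_latent)      # latent means
        self.logvar = nn.Linear(enc_dim, n_latent)  # latent log-variances
        self.decoder = nn.Sequential(
            nn.Linear(n_latent, enc_dim),
            nn.Unflatten(1, (64, seq_len // 4)),
            nn.ConvTranspose1d(64, 32, kernel_size=8, stride=2, padding=3),
            nn.ReLU(),
            nn.ConvTranspose1d(32, n_channels, kernel_size=8, stride=2, padding=3),
        )

    def forward(self, x):  # x: (batch, channels, time), e.g. stacked ECG and PPG
        h = self.encoder(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)  # reparameterize
        return self.decoder(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction error plus KL divergence to a standard normal prior.
    recon_err = ((recon - x) ** 2).sum(dim=(1, 2)).mean()
    kl = (-0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(dim=1)).mean()
    return recon_err + kl
```

After training, each coordinate of `mu` would be tested for association with genotypes, and the per-factor results combined afterwards, as the abstract describes.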
    Electrocardiograms (ECGs) are fundamental to cardiac diagnostics, providing noninvasive insights into cardiovascular conditions. Recent advancements in deep learning have led to foundation models (FMs) capable of learning powerful representations of ECG signals. However, these models often fail to fully exploit the periodic nature and diagnostic frequency bands of ECGs, leading to inefficiencies in computational cost and interpretability. We propose a novel ECG foundation model that learns nested embeddings, where each subset of dimensions encodes progressively higher-frequency information. By explicitly modeling frequency structures and applying a correlation penalty, the method achieves compact, high-rank representations that reduce model size without sacrificing performance. We evaluate our approach on two large-scale datasets for embedding redundancy and prediction performance on downstream clinical tasks such as arrhythmia classification and cardiac condition detection. We observe similar AUROC prediction performance and lower embedding redundancy, offering a computationally efficient and interpretable framework for ECG analysis. Finally, the representations obtained from our model in UK Biobank data capture known cardiovascular variants and detect novel loci, which can be applied to drug discovery.
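
To make "nested embeddings" and the "correlation penalty" concrete, here is a hedged sketch under assumed prefix sizes and linear probe heads; it is illustrative only, not the model's actual training objective.

```python
# Hedged sketch of the two ideas in the abstract: a loss applied to nested
# prefixes of the embedding (so short prefixes must carry coarse information),
# plus a penalty on pairwise correlations between embedding dimensions.
import torch
import torch.nn.functional as F

def nested_prediction_loss(embed, target, heads, prefix_sizes=(16, 64, 256)):
    # `heads` is assumed to be a dict {k: nn.Linear(k, n_classes)}: each probe
    # reads only the first k embedding dimensions.
    loss = 0.0
    for k in prefix_sizes:
        loss = loss + F.cross_entropy(heads[k](embed[:, :k]), target)
    return loss / len(prefix_sizes)

def correlation_penalty(embed):
    # Penalize off-diagonal entries of the embedding correlation matrix,
    # pushing the representation toward full rank with little redundancy.
    z = (embed - embed.mean(0)) / (embed.std(0) + 1e-6)
    corr = (z.T @ z) / z.shape[0]
    off_diag = corr - torch.diag(torch.diag(corr))
    return (off_diag ** 2).mean()
```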
    A personal health large language model for sleep and fitness coaching
    Anastasiya Belyaeva
    Zhun Yang
    Nick Furlotte
    Chace Lee
    Erik Schenck
    Yojan Patel
    Jian Cui
    Logan Schneider
    Robby Bryant
    Ryan Gomes
    Allen Jiang
    Roy Lee
    Javier Perez
    Jamie Rogers
    Cathy Speed
    Shyam Tailor
    Megan Walker
    Jeffrey Yu
    Tim Althoff
    Conor Heneghan
    Mark Malhotra
    Shwetak Patel
    Shravya Shetty
    Jiening Zhan
    Daniel McDuff
    Nature Medicine (2025)
    Although large language models (LLMs) show promise for clinical healthcare applications, their utility for personalized health monitoring using wearable device data remains underexplored. Here we introduce the Personal Health Large Language Model (PH-LLM), designed for applications in sleep and fitness. PH-LLM is a version of the Gemini LLM that was fine-tuned for text understanding and reasoning when applied to aggregated daily-resolution numerical sensor data. We created three benchmark datasets to assess multiple complementary aspects of sleep and fitness: expert domain knowledge, generation of personalized insights and recommendations, and prediction of self-reported sleep quality from longitudinal data. PH-LLM achieved scores that exceeded a sample of human experts on multiple-choice examinations in sleep medicine (79% versus 76%) and fitness (88% versus 71%). In a comprehensive evaluation involving 857 real-world case studies, PH-LLM performed similarly to human experts on fitness-related tasks and improved over the base Gemini model in providing personalized sleep insights. Finally, PH-LLM effectively predicted self-reported sleep quality using a multimodal encoding of wearable sensor data, further demonstrating its ability to effectively contextualize wearable modalities. This work highlights the potential of LLMs to revolutionize personal health monitoring via tailored insights and predictions from wearable data, and provides datasets, rubrics and benchmark performance to further accelerate personal health-related LLM research.
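
One ingredient the abstract describes, reasoning over aggregated daily-resolution sensor data in text form, can be illustrated with a toy serialization step. The field names and formatting below are invented for illustration, not PH-LLM's actual input format.

```python
# Toy example: turn aggregated daily wearable records into text an LLM can
# reason over. All field names and values here are made up.
def serialize_daily_record(day: dict) -> str:
    return (
        f"Date: {day['date']}. Sleep duration: {day['sleep_hours']:.1f} h. "
        f"Deep sleep: {day['deep_sleep_min']} min. "
        f"Resting HR: {day['resting_hr']} bpm. Steps: {day['steps']}."
    )

days = [
    {"date": "2024-05-01", "sleep_hours": 6.4, "deep_sleep_min": 52,
     "resting_hr": 58, "steps": 9120},
    {"date": "2024-05-02", "sleep_hours": 7.9, "deep_sleep_min": 71,
     "resting_hr": 56, "steps": 11340},
]
prompt = "\n".join(serialize_daily_record(d) for d in days) + \
    "\nBased on these data, assess this user's sleep quality."
```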
    An outstanding open problem with high therapeutic value is how to design nucleic acid sequences with specific properties. Even just the 5’ UTR sequence admits 2×10^120 possibilities, making exhaustive exploration impossible. Although the field has focused on developing high-quality predictive models, techniques for generating sequences with desired properties are often not well benchmarked. This lack of benchmarking hinders the production of the best molecules from high-quality models and slows the improvement of design algorithms. In this work, we performed the first large-scale comparison of modern sequence design algorithms, evaluating 9 design algorithms across 16 biological tasks (such as transcription factor binding and gene expression). Our benchmark, NucleoBench, compares design algorithms on the same tasks and start sequences across more than 400K experiments, allowing us to derive unique modeling insights on the importance of using gradient information, the role of randomness, scaling properties, and reasonable starting hyperparameters on new problems. We use these insights to present a novel hybrid design algorithm, AdaBeam, which outperforms existing algorithms on 11 of 16 tasks and demonstrates superior scaling properties on long sequences and large predictors. Our benchmark and algorithms are freely available online.
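
AdaBeam itself is not reproduced here, but a minimal beam-search baseline of the kind NucleoBench benchmarks looks roughly as follows; `predict` is a stand-in for any trained sequence-to-property model (e.g., a transcription factor binding predictor).

```python
# Minimal beam-search sequence design baseline (not AdaBeam): propose random
# single-base mutations, score candidates with a predictor, and keep the
# top `beam_width` sequences each round.
import random

ALPHABET = "ACGT"

def mutate(seq: str, pos: int, base: str) -> str:
    return seq[:pos] + base + seq[pos + 1:]

def beam_search_design(start: str, predict, rounds=50, beam_width=8, proposals=64):
    beam = [start]
    for _ in range(rounds):
        candidates = set(beam)
        for seq in beam:
            for _ in range(proposals):
                pos = random.randrange(len(seq))
                candidates.add(mutate(seq, pos, random.choice(ALPHABET)))
        # Keep the highest-scoring candidates under the predictor.
        beam = sorted(candidates, key=predict, reverse=True)[:beam_width]
    return beam[0]
```

Gradient-based and hybrid methods (the abstract notes gradient information matters) replace the random proposal step with moves informed by the predictor's gradients.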
    Towards a Personal Health Large Language Model
    Anastasiya Belyaeva
    Nick Furlotte
    Zhun Yang
    Chace Lee
    Erik Schenck
    Yojan Patel
    Jian Cui
    Logan Schneider
    Robby Bryant
    Ryan Gomes
    Allen Jiang
    Roy Lee
    Javier Perez
    Jamie Rogers
    Cathy Speed
    Shyam Tailor
    Megan Walker
    Jeffrey Yu
    Tim Althoff
    Conor Heneghan
    Mark Malhotra
    Shwetak Patel
    Shravya Shetty
    Jiening Zhan
    Yeswanth Subramanian
    Daniel McDuff
    arXiv (2024)
    Large language models (LLMs) can retrieve, reason over, and make inferences about a wide range of information. In health, most LLM efforts to date have focused on clinical tasks. However, mobile and wearable devices, which are rarely integrated into clinical tasks, provide a rich, continuous, and longitudinal source of data relevant for personal health monitoring. Here we present a new model, Personal Health Large Language Model (PH-LLM), a version of Gemini fine-tuned for text understanding and reasoning over numerical time-series personal health data for applications in sleep and fitness. To systematically evaluate PH-LLM, we created and curated three novel benchmark datasets that test 1) production of personalized insights and recommendations from measured sleep patterns, physical activity, and physiological responses, 2) expert domain knowledge, and 3) prediction of self-reported sleep quality outcomes. For the insights and recommendations tasks we created 857 case studies in sleep and fitness. These case studies, designed in collaboration with domain experts, represent real-world scenarios and highlight the model’s capabilities in understanding and coaching. Through comprehensive human and automatic evaluation of domain-specific rubrics, we observed that both Gemini Ultra 1.0 and PH-LLM are not statistically different from expert performance in fitness and, while experts remain superior for sleep, fine-tuning PH-LLM provided significant improvements in using relevant domain knowledge and personalizing information for sleep insights. To further assess expert domain knowledge, we evaluated PH-LLM performance on multiple choice question examinations in sleep medicine and fitness. PH-LLM achieved 79% on sleep (N=629 questions) and 88% on fitness (N=99 questions), both of which exceed average scores from a sample of human experts as well as benchmarks for receiving continuing education credit in those domains. To enable PH-LLM to predict self-reported assessments of sleep quality, we trained the model to predict self-reported sleep disruption and sleep impairment outcomes from textual and multimodal encoding representations of wearable sensor data. We demonstrate that multimodal encoding is both necessary and sufficient to match the performance of a suite of discriminative models at predicting these outcomes. Although further development and evaluation are necessary in the safety-critical personal health domain, these results demonstrate both the broad knowledge base and capabilities of Gemini models and the benefit of contextualizing physiological data for personal health applications as done with PH-LLM.
    Unsupervised representation learning on high-dimensional clinical data improves genomic discovery and prediction
    Babak Behsaz
    Zachary Ryan McCaw
    Davin Hill
    Robert Luben
    Dongbing Lai
    John Bates
    Howard Yang
    Tae-Hwi Schwantes-An
    Yuchen Zhou
    Anthony Khawaja
    Andrew Carroll
    Brian Hobbs
    Michael Cho
    Nature Genetics (2024)
    Although high-dimensional clinical data (HDCD) are increasingly available in biobank-scale datasets, their use for genetic discovery remains challenging. Here we introduce an unsupervised deep learning model, Representation Learning for Genetic Discovery on Low-Dimensional Embeddings (REGLE), for discovering associations between genetic variants and HDCD. REGLE leverages variational autoencoders to compute nonlinear disentangled embeddings of HDCD, which become the inputs to genome-wide association studies (GWAS). REGLE can uncover features not captured by existing expert-defined features and enables the creation of accurate disease-specific polygenic risk scores (PRSs) in datasets with very few labeled data. We apply REGLE to perform GWAS on respiratory and circulatory HDCD—spirograms measuring lung function and photoplethysmograms measuring blood volume changes. REGLE replicates known loci while identifying others not previously detected. REGLE embeddings are predictive of overall survival, and PRSs constructed from REGLE loci improve disease prediction across multiple biobanks. Overall, REGLE embeddings contain clinically relevant information beyond that captured by existing expert-defined features, leading to improved genetic discovery and disease prediction.
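
As a small worked example of the final step described above: a polygenic risk score is a weighted sum of each individual's effect-allele dosages, with weights taken from per-variant GWAS effect sizes. The dosages and effect sizes below are placeholders, not results from the paper.

```python
# Minimal polygenic risk score (PRS) computation with made-up numbers.
import numpy as np

def polygenic_risk_score(dosages: np.ndarray, effect_sizes: np.ndarray) -> np.ndarray:
    """dosages: (n_individuals, n_variants) counts in {0,1,2};
    effect_sizes: (n_variants,) per-variant GWAS betas."""
    return dosages @ effect_sizes

dosages = np.array([[0, 2, 1], [1, 1, 0]])  # two individuals, three variants
betas = np.array([0.12, -0.05, 0.30])       # hypothetical effect sizes
print(polygenic_risk_score(dosages, betas)) # [0.2  0.07], one score per individual
```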
    Predicting Cardiovascular Disease Risk using Photoplethysmography and Deep Learning
    Sebastien Baur
    Mayank Daswani
    Christina Chen
    Sujay Kakarmath
    Mariam Jabara
    Babak Behsaz
    Shravya Shetty
    Goodarz Danaei
    Diego Ardila
    PLOS Global Public Health, 4(6) (2024), e0003204
    Cardiovascular diseases (CVDs) are responsible for a large proportion of premature deaths in low- and middle-income countries. Early CVD detection and intervention is critical in these populations, yet many existing CVD risk scores require a physical examination or lab measurements, which can be challenging in such health systems due to limited accessibility. We investigated the potential to use photoplethysmography (PPG), a sensing technology available on most smartphones that can potentially enable large-scale screening at low cost, for CVD risk prediction. We developed a deep learning PPG-based CVD risk score (DLS) to predict the probability of having major adverse cardiovascular events (MACE: non-fatal myocardial infarction, stroke, and cardiovascular death) within ten years, given only age, sex, smoking status and PPG as predictors. We compare the DLS with the office-based refit-WHO score, which adopts the shared predictors from the WHO and Globorisk scores (age, sex, smoking status, height, weight and systolic blood pressure) but is refitted on the UK Biobank (UKB) cohort. All models were trained on a development dataset (141,509 participants) and evaluated on a geographically separate test dataset (54,856 participants), both from UKB. The DLS’s C-statistic (71.1%, 95% CI 69.9–72.4) is non-inferior to the office-based refit-WHO score (70.9%, 95% CI 69.7–72.2; non-inferiority margin of 2.5%, p<0.01) in the test dataset. The calibration of the DLS is satisfactory, with a 1.8% mean absolute calibration error. Adding DLS features to the office-based score increases the C-statistic by 1.0% (95% CI 0.6–1.4). The DLS predicts ten-year MACE risk comparably to the office-based refit-WHO score. Interpretability analyses suggest that the DLS-extracted features are related to PPG waveform morphology and are independent of heart rate. Our study provides a proof of concept and suggests the potential of a PPG-based approach for community-based primary prevention strategies in resource-limited regions.
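
The headline metric here, the C-statistic (Harrell's concordance index), has a simple definition worth making concrete: among comparable pairs of individuals, the fraction where the person who experienced the event earlier was also assigned the higher predicted risk. A minimal sketch with made-up data:

```python
# Harrell's C-statistic for risk scores with (possibly censored) event times.
# The quadratic-time loop is fine for illustration; real evaluations use
# optimized implementations.
import numpy as np

def c_statistic(risk, time, event):
    concordant, comparable = 0.0, 0
    n = len(risk)
    for i in range(n):
        for j in range(n):
            # Pair (i, j) is comparable if i had an observed event before j's time.
            if event[i] and time[i] < time[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5  # ties count half
    return concordant / comparable

risk = np.array([0.9, 0.2, 0.6, 0.4])   # predicted risks
time = np.array([2.0, 10.0, 4.0, 10.0]) # years to event or censoring
event = np.array([1, 0, 1, 0])          # 1 = MACE observed
print(c_statistic(risk, time, event))    # 1.0 for this toy data
```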
    Multimodal LLMs for health grounded in individual-specific data
    Anastasiya Belyaeva
    Krish Eswaran
    Shravya Shetty
    Andrew Carroll
    Nick Furlotte
    ICML Workshop on Machine Learning for Multimodal Healthcare Data (2023)
    Large language models (LLMs) have shown an impressive ability to solve tasks in a wide range of fields, including health. Within the health domain, there are many data modalities relevant to an individual’s health status. To effectively solve tasks related to individual health, LLMs will need the ability to use a diverse set of features as context. However, the best way to encode and inject complex high-dimensional features into the input stream of an LLM remains an active area of research. Here, we explore the ability of a foundation LLM to estimate disease risk given health-related input features. First, we evaluate serialization of structured individual-level health data into text along with in-context learning and prompt-tuning approaches. We find that the LLM performs better than random in the zero-shot and few-shot cases, and has comparable and often equivalent performance to baseline after prompt tuning. Next, we propose a way to encode complex non-text data modalities into the token embedding space and then use this encoding to construct multimodal sentences. We show that this multimodal LLM achieves better or equivalent performance compared to baseline models. Overall, our results show the potential of multimodal LLMs grounded in individual health data to solve complex tasks such as risk prediction.
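
A hedged sketch of the multimodal mechanism described above: a small adapter maps a non-text modality (e.g., a spirogram) into vectors with the same width as the LLM's token embeddings, which are then spliced into the embedded prompt. Shapes and module names are assumptions, not the paper's implementation.

```python
# Sketch: project a non-text modality into the LLM token embedding space and
# build a "multimodal sentence" of embeddings. Dimensions are hypothetical.
import torch
import torch.nn as nn

class ModalityAdapter(nn.Module):
    def __init__(self, input_dim=1000, n_soft_tokens=4, token_dim=4096):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(input_dim, 512), nn.ReLU(),
            nn.Linear(512, n_soft_tokens * token_dim),
        )
        self.n_soft_tokens, self.token_dim = n_soft_tokens, token_dim

    def forward(self, x):  # x: (batch, input_dim) raw modality features
        return self.proj(x).view(-1, self.n_soft_tokens, self.token_dim)

def build_multimodal_inputs(prefix_emb, modality_emb, suffix_emb):
    # Multimodal "sentence" = [text prefix][modality soft tokens][text suffix],
    # all in the LLM's embedding space, passed to the model as input embeddings.
    return torch.cat([prefix_emb, modality_emb, suffix_emb], dim=1)
```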
    Longitudinal fundus imaging and its genome-wide association analysis provide evidence for a human retinal aging clock
    Sara Ahadi
    Kenneth A. Wilson Jr.
    Drew Bryant
    Orion Pritchard
    Ajay Kumar
    Enrique M Carrera
    Ricardo Lamy
    Jay M Stewart
    Avinash Varadarajan
    Pankaj Kapahi
    Ali Bashir
    eLife (2023)
    Background: Biological age, distinct from an individual’s chronological age, has been studied extensively through predictive aging clocks. However, these clocks have limited accuracy on short time-scales. Deep learning approaches on imaging datasets of the eye have proven powerful for a variety of quantitative phenotype inference tasks and provide an opportunity to explore organismal aging and tissue health.
    Methods: Here we trained deep learning models on fundus images from the EyePACS dataset to predict individuals’ chronological age. These predictions lead to the concept of a retinal aging clock, which we then employed for a series of downstream longitudinal analyses. The retinal aging clock was used to assess the predictive power of aging inference, termed eyeAge, on short time-scales using longitudinal fundus imaging data from a subset of patients. Additionally, the model was applied to a separate cohort from the UK Biobank to validate the model and perform a GWAS. The top candidate gene was then tested in a fly model of eye aging.
    Findings: EyeAge predicted chronological age with a mean absolute error of 3.26 years, lower than that of other aging clocks. Additionally, eyeAge was highly independent of blood-marker-based measures of biological age (e.g., “phenotypic age”), maintaining a hazard ratio of 1.026 even in the presence of phenotypic age. Longitudinal studies showed that the resulting models were able to predict individuals’ aging on time-scales of less than a year with 71% accuracy. Notably, we observed a significant individual-specific component to the prediction. This observation was confirmed with the identification of multiple GWAS hits in the independent UK Biobank cohort. Knockdown of the top hit, ALKAL2, which was previously shown to extend lifespan in flies, also slowed age-related decline in vision in flies.
    Interpretation: In conclusion, predicted age from retinal images can be used as a biomarker of biological aging in a given individual, independently of phenotypic age. This study demonstrates the utility of the retinal aging clock for studying aging and age-related diseases and for quantitatively measuring aging on very short time-scales, potentially opening avenues for quick and actionable evaluation of gero-protective therapeutics.
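
The modeling setup is standard enough to sketch: a CNN regressor trained to predict chronological age from a fundus image, with the residual (the "eyeAge gap") as the downstream quantity of interest. The backbone choice and sizes below are assumptions, not the paper's architecture.

```python
# Sketch of a retinal aging clock: image -> predicted age, trained with MAE
# (matching the reported mean-absolute-error metric). Backbone is assumed.
import torch
import torch.nn as nn
import torchvision.models as models

backbone = models.resnet50(weights=None)
backbone.fc = nn.Linear(backbone.fc.in_features, 1)  # regression head: age in years

def training_step(images, ages, optimizer):
    optimizer.zero_grad()
    pred = backbone(images).squeeze(1)
    loss = nn.functional.l1_loss(pred, ages)  # mean absolute error
    loss.backward()
    optimizer.step()
    return loss.item()

def eye_age_gap(images, ages):
    # Positive gap = retina predicted "older" than chronological age.
    with torch.no_grad():
        return backbone(images).squeeze(1) - ages
```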
    Accurate human genome analysis with Element Avidity sequencing
    Andrew Carroll
    Bryan Lajoie
    Daniel Cook
    Kelly N. Blease
    Kishwar Shafin
    Lucas Brambrink
    Maria Nattestad
    Semyon Kruglyak
    bioRxiv (2023)
    We investigate Avidity, the new sequencing technology from Element Biosciences. We show that Avidity whole-genome sequencing matches the mapping and variant-calling accuracy of Illumina at high coverages (30x-50x) and is noticeably more accurate at lower coverages (20x-30x). We quantify base error rates of Element reads, finding lower error rates, especially in homopolymer and tandem repeat regions. We use Element’s ability to generate paired-end sequencing with longer insert sizes than typical short-read sequencing. We show that longer insert sizes result in even higher accuracy, with long-insert Element sequencing giving noticeably more accurate genome analyses at all coverages.