Evaluating unsupervised disentangled representations for genomic discovery and disease risk prediction
Abstract
High-dimensional clinical data have become invaluable resources for genetic studies, due to their accessibility in biobank-scale datasets and the development of high-performance modeling techniques, especially deep learning. Recent work has shown that low-dimensional embeddings of these clinical data, learned by variational autoencoders (VAEs), can be used for genome-wide association studies and polygenic risk prediction. In this work, we consider multiple unsupervised learning methods for learning disentangled representations, namely autoencoders, VAEs, beta-VAEs, and FactorVAEs, in the context of genetic studies. Using spirograms from UK Biobank as a running example, we observed improvements in the number of genome-wide significant loci, heritability, and polygenic risk scores for asthma and chronic obstructive pulmonary disease when using beta-VAEs or FactorVAEs, compared to standard VAEs or non-variational autoencoders. We also observed that FactorVAEs are consistently effective for genomic discovery and risk prediction across multiple settings of the regularization hyperparameter, whereas beta-VAEs are much more sensitive to the hyperparameter value.
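For context on the regularization hyperparameter referenced above, the beta-VAE and FactorVAE objectives are commonly written in roughly the following form (a sketch of the standard formulations from the disentanglement literature, not a description of this paper's exact training setup; beta and gamma denote the regularization hyperparameters):

\[
\mathcal{L}_{\beta\text{-VAE}}(x) = \mathbb{E}_{q_\phi(z \mid x)}\left[\log p_\theta(x \mid z)\right] - \beta \, \mathrm{KL}\left(q_\phi(z \mid x) \,\|\, p(z)\right)
\]

\[
\mathcal{L}_{\text{FactorVAE}}(x) = \mathbb{E}_{q_\phi(z \mid x)}\left[\log p_\theta(x \mid z)\right] - \mathrm{KL}\left(q_\phi(z \mid x) \,\|\, p(z)\right) - \gamma \, \mathrm{TC}(z)
\]

Here \(\mathrm{TC}(z)\) is the total correlation of the aggregated posterior, typically estimated with an auxiliary discriminator; setting \(\beta = 1\) in the first objective, or \(\gamma = 0\) in the second, recovers the standard VAE.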