Applied science

Combining computer science with physics and biology to create breakthroughs that help the world.

About the team

Computer science and natural science are complementary: breakthroughs in one can lead to remarkable advances in the other. The goal of the Applied Science organization at Google is to cross-fertilize these two fields. There are four main efforts in Applied Science: Quantum Computing, Google Accelerated Science, Climate and Energy, and Scientific Computing Tools.

Quantum Computing uses advances in applied physics to push the state of the art in computation. Google Accelerated Science and Climate and Energy do the opposite: they use the latest advances in machine learning and artificial intelligence to accelerate progress in the natural sciences, including societally important areas such as biomedical research and zero-carbon energy sources. Finally, we supply Scientific Computing Tools such as Colab to many internal groups to enhance their data and machine learning productivity.

Team focus summaries

Climate and energy

The Climate & Energy team is exploring how to use large-scale computing and machine intelligence to diminish or avoid climate disruption. We partner with fusion companies to accelerate the progress of commercially viable fusion, model techno-economic scenarios for decarbonization, research new techniques for carbon sequestration, and look for novel ways to improve our understanding of earth's ecosystem response to climate change.


Physics

The Physics team seeks to combine advances in machine learning with other Google technologies to deepen our understanding of the physical world. Current projects include machine learning for scientific computing, efficient algorithms for solving nonlinear partial differential equations, differentiable algorithms for interpreting microscopy data, microscopy on phones, and algorithms for understanding dysarthric speech.

Quantum AI

The Quantum AI Lab is building quantum processors and algorithms to dramatically accelerate computational tasks for machine intelligence. We are developing quantum algorithms with a particular focus on those which can already run on today’s pre-error corrected quantum processors. Quantum algorithms for optimization, sampling, and quantum simulation hold the promise of dramatic speedups over the fastest classical computers.

Featured publications

Exponential Quantum Speedup in Simulating Coupled Classical Oscillators
Dominic Berry
Rolando Somma
Nathan Wiebe
Physical Review X, vol. 13 (2023), pp. 041041
We present a quantum algorithm for simulating the classical dynamics of 2^n coupled oscillators (e.g., 2^n masses coupled by springs). Our approach leverages a mapping between the Schrödinger equation and Newton's equations for harmonic potentials such that the amplitudes of the evolved quantum state encode the momenta and displacements of the classical oscillators. When individual masses and spring constants can be efficiently queried, and when the initial state can be efficiently prepared, the complexity of our quantum algorithm is polynomial in n, almost linear in the evolution time, and sublinear in the sparsity. As an example application, we apply our quantum algorithm to efficiently estimate the kinetic energy of an oscillator at any time, for a specification of the problem that we prove is BQP-complete. Thus, our approach solves a potentially practical application with an exponential speedup over classical computers. Finally, we show that under similar conditions our approach can efficiently simulate more general classical harmonic systems with 2^n modes.
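The classical dynamics that the quantum algorithm encodes can be sketched directly for small systems: for unit masses coupled by springs, Newton's equations reduce to x'' = -Kx, which is solved exactly by normal-mode decomposition. The following is a minimal NumPy sketch of that classical side only (not the quantum algorithm itself); the chain topology and function names are illustrative assumptions.

```python
import numpy as np

def chain_coupling_matrix(n, k=1.0, m=1.0):
    """Stiffness matrix K/m for n unit masses on a line, nearest neighbors
    coupled by springs of constant k, with fixed ends (all eigenvalues > 0)."""
    K = np.zeros((n, n))
    for i in range(n):
        K[i, i] = 2 * k / m
        if i + 1 < n:
            K[i, i + 1] = K[i + 1, i] = -k / m
    return K

def evolve(K, x0, v0, t):
    """Solve x'' = -K x exactly via normal modes (eigendecomposition of K)."""
    w2, U = np.linalg.eigh(K)          # squared mode frequencies, mode vectors
    w = np.sqrt(w2)
    a0, b0 = U.T @ x0, U.T @ v0        # initial conditions in the mode basis
    x_t = U @ (a0 * np.cos(w * t) + (b0 / w) * np.sin(w * t))
    v_t = U @ (-a0 * w * np.sin(w * t) + b0 * np.cos(w * t))
    return x_t, v_t
```

Conservation of the total energy E = ½ vᵀv + ½ xᵀKx gives a quick sanity check on the evolved state.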
Quantum Simulation of Exact Electron Dynamics can be more Efficient than Classical Mean-Field Methods
William J. Huggins
Dominic W. Berry
Shu Fay Ung
Andrew Zhao
David Reichman
Andrew Baczewski
Joonho Lee
Nature Communications, vol. 14 (2023), pp. 4058
Quantum algorithms for simulating electronic ground states are slower than popular classical mean-field algorithms such as Hartree-Fock and density functional theory, but offer higher accuracy. Accordingly, quantum computers have been predominantly regarded as competitors to only the most accurate and costly classical methods for treating electron correlation. However, here we tighten bounds showing that certain first quantized quantum algorithms enable exact time evolution of electronic systems with exponentially less space and polynomially fewer operations in basis set size than conventional real-time time-dependent Hartree-Fock and density functional theory. Although the need to sample observables in the quantum algorithm reduces the speedup, we show that one can estimate all elements of the k-particle reduced density matrix with a number of samples scaling only polylogarithmically in basis set size. We also introduce a more efficient quantum algorithm for first quantized mean-field state preparation that is likely cheaper than the cost of time evolution. We conclude that quantum speedup is most pronounced for finite temperature simulations and suggest several practically important electron dynamics problems with potential quantum advantage.
Longitudinal fundus imaging and its genome-wide association analysis provide evidence for a human retinal aging clock
Sara Ahadi
Kenneth A Wilson Jr
Orion Pritchard
Ajay Kumar
Enrique M Carrera
Ricardo Lamy
Jay M Stewart
Avinash Varadarajan
Pankaj Kapahi
Ali Bashir
eLife (2023)
Background: Biological age, distinct from an individual's chronological age, has been studied extensively through predictive aging clocks. However, these clocks have limited accuracy on short time-scales. Deep learning approaches on imaging datasets of the eye have proven powerful for a variety of quantitative phenotype inference tasks and provide an opportunity to explore organismal aging and tissue health.

Methods: Here we trained deep learning models on fundus images from the EyePACS dataset to predict individuals' chronological age. These predictions lead to the concept of a retinal aging clock, which we then employed for a series of downstream longitudinal analyses. The retinal aging clock was used to assess the predictive power of aging inference, termed eyeAge, on short time-scales using longitudinal fundus imaging data from a subset of patients. Additionally, the model was applied to a separate cohort from the UK Biobank to validate the model and perform a GWAS. The top candidate gene was then tested in a fly model of eye aging.

Findings: EyeAge predicted age with a mean absolute error of 3.26 years, which is much lower than that of other aging clocks. Additionally, eyeAge was highly independent of blood-marker-based measures of biological age (e.g., "phenotypic age"), maintaining a hazard ratio of 1.026 even in the presence of phenotypic age. Longitudinal studies showed that the resulting models were able to predict individuals' aging on time-scales of less than a year with 71% accuracy. Notably, we observed a significant individual-specific component to the prediction. This observation was confirmed by the identification of multiple GWAS hits in the independent UK Biobank cohort. Knockdown of the top hit, ALKAL2, which was previously shown to extend lifespan in flies, also slowed age-related decline in vision in flies.

Interpretation: In conclusion, predicted age from retinal images can be used as a biomarker of biological aging in a given individual, independently of phenotypic age. This study demonstrates the utility of the retinal aging clock for studying aging and age-related diseases and for quantitatively measuring aging on very short time-scales, potentially opening avenues for quick and actionable evaluation of gero-protective therapeutics.
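As a minimal illustration of the quantities reported above (not the authors' code), the retinal "age gap" and its mean absolute error can be computed as follows; the function names are hypothetical.

```python
import numpy as np

def eye_age_gap(predicted_age, chronological_age):
    """Retinal age gap: predicted minus chronological age.
    Positive values suggest faster-than-chronological biological aging."""
    return np.asarray(predicted_age, float) - np.asarray(chronological_age, float)

def mean_absolute_error(predicted_age, chronological_age):
    """MAE between model-predicted and chronological age, in years
    (the paper reports 3.26 years for eyeAge)."""
    return float(np.mean(np.abs(eye_age_gap(predicted_age, chronological_age))))
```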
Estimates of broadband upwelling irradiance from GOES-16 ABI
Sixing Chen
Vincent Rudolf Meijer
Joe Ng
Geoff Davis
Carl Elkin
Remote Sensing of Environment, vol. 285 (2023)
Satellite-derived estimates of the Earth's radiation budget are crucial for understanding and predicting the weather and climate. However, existing satellite products measuring broadband outgoing longwave radiation (OLR) and reflected shortwave radiation (RSR) have spatio-temporal resolutions that are too coarse to evaluate important radiative forcers like aircraft condensation trails. We present a neural network which estimates OLR and RSR based on narrowband radiances, using collocated Clouds and the Earth's Radiant Energy System (CERES) and GOES-16 Advanced Baseline Imager (ABI) data. The resulting estimates agree strongly with the CERES data products (R^2 = 0.977 for OLR and 0.974 for RSR on CERES Level 2 footprints), and we provide open access to the collocated satellite data and model outputs for all available GOES-16 ABI data over the four years 2018–2021.
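The regression task above has a simple shape: map per-footprint narrowband radiances to broadband fluxes and score with R². Here is a hedged sketch using a linear least-squares baseline in place of the paper's neural network; the array shapes and function names are illustrative assumptions.

```python
import numpy as np

def fit_linear_baseline(radiances, broadband):
    """Least-squares map from narrowband radiances (N, channels) to a
    broadband flux (N,). A stand-in for the paper's neural network,
    showing only the shape of the regression task."""
    A = np.column_stack([radiances, np.ones(len(radiances))])  # append bias column
    coef, *_ = np.linalg.lstsq(A, broadband, rcond=None)
    return coef

def predict(coef, radiances):
    """Apply the fitted linear map to new radiance vectors."""
    return np.column_stack([radiances, np.ones(len(radiances))]) @ coef

def r2_score(y_true, y_pred):
    """Coefficient of determination, the metric quoted in the abstract."""
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - np.mean(y_true)) ** 2)
    return 1.0 - ss_res / ss_tot
```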
Practical quantum computing will require error rates well below what is achievable with physical qubits. Quantum error correction [1, 2] offers a path to algorithmically relevant error rates by encoding logical qubits within many physical qubits, where increasing the number of physical qubits enhances protection against physical errors. However, introducing more qubits also increases the number of error sources, so the density of errors must be sufficiently low for logical performance to improve with increasing code size. Here, we report the measurement of logical qubit performance scaling across multiple code sizes, and demonstrate that our system of superconducting qubits has sufficient performance to overcome the additional errors from increasing qubit number. We find our distance-5 surface code logical qubit modestly outperforms an ensemble of distance-3 logical qubits on average, both in terms of logical error probability over 25 cycles and logical error per cycle (2.914% ± 0.016% compared to 3.028% ± 0.023%). To investigate damaging, low-probability error sources, we run a distance-25 repetition code and observe a 1.7 × 10^−6 logical error per round floor set by a single high-energy event (1.6 × 10^−7 when excluding this event). We are able to accurately model our experiment, and from this model we extract error budgets that highlight the biggest challenges for future systems. These results mark the first experimental demonstration where quantum error correction begins to improve performance with increasing qubit number, and they illuminate the path to reaching the logical error rates required for computation.
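The per-cycle and cumulative logical error figures quoted above can be related by a standard exponential-decay model of logical fidelity, assuming independent error rounds. This is a common convention in the quantum error correction literature, sketched below; it is not necessarily the exact fitting procedure used in the paper.

```python
def cumulative_error(eps_per_cycle, cycles):
    """Logical error probability after `cycles` rounds, under the standard
    decay model: fidelity (1 - 2*eps) shrinks geometrically per cycle."""
    return 0.5 * (1.0 - (1.0 - 2.0 * eps_per_cycle) ** cycles)

def per_cycle_error(eps_total, cycles):
    """Invert the decay model to recover the logical error per cycle
    from a cumulative error probability (e.g., over 25 cycles)."""
    return 0.5 * (1.0 - (1.0 - 2.0 * eps_total) ** (1.0 / cycles))
```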
The majority of IPCC scenarios call for active CO2 removal (CDR) to remain below 2°C of warming. On geological timescales, ocean uptake regulates atmospheric CO2 concentration, with two homeostats driving sequestration: dissolution of deep-ocean calcite deposits and terrestrial weathering of silicate rocks, acting on 1 ka to 100 ka timescales. Many current ocean-based CDR proposals effectively act to accelerate the latter. Here we present a method which relies purely on the redistribution and dilution of acidity from a thin layer of the surface ocean to a thicker layer of deep ocean, with the aim of accelerating the former carbonate homeostasis. This downward transport can be seen as analogous to the action of the natural biological carbon pump. The method offers advantages over other ocean CDR methods and direct air capture (DAC) approaches: the conveyance of mass is minimized (acidity is pumped in situ to depth), and expensive mining, grinding, and distribution of alkaline material is eliminated. No dilute substance needs to be concentrated, avoiding the Sherwood's Rule costs typically encountered in DAC. Finally, no terrestrial material is added to the ocean, avoiding significant alteration of seawater ion concentrations and the heavy-metal toxicity issues encountered in mineral-based alkalinity schemes. The artificial transport of acidity accelerates the natural deep-ocean invasion and subsequent compensation by calcium carbonate. It is estimated that the total compensation capacity of the ocean is on the order of 1500 GtC. We show through simulation that pumping of ocean acidity could remove up to 150 GtC from the atmosphere by 2100 without excessive increase of local ocean pH. For an acidity release below 2000 m, the relaxation half-time of CO2 return to the atmosphere was found to be ~2500 years (~1000 years without accounting for carbonate dissolution), with ~85% retained for at least 300 years.

The uptake efficiency and residence time were found to vary with the location of acidity pumping, and optimal areas were calculated. Requiring only local resources (ocean water and energy), this method could be uniquely suited to utilizing otherwise-stranded open-ocean energy sources at scale. We examine technological pathways that could be used to implement it and present a brief techno-economic estimate of $130–250/tCO2 at current prices, falling as low as $86/tCO2 under modest learning-curve assumptions.
Noise-resilient Majorana Edge Modes on a Chain of Superconducting Qubits
Alejandro Grajales Dau
Alex Crook
Alex Opremcak
Alexa Rubinov
Alexander Korotkov
Alexandre Bourassa
Alexei Kitaev
Alexis Morvan
Andre Gregory Petukhov
Andrew Dunsworth
Andrey Klots
Anthony Megrant
Ashley Anne Huff
Benjamin Chiaro
Bernardo Meurer Costa
Bob Benjamin Buckley
Brooks Foxen
Charles Neill
Christopher Schuster
Cody Jones
Daniel Eppens
Dar Gilboa
Dave Landhuis
Dmitry Abanin
Doug Strain
Ebrahim Forati
Edward Farhi
Fedor Kostritsa
Frank Carlton Arute
Guifre Vidal
Igor Aleiner
Jamie Yao
Jeremy Patterson Hilton
Joao Basso
John Mark Kreikebaum
Joonho Lee
Juan Atalaya
Juhwan Yoo
Justin Thomas Iveland
Kannan Aryaperumal Sankaragomathi
Kenny Lee
Kim Ming Lau
Kostyantyn Kechedzhi
Kunal Arya
Lara Faoro
Leon Brill
Marco Szalay
Masoud Mohseni
Michael Blythe Broughton
Michael Newman
Michel Henri Devoret
Mike Shearn
Nicholas Bushnell
Orion Martin
Paul Conner
Pavel Laptev
Ping Yeh
Rajeev Acharya
Rebecca Potter
Reza Fatemi
Roberto Collins
Sergei Isakov
Shirin Montazeri
Steve Habegger
Thomas E O'Brien
Trent Huang
Trond Ikdahl Andersen
Vadim Smelyanskiy
Vladimir Shvarts
Wayne Liu
William Courtney
William Giang
William J. Huggins
Wojtek Mruczkiewicz
Xiao Mi
Yaxing Zhang
Yu Chen
Yuan Su
Zijun Chen
Science (2022) (to appear)
Inherent symmetry of a quantum system may protect its otherwise fragile states. Leveraging such protection requires testing its robustness against uncontrolled environmental interactions. Using 47 superconducting qubits, we implement the kicked Ising model which exhibits Majorana edge modes (MEMs) protected by a $\mathbb{Z}_2$-symmetry. Remarkably, we find that any multi-qubit Pauli operator overlapping with the MEMs exhibits a uniform decay rate comparable to single-qubit relaxation rates, irrespective of its size or composition. This finding allows us to accurately reconstruct the exponentially localized spatial profiles of the MEMs. Spectroscopic measurements further indicate exponentially suppressed hybridization between the MEMs over larger system sizes, which manifests as a strong resilience against low-frequency noise. Our work elucidates the noise sensitivity of symmetry-protected edge modes in a solid-state environment.
Next Day Wildfire Spread: A Machine Learning Dataset to Predict Wildfire Spreading From Remote-Sensing Data
Fantine Huot
Lily Hu
Tharun Pratap Sankar
Matthias Ihme
Yi-fan Chen
IEEE Transactions on Geoscience and Remote Sensing, vol. 60 (2022), pp. 1-13
Predicting wildfire spread is critical for land management and disaster preparedness. To this end, we present "Next Day Wildfire Spread," a curated, large-scale, multivariate dataset of historical wildfires aggregating nearly a decade of remote-sensing data across the United States. In contrast to existing fire datasets based on Earth observation satellites, our dataset combines 2-D fire data with multiple explanatory variables (e.g., topography, vegetation, weather, drought index, and population density) aligned over 2-D regions, providing a feature-rich dataset for machine learning. To demonstrate the usefulness of this dataset, we implement a neural network that takes advantage of the spatial information of these data to predict wildfire spread. We compare the performance of the neural network with other machine learning models: logistic regression and random forest. This dataset can be used as a benchmark for developing wildfire propagation models based on remote-sensing data for a lead time of one day.
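As a sketch of the simplest baseline mentioned above, a per-pixel logistic regression can be trained on flattened feature vectors. This is an illustrative stand-in, not the paper's implementation; the feature layout and hyperparameters are assumptions.

```python
import numpy as np

def train_logistic(X, y, lr=0.1, steps=500):
    """Per-pixel logistic-regression baseline. X is (pixels, features),
    e.g. topography, vegetation, weather at each pixel; y is the 0/1
    next-day fire mask. Plain gradient descent on the logistic loss."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # sigmoid probabilities
        w -= lr * (X.T @ (p - y)) / len(y)       # mean gradient w.r.t. weights
        b -= lr * np.mean(p - y)                 # mean gradient w.r.t. bias
    return w, b

def predict_proba(w, b, X):
    """Probability that each pixel burns on the next day."""
    return 1.0 / (1.0 + np.exp(-(X @ w + b)))
```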
Comprehensive Imaging of C-2W Plasmas: Instruments and Applications
Erik Granstedt
Deepak Gupta
James Sweeney
Matthew Tobin
the TAE team
Review of Scientific Instruments, vol. 92 (2021), pp. 043515
The C-2W device (“Norman”) has produced and sustained beam-driven field-reversed configuration (FRC) plasmas embedded in a magnetic mirror geometry using neutral beams and end-bias electrodes located in expander divertors. The device comprises many discrete vessels, and a suite of spatially and radiometrically calibrated high-speed camera systems has been deployed to visualize the plasma throughout. Beyond global visualization of the plasma evolution, this imaging suite has been used in a variety of applications. Reconstruction of the magnetic field in the equilibrium vessel is complicated by eddy currents in conducting structures, and thus far non-perturbative measurements of the internal field have not been available. Tomographic reconstruction of O4+ impurity emission provides an independent check of magnetic modeling and indirect evidence for field reversal within the FRC. Voltages up to 3.5 kV are applied to electrodes in the expander divertors to control the radial electric field in the plasma located on open field lines. This has been shown to improve the macroscopic stability of the FRC; however, a full model of how electrode potentials propagate to the center of the plasma is the subject of ongoing work. Imaging in the expander divertors is used to study gas ionization and to identify metal arcing from electrode surfaces.
Contrails (condensation trails) are the ice clouds that trail behind aircraft as they fly through cold and moist regions of the atmosphere. Avoiding these regions could potentially be an inexpensive way to reduce over half of aviation's impact on global warming. Development and evaluation of these avoidance strategies greatly benefits from the ability to detect contrails in satellite imagery. Since little to no public data is available to develop such contrail detectors, we construct and release a dataset of several thousand Landsat-8 scenes with pixel-level annotations of contrails. The dataset will continue to grow, but it currently contains 3431 scenes (of which 47% have at least one contrail), representing 800+ person-hours of labeling time.
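Pixel-level contrail annotations are naturally evaluated with mask-overlap metrics. Below is a minimal sketch of intersection-over-union for binary masks; this is a common choice for segmentation labels, not a metric the dataset description prescribes.

```python
import numpy as np

def mask_iou(pred, truth):
    """Intersection-over-union between two binary pixel masks
    (contrail vs. background). Returns 1.0 when both masks are empty."""
    pred, truth = np.asarray(pred, bool), np.asarray(truth, bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    return inter / union if union else 1.0
```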