John C. Platt
John Platt is a Google Fellow and a technical leader for both Climate and Science. John is best known for his work in machine learning: the SMO algorithm for training support vector machines and methods for calibrating the output of models. More broadly, he is an applied mathematician who has worked in numerous fields, including neural networks, computer graphics, planetary science, analog circuits, quantum computing, numerical analysis, computer vision, human-computer interaction, support vector machines, data systems, Python, and computational geometry. He has discovered two asteroids, and won a Technical Academy Award in 2006 for his work in computer graphics.
John currently leads the Applied Science branch of Google Research, which works at the intersection of computer science and physical or biological science. His latest goal is to help solve climate change. Previously, he was Deputy Director of the Microsoft Research Redmond lab and Director of Research at Synaptics.
Research Areas
Authored Publications
A scalable system to measure contrail formation on a per-flight basis
Erica Brand
Sebastian Eastham
Carl Elkin
Thomas Dean
Zebediah Engberg
Ulrike Hager
Joe Ng
Dinesh Sanekommu
Tharun Sankar
Marc Shapiro
Environmental Research Communications (2024)
Abstract
In this work we describe a scalable, automated system to determine from satellite data whether a given flight has made a persistent contrail. The system works by comparing flight segments to contrails detected by a computer vision algorithm running on images from the GOES-16 Advanced Baseline Imager. We develop a 'flight matching' algorithm and use it to label each flight segment as a 'match' or 'non-match'. We perform this analysis on 1.6 million flight segments and compare these labels to existing contrail prediction methods based on weather forecast data. The result is an analysis of which flights make persistent contrails that is several orders of magnitude larger than any previous work. We find that in many cases current contrail prediction models fail to correctly predict whether a flight will match a contrail.
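The matching step can be sketched as a spatio-temporal proximity test. The sketch below is illustrative only: the data structures, thresholds, and simple great-circle test are assumptions, not the paper's actual flight-matching algorithm.

```python
from dataclasses import dataclass
import math

@dataclass
class Segment:
    lat: float  # segment midpoint latitude, degrees
    lon: float  # segment midpoint longitude, degrees
    t: float    # time, hours

@dataclass
class Contrail:
    lat: float  # detected contrail centroid latitude, degrees
    lon: float  # detected contrail centroid longitude, degrees
    t: float    # satellite image time, hours

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometers."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def label_segment(seg, contrails, max_km=20.0, max_hours=0.5):
    """Label a flight segment 'match' if any detected contrail is
    close to it in both space and time (hypothetical thresholds)."""
    for c in contrails:
        if (abs(c.t - seg.t) <= max_hours
                and haversine_km(seg.lat, seg.lon, c.lat, c.lon) <= max_km):
            return "match"
    return "non-match"
```

For example, a contrail detection a few kilometers from a segment within the time window yields a `"match"`, while a detection hundreds of kilometers away yields `"non-match"`.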
The effect of uncertainty in humidity and model parameters on the prediction of contrail energy forcing
Marc Shapiro
Zebediah Engberg
Tharun Sankar
Marc E.J. Stettler
Roger Teoh
Ulrich Schumann
Susanne Rohs
Erica Brand
Environmental Research Communications, 6 (2024), pp. 095015
Abstract
Previous work has shown that while the net effect of aircraft condensation trails (contrails) on the climate is warming, the exact magnitude of the energy forcing per meter of contrail remains uncertain. In this paper, we explore the skill of a Lagrangian contrail model (CoCiP) in identifying flight segments with high contrail energy forcing. We find that this skill is greater than that of climatological predictions alone, even accounting for uncertainty in weather fields and model parameters. We estimate the uncertainty due to humidity by using the ensemble ERA5 weather reanalysis from the European Centre for Medium-Range Weather Forecasts (ECMWF) as Monte Carlo inputs to CoCiP. We remove bias and correct under-dispersion in the ERA5 humidity data by forcing a match to the distribution of in-situ humidity measurements taken at cruising altitude. We take CoCiP energy-forcing estimates calculated using one of the ensemble members as a proxy for ground truth, and report the skill of CoCiP in identifying segments with large positive proxy energy forcing. We further estimate the uncertainty due to model parameters in CoCiP by performing Monte Carlo simulations with CoCiP model parameters drawn from uncertainty distributions consistent with the literature. When CoCiP outputs are averaged over seasons to form climatological predictions, the skill in predicting the proxy is 44%, while the skill of per-flight CoCiP outputs is 84%. If these results carry over to the true (unknown) contrail EF, they indicate that per-flight energy-forcing predictions can reduce the number of potential contrail-avoidance route adjustments by 2x, reducing both the cost and fuel impact of contrail avoidance.
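The parameter Monte Carlo described above can be sketched as follows. The toy forcing model and the parameter uncertainty distributions below are invented for illustration; they are not CoCiP's actual model or parameters.

```python
import random
import statistics

def toy_energy_forcing(humidity, params):
    """Toy stand-in for a contrail energy-forcing model (not CoCiP):
    forcing grows linearly once humidity exceeds a threshold."""
    return params["sensitivity"] * max(humidity - params["threshold"], 0.0)

def monte_carlo_forcing(humidity, n_draws=10000, seed=0):
    """Propagate model-parameter uncertainty by drawing parameters
    from assumed distributions and collecting the output spread."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n_draws):
        params = {
            "sensitivity": rng.gauss(1.0, 0.2),  # assumed uncertainty
            "threshold": rng.gauss(0.6, 0.05),   # assumed uncertainty
        }
        draws.append(toy_energy_forcing(humidity, params))
    return statistics.mean(draws), statistics.stdev(draws)
```

The same pattern applies to humidity uncertainty: replace the parameter draws with ensemble reanalysis members as inputs.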
Suppressing quantum errors by scaling a surface code logical qubit
Anthony Megrant
Cody Jones
Jeremy Hilton
Jimmy Chen
Juan Atalaya
Kenny Lee
Michael Newman
Vadim Smelyanskiy
Yu Chen
Nature (2023)
Abstract
Practical quantum computing will require error rates that are well below what is achievable with physical qubits. Quantum error correction [1, 2] offers a path to algorithmically relevant error rates by encoding logical qubits within many physical qubits, where increasing the number of physical qubits enhances protection against physical errors. However, introducing more qubits also increases the number of error sources, so the density of errors must be sufficiently low for logical performance to improve with increasing code size. Here, we report the measurement of logical qubit performance scaling across multiple code sizes, and demonstrate that our system of superconducting qubits has sufficient performance to overcome the additional errors from increasing qubit number. We find our distance-5 surface code logical qubit modestly outperforms an ensemble of distance-3 logical qubits on average, both in terms of logical error probability over 25 cycles and logical error per cycle (2.914% ± 0.016% compared to 3.028% ± 0.023%). To investigate damaging, low-probability error sources, we run a distance-25 repetition code and observe a 1.7 × 10⁻⁶ logical error per round floor set by a single high-energy event (1.6 × 10⁻⁷ when excluding this event). We are able to accurately model our experiment, and from this model we can extract error budgets that highlight the biggest challenges for future systems. These results mark the first experimental demonstration where quantum error correction begins to improve performance with increasing qubit number, and illuminate the path to reaching the logical error rates required for computation.
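The requirement that the error density be low enough for larger codes to help is often summarized in the surface code literature by a scaling model of the form ε_d ≈ C · (p/p_th)^((d+1)/2). A sketch with illustrative parameters (the constant, threshold, and error rates below are assumptions, not values from this experiment):

```python
def logical_error_per_round(p_phys, p_th, d, c=0.1):
    """Toy surface-code scaling model: the logical error rate shrinks
    with code distance d only when p_phys is below the threshold p_th."""
    return c * (p_phys / p_th) ** ((d + 1) // 2)
```

Below threshold, going from distance 3 to distance 5 suppresses the logical error; above threshold, a larger code makes things worse, which is why the error budget must be reduced before scaling pays off.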
Abstract
The majority of IPCC scenarios call for active CO2 removal (CDR) to remain below 2 °C of warming. On geological timescales, ocean uptake regulates atmospheric CO2 concentration, with two homeostats driving sequestration: dissolution of deep-ocean calcite deposits and terrestrial weathering of silicate rocks, acting on 1 ka to 100 ka timescales. Many current ocean-based CDR proposals effectively act to accelerate the latter. Here we present a method which relies purely on the redistribution and dilution of acidity from a thin layer of the surface ocean to a thicker layer of the deep ocean, with the aim of accelerating the former, carbonate, homeostasis. This downward transport can be seen as analogous to the action of the natural biological carbon pump. The method offers advantages over other ocean CDR methods and direct air capture (DAC) approaches: the conveyance of mass is minimized (acidity is pumped in situ to depth), and expensive mining, grinding, and distribution of alkaline material is eliminated. No dilute substance needs to be concentrated, avoiding the Sherwood's Rule costs typically encountered in DAC. Finally, no terrestrial material is added to the ocean, avoiding significant alteration of seawater ion concentrations and the issues with heavy-metal toxicity encountered in mineral-based alkalinity schemes.
The artificial transport of acidity accelerates the natural deep ocean invasion and subsequent compensation by calcium carbonate. It is estimated that the total compensation capacity of the ocean is on the order of 1500GtC. We show through simulation that pumping of ocean acidity could remove up to 150GtC from the atmosphere by 2100 without excessive increase of local ocean pH. For an acidity release below 2000m, the relaxation half time of CO2 return to the atmosphere was found to be ~2500 years (~1000yr without accounting for carbonate dissolution), with ~85% retained for at least 300 years. The uptake efficiency and residence time were found to vary with the location of acidity pumping, and optimal areas were calculated.
Requiring only local resources (ocean water and energy), this method could be uniquely suited to utilizing otherwise-stranded open-ocean energy sources at scale. We examine technological pathways that could be used to implement it and present a brief techno-economic estimate of $130–250/tCO2 at current prices, and as low as $86/tCO2 under modest learning-curve assumptions.
A human-labeled Landsat contrails dataset
Vincent Rudolf Meijer
Erica Wickstrom Brand
Carl Elkin
ICML workshop on Climate Change 2021 (2021)
Abstract
Contrails (condensation trails) are the ice clouds that trail behind aircraft as they fly through cold and moist regions of the atmosphere. Avoiding these regions could potentially be an inexpensive way to reduce aviation's impact on global warming by more than half. Development and evaluation of these avoidance strategies greatly benefits from the ability to detect contrails in satellite imagery. Since little to no public data is available to develop such contrail detectors, we construct and release a dataset of several thousand Landsat-8 scenes with pixel-level annotations of contrails. The dataset will continue to grow, but currently contains 3431 scenes (of which 47% have at least one contrail) representing 800+ person-hours of labeling time.
Multi-instrument Bayesian reconstruction of plasma shape evolution in C-2W experiment
Erik Trask
Hiroshi Gota
Jesus Romero
Rob von Behren
Tom Madams
Physics of Plasmas (2021)
Abstract
We determined the time-dependent geometry, including high-frequency oscillations, of the plasma density in TAE's C-2W experiment. This was done as a joint Bayesian reconstruction from a 14-chord FIR interferometer in the midplane, 32 Mirnov probes at the periphery, and 8 shine-through detectors at the targets of the neutral beams. For each point in time we recovered, with credibility intervals: the radial density profile of the plasma; the bulk plasma displacement; and the amplitudes, frequencies, and phases of the azimuthal modes n = 1 to n = 4. Also reconstructed were the radial profiles of the deformations associated with each of the azimuthal modes. Bayesian posterior sampling was done via Hamiltonian Monte Carlo with custom preconditioning. This gave us a comprehensive uncertainty quantification of the reconstructed values, including correlations and some understanding of multimodal posteriors. The method was applied to thousands of experimental shots on C-2W, producing a rich data set for analysis of plasma performance.
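The sampling machinery can be illustrated with a minimal Hamiltonian Monte Carlo sampler for a toy one-dimensional Gaussian target. This is only a sketch of the method: the paper's reconstruction runs over a high-dimensional plasma model with custom preconditioning, not a toy target.

```python
import math
import random

def hmc_sample(log_prob, grad_log_prob, x0,
               n_samples=2000, step=0.1, n_leapfrog=20, seed=0):
    """Minimal Hamiltonian Monte Carlo sampler for a 1-D target density."""
    rng = random.Random(seed)
    x = x0
    samples = []
    for _ in range(n_samples):
        p = rng.gauss(0.0, 1.0)  # resample auxiliary momentum
        x_new, p_new = x, p
        # leapfrog integration of the Hamiltonian dynamics
        p_new += 0.5 * step * grad_log_prob(x_new)
        for i in range(n_leapfrog):
            x_new += step * p_new
            if i != n_leapfrog - 1:
                p_new += step * grad_log_prob(x_new)
        p_new += 0.5 * step * grad_log_prob(x_new)
        # Metropolis accept/reject corrects integration error
        h_old = -log_prob(x) + 0.5 * p * p
        h_new = -log_prob(x_new) + 0.5 * p_new * p_new
        if math.log(rng.random()) < h_old - h_new:
            x = x_new
        samples.append(x)
    return samples

# standard normal target: log p(x) = -x^2/2 up to a constant
samples = hmc_sample(lambda x: -0.5 * x * x, lambda x: -x, x0=0.0)
```

The sample mean and variance should approach 0 and 1 for this target; preconditioning (as used in the paper) amounts to rescaling the momentum distribution to match the posterior geometry.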
Overview of C-2W: high-temperature, steady-state beam-driven field-reversed configuration plasmas
Rob von Behren
TAE
Tom Madams
William D Heavlin
Nuclear Fusion (2021)
Abstract
TAE Technologies, Inc. (TAE) is pursuing an alternative approach to magnetically confined fusion, which relies on field-reversed configuration (FRC) plasmas composed of mostly energetic and well-confined particles by means of a state-of-the-art tunable-energy neutral-beam (NB) injector system. TAE's current experimental device, C-2W (also called "Norman"), is the world's largest compact-toroid device and has made significant progress in FRC performance, producing record-breaking, high-temperature (electron temperature Te > 500 eV; total electron and ion temperature Ttot > 3 keV) advanced beam-driven FRC plasmas, dominated by injected fast particles and sustained in steady state for up to 30 ms, limited by the NB pulse duration. C-2W produces significantly better FRC performance than the preceding C-2U experiment, in part due to Google's machine-learning framework for experimental optimization, which has contributed to the discovery of a new operational regime where novel settings for the formation sections yield consistently reproducible, hot, and stable plasmas. An active plasma control system has been developed and utilized in C-2W to produce consistent FRC performance as well as reliable machine operations, using magnets, electrodes, gas injection, and tunable NBs. The active control system has demonstrated stabilization of the FRC axial instability. Overall FRC performance is well correlated with the NBs and the edge-biasing system: higher total plasma energy is obtained by increasing both NB injection power and the applied voltage on the biasing electrodes. C-2W divertors have demonstrated good electron heat confinement on open field lines using strong magnetic mirror fields as well as expansion of the magnetic field in the divertors (expansion ratio > 30); the electron energy lost per ion is ~6–8, which is close to the ideal theoretical minimum.
Fusion Plasma Reconstruction
Nathan Neibauer
Rob von Behren
(2019)
Abstract
Fusion Plasma Reconstruction work done at Google in partnership with TAE is presented.
Abstract
TAE Technologies' research is devoted to producing high-temperature, stable, long-lived field-reversed configuration (FRC) plasmas by neutral-beam injection (NBI) and edge biasing/control. The newly constructed C-2W experimental device (also called "Norman") is the world's largest compact-toroid (CT) device, with several key upgrades from the preceding C-2U device, such as higher input power and longer pulse duration of the NBI system, as well as the installation of inner divertors with upgraded electrode-biasing systems. Initial C-2W experiments have successfully demonstrated robust FRC formation and translation into the confinement vessel through the newly installed inner divertor with an adequate guide magnetic field. They have also produced dramatically improved initial FRC states with higher plasma temperatures (Te ~250+ eV; total electron and ion temperature >1.5 keV, based on pressure balance) and more trapped flux (up to ~15 mWb, based on the rigid-rotor model) inside the FRC immediately after the merging of two collided CTs in the confinement section. As for effective edge control of FRC stabilization, a number of edge-biasing schemes have been tried via open field lines, in which concentric electrodes located in both inner and outer divertors, as well as end-on plasma guns, are electrically biased independently. As a result of effective outer-divertor electrode biasing alone, the FRC plasma is well stabilized and the diamagnetism duration has reached up to ~9 ms, which is equivalent to the C-2U plasma duration. Magnetic-field flaring/expansion in both inner and outer divertors plays an important role in creating thermal insulation on open field lines to reduce the loss rate of electrons, which improves the edge and core FRC confinement properties. An experimental campaign with inner-divertor magnetic-field flaring has just commenced, and early results indicate that the electron temperature of the merged FRC stays relatively high and increases for a short period of time, presumably due to NBI and E×B heating.
Quantum Supremacy using a Programmable Superconducting Processor
Frank Arute
Kunal Arya
Rami Barends
Rupak Biswas
Fernando Brandao
David Buell
Yu Chen
Jimmy Chen
Ben Chiaro
Roberto Collins
William Courtney
Andrew Dunsworth
Edward Farhi
Brooks Foxen
Austin Fowler
Rob Graff
Keith Guerin
Steve Habegger
Michael Hartmann
Alan Ho
Markus Rudolf Hoffmann
Trent Huang
Travis Humble
Sergei Isakov
Kostyantyn Kechedzhi
Sergey Knysh
Alexander Korotkov
Fedor Kostritsa
Dave Landhuis
Mike Lindmark
Dmitry Lyakh
Salvatore Mandrà
Anthony Megrant
Xiao Mi
Kristel Michielsen
Masoud Mohseni
Josh Mutus
Charles Neill
Eric Ostby
Andre Petukhov
Eleanor G. Rieffel
Vadim Smelyanskiy
Kevin Jeffery Sung
Matt Trevithick
Amit Vainsencher
Benjamin Villalonga
Z. Jamie Yao
Ping Yeh
John Martinis
Nature, 574 (2019), 505–510
Abstract
The promise of quantum computers is that certain computational tasks might be executed exponentially faster on a quantum processor than on a classical processor. A fundamental challenge is to build a high-fidelity processor capable of running quantum algorithms in an exponentially large computational space. Here we report the use of a processor with programmable superconducting qubits to create quantum states on 53 qubits, corresponding to a computational state space of dimension 2^53 (about 10^16). Measurements from repeated experiments sample the resulting probability distribution, which we verify using classical simulations. Our Sycamore processor takes about 200 seconds to sample one instance of a quantum circuit a million times; our benchmarks currently indicate that the equivalent task for a state-of-the-art classical supercomputer would take approximately 10,000 years. This dramatic increase in speed compared to all known classical algorithms is an experimental realization of quantum supremacy for this specific computational task, heralding a much-anticipated computing paradigm.
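The classical verification in this work uses cross-entropy benchmarking. The linear cross-entropy fidelity, F_XEB = 2^n · ⟨P(x_i)⟩ − 1, where ⟨P(x_i)⟩ is the mean ideal probability of the sampled bitstrings, can be computed as below (the probability values in the usage note are toy inputs, not real circuit data):

```python
def linear_xeb_fidelity(sampled_probs, n_qubits):
    """Linear cross-entropy benchmarking fidelity:
    2^n times the mean ideal probability of the observed bitstrings, minus 1.
    sampled_probs: ideal simulated probabilities P(x_i) of each observed sample x_i."""
    mean_p = sum(sampled_probs) / len(sampled_probs)
    return (2 ** n_qubits) * mean_p - 1.0
```

A uniform (fully depolarized) sampler gives F_XEB = 0, since every bitstring has ideal probability 1/2^n; a perfect sampler of a chaotic (Porter-Thomas-distributed) circuit concentrates on high-probability bitstrings and gives F_XEB near 1.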