Publications

Our teams aspire to make discoveries that impact everyone, and core to our approach is sharing our research and tools to fuel progress in the field.

    Floods are one of the most common natural disasters, with a disproportionate impact in developing countries that often lack dense streamflow gauge networks. Accurate and timely warnings are critical for mitigating flood risks, but hydrological simulation models typically must be calibrated to long data records in each watershed. Here we show that AI-based forecasting achieves reliability in predicting extreme riverine events in ungauged watersheds at up to a 5-day lead time that is similar to or better than the reliability of nowcasts (0-day lead time) from a current state-of-the-art global modeling system (the Copernicus Emergency Management Service Global Flood Awareness System). Additionally, we achieve accuracies over 5-year return period events that are similar to or better than current accuracies over 1-year return period events. This means that AI can provide flood warnings earlier and over larger and more impactful events in ungauged basins. The model developed in this paper was incorporated into an operational early warning system that produces publicly available (free and open) forecasts in real time in over 80 countries. This work highlights a need for increasing the availability of hydrological data to continue to improve global access to reliable flood warnings.
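    As a rough illustration of the return-period thresholds this abstract compares against, the sketch below estimates the streamflow level of an N-year event from a gauge's annual maxima with a simple empirical quantile. The function name and toy data are ours, not from the paper, and operational systems typically fit an extreme-value distribution instead.

```python
import numpy as np

def return_period_threshold(annual_max_flow, years=5):
    """Empirical streamflow threshold for an N-year return period event.

    A T-year event is exceeded with probability 1/T in any given year, so the
    threshold is the (1 - 1/T) quantile of the annual maximum series. This is
    a simplified empirical estimate, not the paper's methodology.
    """
    return float(np.quantile(np.asarray(annual_max_flow), 1.0 - 1.0 / years))

# Toy annual maxima (m^3/s) for one gauge: the 5-year threshold is the 80th percentile.
annual_max = [120, 95, 180, 150, 210, 130, 160, 140, 175, 200]
print(return_period_threshold(annual_max, years=5))
```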
    A scalable system to measure contrail formation on a per-flight basis
    Erica Brand
    Sebastian Eastham
    Carl Elkin
    Thomas Dean
    Zebediah Engberg
    Ulrike Hager
    Joe Ng
    Dinesh Sanekommu
    Tharun Sankar
    Marc Shapiro
    Environmental Research Communications (2024)
    In this work we describe a scalable, automated system to determine from satellite data whether a given flight has made a persistent contrail. The system works by comparing flight segments to contrails detected by a computer vision algorithm running on images from the GOES-16 Advanced Baseline Imager. We develop a "flight matching" algorithm and use it to label each flight segment as a "match" or "non-match". We perform this analysis on 1.6 million flight segments and compare these labels to existing contrail prediction methods based on weather forecast data. The result is an analysis of which flights make persistent contrails that is several orders of magnitude larger than any previous such study. We find that, in many cases, current contrail prediction models fail to correctly predict whether a flight will be matched to a contrail.
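    A minimal sketch of the flight-matching idea: label a flight segment a "match" if a detected contrail lies within a space-time window of it. The helper names and the 20 km / 1 h thresholds are illustrative assumptions, not the system's published parameters.

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in kilometres."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

def label_flight_segment(segment_midpoint, segment_time_h,
                         contrail_detections,
                         max_dist_km=20.0, max_dt_h=1.0):
    """Label a flight segment 'match' if any detected contrail falls within
    an (assumed) space-time window of the segment midpoint.

    contrail_detections: iterable of (lat, lon, time_h) from the ABI detector.
    The distance and time cutoffs are placeholders, not the published values.
    """
    lat0, lon0 = segment_midpoint
    for lat, lon, t in contrail_detections:
        if (abs(t - segment_time_h) <= max_dt_h
                and haversine_km(lat0, lon0, lat, lon) <= max_dist_km):
            return "match"
    return "non-match"
```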
    This is an invited OFC 2024 conference workshop talk on a new, lower-power datacenter optics design choice: linear pluggable optics. In this talk I discuss the fundamental performance constraints facing linear pluggable optics and their implications for DCN and ML use cases.
    Generative AI models, including large language models and multimodal models that include text and other media, are on the cusp of transforming many aspects of modern life, including entertainment, education, civic life, the arts, and a range of professions. There is potential for Generative AI to have a substantive impact on the methods and pace of discovery for a range of scientific disciplines. We interviewed twenty scientists from a range of fields (including the physical, life, and social sciences) to gain insight into whether or how Generative AI technologies might add value to the practice of their respective disciplines, including not only ways in which AI might accelerate scientific discovery (i.e., research), but also other aspects of their profession, including the education of future scholars and the communication of scientific findings. In addition to identifying opportunities for Generative AI to augment scientists' current practices, we also asked participants to reflect on concerns about AI. These findings can help guide the responsible development of models and interfaces for scientific education, inquiry, and communication.
    With modern fluorescent probes and light microscopes, the possibilities for monitoring the dynamics of cells, organelles, molecular complexes and even single molecules with high spatiotemporal resolution are greater than ever [1]. Characterizing the motion of cellular and molecular entities can reveal a great deal of information about their functions, interactions, and overall activity landscape [2]. Motion characterization is generally the end product of an image analysis pipeline that starts with identifying and localizing the objects in each image (segmentation/detection), tracking them over time, and then analyzing the resulting trajectories to characterize the objects' motion. Whether objects are large (e.g. cells or organelles) or small (e.g. molecules or molecular complexes), as long as they can be represented by coordinates (e.g. cell centroid position or molecular position), following them over time in a series of images is effectively a (multiple) particle tracking problem. In their recent publication in Nature Machine Intelligence, Pineda et al. [3] describe a powerful deep learning approach based on graph neural networks (GNNs) for particle tracking or, if desired, for the characterization of object motion without explicit tracking. Particle tracking is often the most challenging step in the analysis pipeline from images to motion characterization. Whenever the analysis involves a relatively high density of heterogeneously moving objects, there is ambiguity in determining which object has gone where throughout the image series [4]. In addition, objects may merge with each other – due to crossing paths or interactions – and may undergo splitting, such as during cell division. Despite these challenges, a high density of tracked objects is often desired, because of the rich information that it yields about the system studied [5]. The novel GNN-based approach by Pineda et al. [3], named MAGIK, offers solutions to the tracking problem in two ways: First, MAGIK can be employed to construct the trajectories of the imaged objects from the graph of their connections in space and time. Second, MAGIK can be employed to characterize the motion of the imaged objects directly from their graph of spatiotemporal connections, without explicit tracking. Graphs are ubiquitously used in science to represent complex systems of interacting objects, from molecules to social and transportation networks [6]. GNNs provide a framework for incorporating existing information about the objects, with an inductive bias based on a larger structure relating them, to make predictions about these objects or the system as a whole. In MAGIK [3], spatiotemporal connections between imaged objects, encoded in the structure of the graph, provide this inductive bias, with the premise that objects close in space-time are likely to be the same. MAGIK utilizes this graph representation in a powerful way by employing GNNs [7] to perform various tracking and motion characterization tasks. The GNN model proposed in MAGIK considers both spatial and temporal information in a static graph. This model is enhanced by an adaptive and interpretable attention mechanism. Attention estimates the strength of association among the objects and provides insights into the dynamics of the system for the task. GNNs enable MAGIK to provide a versatile platform for performing multiple tasks, from linking coordinates into trajectories to inferring local and global dynamic properties.
MAGIK is tested for its flexibility and reliability in real and simulated scenarios corresponding to a variety of biological experiments. The results of the tests show that MAGIK is able to identify which spatiotemporal connections in a graph influence the dynamic properties of each object. They further show that MAGIK accurately constructs trajectories, obtaining outstanding results for cell tracking, including the identification of cell division events, across multiple microscopy techniques and cell types. Since in most applications the final goal of tracking is to characterize the dynamics of the system, Pineda et al. [3] have also tested MAGIK for quantifying motion parameters without explicit tracking, and they have shown that MAGIK can accurately and sensitively quantify local or global motion properties of the imaged objects. Technically, MAGIK performs these various tasks by tailoring its training to the task: tracking as a graph edge classification task, local motion characterization as a graph node regression task, and global motion characterization as a graph-level regression or classification task. As demonstrated by MAGIK, GNNs offer powerful tools for the analysis of the spatiotemporal connections between objects in biological images. New developments in the fields of graphs and GNNs will further advance this goal. One possibility is to replace the fixed, fully connected graph in MAGIK with a learnable sparse graph [8]. Another possibility is to use hypergraphs, which go beyond binary connections (a fundamental limitation of graphs); this would be a promising approach for characterizing the spatiotemporal connections of systems with complex interactions [9]. Furthermore, as the problem studied here is temporal in nature, it may benefit from temporal GNNs [10], which directly incorporate time into the GNN formulation. All in all, the powerful combination of cutting-edge microscopes, fluorescent probes and geometric deep learning analytical tools will aid the study of the organization, dynamics and interactions of diverse systems, from molecules in a cell, to cells in a tissue, and beyond.
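To make the tracking-as-edge-classification idea concrete, here is a minimal sketch of the first step only: building a spatiotemporal graph of candidate links between detections that are close in space and time. The frame-gap and distance cutoffs are illustrative assumptions, and the GNN edge classifier that MAGIK applies on top of such a graph is not shown.

```python
import numpy as np

def build_spatiotemporal_graph(detections, max_frame_gap=2, max_dist=15.0):
    """Connect detections that are close in space and time.

    detections: list of (frame_index, x, y) object localizations.
    Returns candidate edges (i, j); in a MAGIK-style pipeline a GNN would
    then classify each edge as a true link (same object) or not.
    The gap and distance cutoffs here are illustrative, not the published ones.
    """
    edges = []
    for i, (fi, xi, yi) in enumerate(detections):
        for j, (fj, xj, yj) in enumerate(detections):
            dt = fj - fi
            if 0 < dt <= max_frame_gap and np.hypot(xj - xi, yj - yi) <= max_dist:
                edges.append((i, j))
    return edges

# Toy usage: two objects drifting rightward over three frames.
dets = [(0, 0.0, 0.0), (0, 50.0, 0.0),
        (1, 1.0, 0.2), (1, 51.0, 0.1),
        (2, 2.1, 0.3), (2, 52.2, 0.2)]
print(build_spatiotemporal_graph(dets))
```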
    Estimates of broadband upwelling irradiance from GOES-16 ABI
    Sixing Chen
    Vincent Rudolf Meijer
    Joe Ng
    Geoff Davis
    Carl Elkin
    Remote Sensing of Environment, vol. 285 (2023)
    Satellite-derived estimates of the Earth’s radiation budget are crucial for understanding and predicting the weather and climate. However, existing satellite products measuring broadband outgoing longwave radiation (OLR) and reflected shortwave radiation (RSR) have spatio-temporal resolutions that are too coarse to evaluate important radiative forcers like aircraft condensation trails. We present a neural network which estimates OLR and RSR based on narrowband radiances, using collocated Clouds and the Earth’s Radiant Energy System (CERES) and GOES-16 Advanced Baseline Imager (ABI) data. The resulting estimates feature strong agreement with the CERES data products (R^2 = 0.977 for OLR and 0.974 for RSR on CERES Level 2 footprints), and we provide open access to the collocated satellite data and model outputs for all available GOES-16 ABI data over the four years 2018–2021.
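    The regression setup can be sketched with synthetic stand-in data: narrowband radiances per footprint as features and broadband OLR/RSR as targets. The paper trains a neural network on collocated CERES/ABI footprints; the linear least-squares fit and the random data below are only an assumption-laden illustration of the setup.

```python
import numpy as np

# Toy stand-in for collocated data: 16 ABI narrowband radiances per footprint
# (features) and broadband [OLR, RSR] targets. Real training uses a neural
# network on CERES footprints; this linear fit only sketches the regression.
rng = np.random.default_rng(0)
n_footprints, n_bands = 1000, 16
X = rng.normal(size=(n_footprints, n_bands))                 # narrowband radiances
true_W = rng.normal(size=(n_bands, 2))
Y = X @ true_W + 0.1 * rng.normal(size=(n_footprints, 2))    # [OLR, RSR]

# Least-squares fit with an intercept column.
Xb = np.hstack([X, np.ones((n_footprints, 1))])
W, *_ = np.linalg.lstsq(Xb, Y, rcond=None)

pred = Xb @ W
ss_res = ((Y - pred) ** 2).sum(axis=0)
ss_tot = ((Y - Y.mean(axis=0)) ** 2).sum(axis=0)
print("R^2 per target (OLR, RSR):", 1 - ss_res / ss_tot)
```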
    Chimane-Mosetén
    Jeanette Sakel
    Amazonian Languages: An International Handbook, De Gruyter Mouton (2023)
    Chimane-Mosetén (also known as Mosetenan; ISO 639–3: cas; Glottocode: mose1249) is a dialect continuum spoken by 13,500–16,000 people in the Amazonian region of northern Bolivia. It has not been convincingly shown to be related to any other language. Its status as an isolate makes it unique in many respects, not least in its combination of features typical of both Amazonian and Andean languages. Like its closer geographical neighbors in Amazonian Bolivia, including Movima, Tacana, Reyesano, and Cavineña, it exhibits contrastive nasality in the vowel system and is head marking and predominantly agglutinative. Bound pronominal forms marking arguments in the clause have the same form as bound pronominals marking possessors. Subordinate clauses typically involve nominalized verbs. Unlike most of its Amazonian neighbors, on the other hand, it does not have a semantically based classifier or gender system but instead features arbitrarily assigned masculine or feminine gender. It also does not feature any incorporation of nouns, adverbs, or adpositions. It has an extensive oblique case-marking system, though core case-marking does not occur. More similar to Quechua and other Andean languages, it features a complex predicate-argument agreement system in which one or more agreement suffixes cross-reference the subject and object arguments of a transitive verb. It also has a large class of lexical numerals following a decimal numeral system.
    Accelerating Molecular Graph Neural Networks via Knowledge Distillation
    Filip Ekström Kelvinius
    Dimitar Georgiev
    Artur Petrov Toshev
    Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS) (2023)
    Recent advances in graph neural networks (GNNs) have allowed molecular simulations with accuracy on par with conventional gold-standard methods at a fraction of the computational cost. Nonetheless, as the field has been progressing to bigger and more complex architectures, state-of-the-art GNNs have become largely prohibitive for many large-scale applications. In this paper, we explore, for the first time, the utility of knowledge distillation (KD) for accelerating molecular GNNs. To this end, we devise KD strategies that facilitate the distillation of hidden representations in directional and equivariant GNNs and evaluate their performance on the regression task of energy and force prediction. We validate our protocols across different teacher-student configurations and demonstrate that they can boost the predictive accuracy of student models without altering their architecture. We also conduct comprehensive optimization of various components of our framework, and investigate the potential of data augmentation to further enhance performance. All in all, we manage to close as much as 59% of the gap in predictive accuracy between models like GemNet-OC and PaiNN with zero additional cost at inference.
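    A minimal sketch of one ingredient of hidden-representation distillation: a student's node features are projected into the teacher's feature space and penalized for deviating from the teacher's features, on top of the usual supervised energy/force loss. The projection matrix, layer choice, and weighting below are assumptions for illustration, not the paper's specific KD strategies.

```python
import numpy as np

def feature_distillation_loss(student_h, teacher_h, projection):
    """Mean-squared error between projected student node features and teacher
    node features, as one ingredient of a knowledge-distillation objective.

    student_h: (n_nodes, d_s) hidden representations from the student GNN.
    teacher_h: (n_nodes, d_t) hidden representations from the teacher GNN.
    projection: (d_s, d_t) matrix mapping student features into teacher space.
    The projection and the choice of layer to distill are assumptions here.
    """
    diff = student_h @ projection - teacher_h
    return float(np.mean(diff ** 2))

def total_loss(energy_force_loss, kd_loss, kd_weight=0.1):
    """Supervised energy/force loss plus a weighted distillation term
    (the weighting scheme is an assumed placeholder)."""
    return energy_force_loss + kd_weight * kd_loss
```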
    Evolve Smoothly, Fit Consistently: Learning Smooth Latent Dynamics For Advection-Dominated Systems
    Leonardo Zepeda-Núñez
    Anudhyan Boral
    International Conference on Learning Representations (2023) (to appear)
    We present a data-driven, space-time continuous framework to learn surrogate models for complex physical systems described by advection-dominated partial differential equations. Such systems have a slowly decaying Kolmogorov n-width, which hinders standard methods, including reduced-order modeling, from producing high-fidelity simulations at low cost. In this work, we construct hypernetwork-based latent dynamical models directly on the parameter space of a compact representation network. We leverage the expressive power of the network and a specially designed consistency-inducing regularization to obtain latent trajectories that are both low-dimensional and smooth. These properties render our surrogate models highly efficient at inference time. We show the efficacy of our framework by learning models that generate accurate multi-step rollout predictions at much faster inference speed than competing methods, for several challenging examples.
    How Climate Change was Won
    Communications of the ACM, vol. 66, no. 11 (2023), pp. 100 ff.
    A short fictional piece telling the story of how climate change was won, from the perspective of a newscaster on Mars.
    Ewald-Based Long-Range Message Passing for Molecular Graphs
    Arthur Kosmala
    Nicholas Gao
    Stephan Günnemann
    International Conference on Machine Learning (ICML) (2023)
    Neural architectures that learn potential energy surfaces from molecular data have undergone fast improvement in recent years. A key driver of this success is the Message Passing Neural Network (MPNN) paradigm. Its favorable scaling with system size partly relies upon a spatial distance limit on messages. While this focus on locality is a useful inductive bias, it also impedes the learning of long-range interactions such as electrostatics and van der Waals forces. To address this drawback, we propose Ewald message passing: a nonlocal Fourier space scheme which limits interactions via a cutoff on frequency instead of distance, and is theoretically well-founded in the Ewald summation method. It can serve as an augmentation on top of existing MPNN architectures as it is computationally inexpensive and agnostic to architectural details. We test the approach with four baseline models and two datasets containing diverse periodic (OC20) and aperiodic structures (OE62). We observe robust improvements in energy mean absolute errors across all models and datasets, averaging 10% on OC20 and 16% on OE62. Our analysis shows an outsize impact of these improvements on structures with high long-range contributions to the ground truth energy.
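    A highly simplified sketch of the frequency-space idea: each atom exchanges information with all atoms through structure-factor-like sums over a band-limited set of wave vectors, so the interaction is nonlocal but cut off in frequency rather than distance. The function, shapes, and per-frequency filters below are assumptions; the actual Ewald message passing scheme and its learned filters are defined in the paper.

```python
import numpy as np

def ewald_style_messages(positions, node_feats, k_vectors, k_filters):
    """Simplified Fourier-space message pass with a frequency cutoff.

    positions: (n_atoms, 3); node_feats: (n_atoms, d);
    k_vectors: (n_k, 3) wave vectors below a chosen frequency cutoff;
    k_filters: (n_k,) weights per frequency (assumed, normally learned).
    Each atom aggregates information from all atoms via structure-factor-like
    sums, so messages are nonlocal but band-limited in frequency.
    """
    phases = positions @ k_vectors.T                  # (n_atoms, n_k)
    cos_p, sin_p = np.cos(phases), np.sin(phases)
    # Structure-factor-like sums over atoms, carrying node features.
    re = cos_p.T @ node_feats                         # (n_k, d)
    im = sin_p.T @ node_feats                         # (n_k, d)
    # Back-project to atoms, weighting each frequency by its filter.
    messages = (cos_p * k_filters) @ re + (sin_p * k_filters) @ im
    return messages                                   # (n_atoms, d)
```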
    EEAGER: A neural network model for finding beaver complexes in satellite and aerial imagery
    Emily Fairfax
    Steffi Maiman
    Aman Shaikh
    William W. Macfarlane
    Joseph M. Wheaton
    Dan Ackerstein
    Eddie Corwin
    JGR Biogeosciences, vol. 128 (2023)
    Beavers are ecosystem engineers that create and maintain riparian wetland ecosystems in a variety of ecologic, climatic, and physical settings. Despite the large-scale implications of ongoing beaver conservation and range expansion, relatively few landscape-scale studies have been conducted, due in part to the significant time required to manually locate beaver dams at scale. To address this need, we developed EEAGER—an image recognition machine learning model that detects beaver complexes in aerial and satellite imagery. We developed the model in the western United States using 13,344 known beaver dam locations and 56,728 nearby locations without beaver dams. Performance was assessed on twelve held-out evaluation polygons with known beaver occupancy but previously unmapped dam locations. These polygons represented regions similar to the training data as well as more novel landscape settings. Our model performed well overall (accuracy = 98.5%, recall = 63.03%, precision = 25.83%) in these areas, with stronger performance in regions similar to those where the model had been trained. We favored recall over precision, which results in a more complete catalog of beaver dams found but also a higher incidence of false positives to be manually removed during quality control. These results have far-reaching implications for monitoring of beaver-based river restoration, as well as potential applications in detecting other complex landforms.
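    A short worked sketch of the recall-versus-precision tradeoff the abstract describes: with confusion counts in hand, favoring recall means accepting more false positives. The counts below are illustrative only, chosen to roughly reproduce the reported rates, not taken from the paper.

```python
def precision_recall(tp, fp, fn):
    """Precision and recall from confusion counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Illustrative counts only: favoring recall yields a fuller dam catalog but
# more false positives to remove during quality control.
print(precision_recall(tp=630, fp=1810, fn=370))   # ~0.26 precision, ~0.63 recall
print(precision_recall(tp=400, fp=100, fn=600))    # higher precision, lower recall
```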
    Global extreme heat forecasting using neural weather models
    Amy McGovern
    Jason Hickey
    Artificial Intelligence for the Earth Systems, vol. 2 (2023), e220035
    Heatwaves are projected to increase in frequency and severity with global warming. Improved warning systems would help reduce the associated loss of lives, wildfires, power disruptions, and reduction in crop yields. In this work, we explore the potential for deep learning systems trained on historical data to forecast extreme heat on short, medium and subseasonal time scales. To this end, we train a set of neural weather models (NWMs) with convolutional architectures to forecast surface temperature anomalies globally, 1 to 28 days ahead, at ~200-km resolution and on the cubed sphere. The NWMs are trained using the ERA5 reanalysis product and a set of candidate loss functions, including the mean-square error and exponential losses targeting extremes. We find that training models to minimize custom losses tailored to emphasize extremes leads to significant skill improvements in the heatwave prediction task, relative to NWMs trained on the mean-square-error loss. This improvement is accomplished with almost no skill reduction in the general temperature prediction task, and it can be efficiently realized through transfer learning, by retraining NWMs with the custom losses for a few epochs. In addition, we find that the use of a symmetric exponential loss reduces the smoothing of NWM forecasts with lead time. Our best NWM is able to outperform persistence in a regressive sense for all lead times and temperature anomaly thresholds considered, and shows positive regressive skill relative to the ECMWF subseasonal-to-seasonal control forecast after 2 weeks.
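    To illustrate how an exponential loss can emphasize extremes relative to the mean-square error, here is a minimal sketch in which the squared error is weighted by an exponential of the target anomaly, so large warm anomalies dominate the objective. The exact loss forms used in the paper may differ; alpha and the toy anomalies are assumptions.

```python
import numpy as np

def mse_loss(pred, target):
    """Plain mean-square error baseline."""
    return float(np.mean((pred - target) ** 2))

def exp_weighted_loss(pred, target, alpha=0.5):
    """Squared error weighted by exp(alpha * target), so warm extremes
    dominate; a symmetric variant would weight by exp(alpha * |target|).
    This is an illustrative form, not the paper's exact exponential loss.
    """
    weights = np.exp(alpha * target)
    return float(np.mean(weights * (pred - target) ** 2))

# Toy anomalies (in standard deviations): an extreme event at +3 is
# under-forecast, which the exponential weighting penalizes much more.
target = np.array([-1.0, 0.0, 1.0, 3.0])
pred = np.array([-0.8, 0.1, 0.7, 1.5])
print(mse_loss(pred, target), exp_weighted_loss(pred, target))
```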
    Towards Large-Scale Simulations of Open-Ended Evolution in Continuous Cellular Automata
    GECCO '23: Proceedings of the Genetic and Evolutionary Computation Conference, ACM (2023)
    Inspired by biological and cultural evolution, there have been many attempts to explore and elucidate the necessary conditions for open-endedness in artificial intelligence and artificial life. Using a continuous cellular automaton called Lenia as the base system, we built large-scale evolutionary simulations using the parallel computing framework JAX, in order to achieve the goal of never-ending evolution of self-organizing patterns. We report a number of system design choices, including (1) implicit implementation of genetic operators, such as reproduction by pattern self-replication and selection by differential existential success; (2) localization of genetic information; and (3) algorithms for dynamic maintenance of the localized genotypes and their translation to phenotypes. Simulations tend to go through a phase of diversity and creativity, then gradually converge to domination by fast-expanding patterns, presumably an optimal solution under the current design. Based on our experimentation, we propose several factors that may further facilitate open-ended evolution, such as virtual environment design, mass conservation, and energy constraints.
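    For readers unfamiliar with Lenia, here is a minimal sketch of a single update step of a Lenia-like continuous cellular automaton (FFT convolution with a ring kernel followed by a Gaussian growth mapping). The kernel shape and parameter values are common Lenia defaults assumed for illustration, not the settings of the paper's large-scale JAX simulations.

```python
import numpy as np

def ring_kernel_fft(size, radius=13.0):
    """FFT of a normalized, smooth ring-shaped Lenia neighborhood kernel."""
    y, x = np.mgrid[:size, :size]
    r = np.hypot(x - size // 2, y - size // 2) / radius
    shell = np.exp(-((r - 0.5) ** 2) / (2 * 0.15 ** 2)) * (r < 1)
    shell /= shell.sum()
    # Center the kernel at the origin so FFT convolution is not shifted.
    return np.fft.fft2(np.fft.ifftshift(shell))

def lenia_step(grid, kernel_fft, dt=0.1, mu=0.15, sigma=0.015):
    """One update of a Lenia-like continuous cellular automaton.

    grid: 2-D array of cell states in [0, 1]. Parameters are common Lenia
    defaults, assumed here for illustration only.
    """
    # Neighborhood "potential": circular convolution computed in Fourier space.
    u = np.real(np.fft.ifft2(np.fft.fft2(grid) * kernel_fft))
    # Growth mapping: a Gaussian bump centered on mu, rescaled to [-1, 1].
    growth = 2.0 * np.exp(-((u - mu) ** 2) / (2 * sigma ** 2)) - 1.0
    return np.clip(grid + dt * growth, 0.0, 1.0)
```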
    WeatherBench 2: A benchmark for the next generation of data-driven global weather models
    Alex Merose
    Peter Battaglia
    Tyler Russell
    Alvaro Sanchez
    Vivian Yang
    Matthew Chantry
    Zied Ben Bouallegue
    Peter Dueben
    Carla Bromberg
    Jared Sisk
    Luke Barrington
    Aaron Bell
    arXiv (2023) (to appear)
    WeatherBench 2 is an update to the global, medium-range (1-14 day) weather forecasting benchmark proposed by Rasp et al. (2020), designed to accelerate progress in data-driven weather modeling. WeatherBench 2 consists of an open-source evaluation framework; publicly available training, ground-truth, and baseline data; and a continuously updated website with the latest metrics and state-of-the-art models: https://sites.research.google/weatherbench. This paper describes the design principles of the evaluation framework and presents results for current state-of-the-art physical and data-driven weather models. The metrics are based on established practices for evaluating weather forecasts at leading operational weather centers. We define a set of headline scores to provide an overview of model performance. In addition, we discuss caveats in the current evaluation setup and challenges for the future of data-driven weather forecasting.
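    As a small illustration of the kind of metric such a benchmark standardizes, the sketch below computes a latitude-weighted RMSE on a regular latitude-longitude grid, a common global-verification practice. The function is our assumption for illustration; the benchmark's exact headline-score definitions are given in the paper and on the WeatherBench 2 site.

```python
import numpy as np

def latitude_weighted_rmse(forecast, truth, latitudes_deg):
    """Latitude-weighted RMSE on a regular lat-lon grid.

    forecast, truth: arrays of shape (n_lat, n_lon); latitudes_deg: (n_lat,).
    Grid cells are weighted by cos(latitude) so polar points, which cover
    less area, count less. Illustrative of standard global verification,
    not the benchmark's exact score definitions.
    """
    w = np.cos(np.deg2rad(latitudes_deg))
    w = w / w.mean()
    sq_err = (forecast - truth) ** 2
    return float(np.sqrt(np.mean(w[:, None] * sq_err)))
```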