Shreya Agrawal

Authored Publications
    Global extreme heat forecasting using neural weather models
    Amy McGovern
    Jason Hickey
    Artificial Intelligence for the Earth Systems, 2 (2023), e220035
    Heatwaves are projected to increase in frequency and severity with global warming. Improved warning systems would help reduce the associated loss of lives, wildfires, power disruptions, and reduction in crop yields. In this work, we explore the potential for deep learning systems trained on historical data to forecast extreme heat on short, medium and subseasonal time scales. To this purpose, we train a set of neural weather models (NWMs) with convolutional architectures to forecast surface temperature anomalies globally, 1 to 28 days ahead, at ~200-km resolution and on the cubed sphere. The NWMs are trained using the ERA5 reanalysis product and a set of candidate loss functions, including the mean-square error and exponential losses targeting extremes. We find that training models to minimize custom losses tailored to emphasize extremes leads to significant skill improvements in the heatwave prediction task, relative to NWMs trained on the mean-square-error loss. This improvement is accomplished with almost no skill reduction in the general temperature prediction task, and it can be efficiently realized through transfer learning, by retraining NWMs with the custom losses for a few epochs. In addition, we find that the use of a symmetric exponential loss reduces the smoothing of NWM forecasts with lead time. Our best NWM is able to outperform persistence in a regressive sense for all lead times and temperature anomaly thresholds considered, and shows positive regressive skill relative to the ECMWF subseasonal-to-seasonal control forecast after 2 weeks.
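
    A minimal sketch of the kind of exponentially weighted loss described above, written in NumPy; the exact loss used in the paper may differ, and the weighting constant alpha here is an illustrative assumption.

    import numpy as np

    def exp_weighted_mse(pred, target, alpha=0.5):
        """Squared error weighted by exp(alpha * |target anomaly|).

        Larger temperature anomalies receive exponentially larger weight, so the
        model is penalized more heavily for missing extremes. This is only a
        sketch of a loss "targeting extremes"; alpha is a free parameter here.
        """
        weights = np.exp(alpha * np.abs(target))
        return np.mean(weights * (pred - target) ** 2)

    # Toy usage: surface temperature anomalies (in kelvin) on a small grid.
    target = np.random.randn(8, 8) * 3.0
    pred = target + np.random.randn(8, 8)
    print(exp_weighted_mse(pred, target))
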
    WeatherBench 2: A benchmark for the next generation of data-driven global weather models
    Alex Merose
    Peter Battaglia
    Tyler Russell
    Alvaro Sanchez
    Vivian Yang
    Matthew Chantry
    Zied Ben Bouallegue
    Peter Dueben
    Carla Bromberg
    Jared Sisk
    Luke Barrington
    Aaron Bell
    arXiv (2023) (to appear)
    WeatherBench 2 is an update to the global, medium-range (1-14 day) weather forecasting benchmark proposed by Rasp et al. (2020), designed with the aim to accelerate progress in data-driven weather modeling. WeatherBench 2 consists of an open-source evaluation framework, publicly available training, ground truth and baseline data as well as a continuously updated website with the latest metrics and state-of-the-art models: https://sites.research.google/weatherbench. This paper describes the design principles of the evaluation framework and presents results for current state-of-the-art physical and data-driven weather models. The metrics are based on established practices for evaluating weather forecasts at leading operational weather centers. We define a set of headline scores to provide an overview of model performance. In addition, we also discuss caveats in the current evaluation setup and challenges for the future of data-driven weather forecasting.
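
    To illustrate the evaluation style used in WeatherBench-type benchmarks, below is a small NumPy sketch of a latitude-weighted RMSE, one of the standard area-weighted metrics for gridded forecasts; the official framework evaluates full datasets and many variables, and the grid and data here are toy placeholders.

    import numpy as np

    def lat_weighted_rmse(forecast, truth, lats_deg):
        """Root-mean-square error with cosine-latitude area weights.

        forecast, truth: arrays of shape (lat, lon); lats_deg: latitudes in degrees.
        The cosine weighting accounts for grid cells shrinking toward the poles.
        """
        w = np.cos(np.deg2rad(lats_deg))
        w = w / w.mean()
        return np.sqrt(np.mean(w[:, None] * (forecast - truth) ** 2))

    # Toy usage on a coarse 32 x 64 global grid.
    lats = np.linspace(-87.1875, 87.1875, 32)
    truth = np.random.randn(32, 64)
    forecast = truth + 0.5 * np.random.randn(32, 64)
    print(lat_weighted_rmse(forecast, truth, lats))
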
    In this work, we check whether a deep learning model that does rainfall prediction using a variety of sensor readings behaves reasonably. Unlike traditional numerical weather prediction models that encode the physics of rainfall, our model relies purely on data and deep learning. Can we trust the model? Or should we take a rain check? We perform two types of analysis. First, we perform a one-at-a-time sensitivity analysis to understand properties of the input features. Holding all the other features fixed, we vary a single feature from its minimum to its maximum value and check whether the predicted rainfall obeys conventional intuition (e.g., more lightning implies more rainfall). Second, for a specific prediction at a certain location, we use an existing feature attribution technique to identify influential features (sensor readings) from this and other locations. Again, we check whether the feature importances match conventional wisdom (e.g., is ‘instant reflectivity’, a measure of current rainfall, more influential than, say, surface temperature?). We compute influence not only on the predictions of the model but also on its error; the latter is perhaps a novel contribution to the literature on feature attribution. The model we chose to analyze is not the state of the art. It is flawed in several ways, and therefore makes for an interesting analysis target. We find several interesting issues. However, we should clarify that our analysis is not an indictment of machine learning approaches; indeed, we know of better models ourselves. Rather, our goal is to demonstrate an interactive analysis technique.
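
    A small sketch of the one-at-a-time sensitivity sweep described above, assuming a generic predict_fn and a flat feature vector; the actual model consumes gridded sensor readings, so both names are hypothetical stand-ins.

    import numpy as np

    def one_at_a_time_sweep(predict_fn, baseline, feature_idx, vmin, vmax, steps=20):
        """Vary one input feature from vmin to vmax, holding the others fixed,
        and record the predicted rainfall at each step."""
        values = np.linspace(vmin, vmax, steps)
        preds = []
        for v in values:
            x = baseline.copy()
            x[feature_idx] = v
            preds.append(predict_fn(x))
        return values, np.array(preds)

    # Toy usage with a stand-in "model" in which rainfall grows with lightning count.
    predict_fn = lambda x: 0.3 * x[0] + 0.1 * x[1]
    baseline = np.array([2.0, 10.0])
    values, preds = one_at_a_time_sweep(predict_fn, baseline, feature_idx=0, vmin=0.0, vmax=10.0)
    print(np.all(np.diff(preds) >= 0))  # more lightning should imply more rainfall
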
    MetNet: A Neural Weather Model for Precipitation Forecasting
    Casper Kaae Sønderby
    Lasse Espeholt
    Avital Oliver
    Jason Hickey
    Nal Kalchbrenner
    Submission to journal (2020)
    Weather forecasting is a long-standing scientific challenge with direct social and economic impact. The task is suitable for deep neural networks due to vast amounts of continuously collected data and a rich spatial and temporal structure that presents long-range dependencies. We introduce MetNet, a neural network that forecasts precipitation up to 8 hours into the future at a high spatial resolution of 1 km and a temporal resolution of 2 minutes, with a latency on the order of seconds. MetNet takes radar and satellite data and the forecast lead time as input and produces a probabilistic precipitation map. The architecture uses axial self-attention to aggregate global context from a large input patch corresponding to a million square kilometers. We evaluate the performance of MetNet at various precipitation thresholds and find that MetNet outperforms Numerical Weather Prediction at forecasts of up to 7 to 8 hours on the scale of the continental United States.
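
    Below is a simplified sketch of axial self-attention, the mechanism MetNet uses to aggregate global context; this single-head NumPy version uses the input directly in place of learned query/key/value projections and omits the rest of the architecture, so it is only illustrative.

    import numpy as np

    def softmax(x, axis=-1):
        x = x - x.max(axis=axis, keepdims=True)
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    def axial_attention(x, axis):
        """Self-attention applied independently along one spatial axis of a
        (height, width, channels) array. Attending along rows and then columns
        gives every position a global receptive field at far lower cost than
        full 2D attention."""
        if axis == 0:                # attend along height: treat width as the batch dim
            x = np.swapaxes(x, 0, 1)
        q = k = v = x                # stand-ins for learned projections
        scores = q @ np.swapaxes(k, -1, -2) / np.sqrt(x.shape[-1])
        out = softmax(scores) @ v
        return np.swapaxes(out, 0, 1) if axis == 0 else out

    # Toy usage: a 16 x 16 patch with 8 channels, attending along rows then columns.
    x = np.random.randn(16, 16, 8)
    y = axial_attention(axial_attention(x, axis=1), axis=0)
    print(y.shape)  # (16, 16, 8)
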
    Machine Learning for Precipitation Nowcasting from Radar Images
    Carla L. Bromberg
    Cenk Gazen
    Jason J. Hickey
    John Burge
    Luke Barrington
    Machine Learning and the Physical Sciences Workshop at the 33rd Conference on Neural Information Processing Systems (NeurIPS) (2019), pp. 4
    In recent years, deep learning techniques have shown dramatic promise in many domains, including the geosciences. We continue this trend by investigating the application of deep learning techniques to the problem of precipitation nowcasting, i.e., the short-term prediction of precipitation at high spatial resolution. We treat forecasting as an image-to-image translation problem and leverage the power of the ubiquitous U-Net autoencoder to make our predictions. We find that our straightforward approach performs favorably compared with the commonly used HRRR numerical nowcast. Such numerical methods provide strong longer-term predictions (e.g., next-day predictions), but due to their computational complexity they struggle to make effective short-term predictions, an issue that deep learning techniques do not suffer from.
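
    A toy sketch of the image-to-image setup described above: a one-level U-Net-style encoder-decoder with a skip connection, mapping a radar image to a same-size precipitation map. The weights are random placeholders for trained filters, and the real network is much deeper; this only shows the structure.

    import numpy as np

    def conv3x3(x, out_ch, rng):
        """'Same' 3x3 convolution with random weights followed by ReLU."""
        h, w, in_ch = x.shape
        k = rng.standard_normal((3, 3, in_ch, out_ch)) * 0.1
        padded = np.pad(x, ((1, 1), (1, 1), (0, 0)))
        out = np.zeros((h, w, out_ch))
        for i in range(3):
            for j in range(3):
                out += padded[i:i + h, j:j + w] @ k[i, j]
        return np.maximum(out, 0.0)

    def down(x):  # 2x2 max pooling
        h, w, c = x.shape
        return x.reshape(h // 2, 2, w // 2, 2, c).max(axis=(1, 3))

    def up(x):    # nearest-neighbour 2x upsampling
        return x.repeat(2, axis=0).repeat(2, axis=1)

    def tiny_unet(radar, rng):
        """Encoder, bottleneck, and decoder with one skip connection."""
        e1 = conv3x3(radar, 8, rng)             # full-resolution encoder features
        e2 = conv3x3(down(e1), 16, rng)         # bottleneck at half resolution
        d1 = np.concatenate([up(e2), e1], -1)   # skip connection
        return conv3x3(d1, 1, rng)              # per-pixel precipitation estimate

    rng = np.random.default_rng(0)
    radar = rng.standard_normal((64, 64, 1))
    print(tiny_unet(radar, rng).shape)  # (64, 64, 1)
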