The Rain Check

Abstract

In this work, we check whether a deep learning model that predicts rainfall from a variety of sensor readings behaves reasonably. Unlike traditional numerical weather prediction models that encode the physics of rainfall, our model relies purely on data and deep learning. Can we trust the model? Or should we take a rain check? We perform two types of analysis. First, we run a one-at-a-time sensitivity analysis to understand properties of the input features. Holding all other features fixed, we vary a single feature from its minimum to its maximum value and check whether the predicted rainfall obeys conventional intuition (e.g. more lightning implies more rainfall). Second, for a specific prediction at a given location, we use an existing feature attribution technique to identify influential features (sensor readings) from this and other locations. Again, we check whether the feature importances match conventional wisdom (e.g., is ‘instant reflectivity’, a measure of current rainfall, more influential than, say, surface temperature?). We compute influence not only on the model’s predictions but also on its error; the latter is perhaps a novel contribution to the literature on feature attribution. The model we chose to analyze is not the state of the art. It is flawed in several ways, and therefore makes for an interesting analysis target. We uncover several issues. However, we should clarify that our analysis is not an indictment of machine learning approaches; indeed we know of better models ourselves. Rather, our goal is to demonstrate an interactive analysis technique.
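
To make the first analysis concrete, here is a minimal sketch of the one-at-a-time sweep in Python. It is not the paper’s code: the callable `model`, the feature matrix `X` of sensor readings, and `LIGHTNING_IDX` are all hypothetical names introduced for illustration.

```python
import numpy as np

def oat_sensitivity(model, X, feature_idx, num_steps=50):
    """One-at-a-time sensitivity sweep: vary one feature from its
    minimum to its maximum observed value while holding all other
    features fixed, and record the model's predicted rainfall.

    model       -- any callable mapping an (n, d) feature array to an
                   (n,) array of rainfall predictions (hypothetical API)
    X           -- (n, d) array of observed sensor readings
    feature_idx -- column index of the feature to sweep
    """
    lo, hi = X[:, feature_idx].min(), X[:, feature_idx].max()
    sweep = np.linspace(lo, hi, num_steps)
    responses = []
    for value in sweep:
        X_mod = X.copy()
        X_mod[:, feature_idx] = value  # all other features stay fixed
        responses.append(model(X_mod).mean())  # mean predicted rainfall
    return sweep, np.array(responses)

# Checking an intuition such as "more lightning implies more rainfall"
# then amounts to checking that the response curve never decreases:
# sweep, resp = oat_sensitivity(model, X, feature_idx=LIGHTNING_IDX)
# monotone_increasing = np.all(np.diff(resp) >= 0)
```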
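The abstract does not name the attribution technique it uses. As an illustration only, the sketch below uses gradient-times-input, one common feature attribution method, in PyTorch; the `model` and `target` names are assumptions. The same trick covers the error-attribution variant: attribute the squared error against the observed rainfall instead of the prediction itself.

```python
import torch

def attribute(model, x, target=None):
    """Gradient-times-input attribution (one common technique; the
    paper's exact method is not specified here).

    If `target` is None, attribute the model's prediction itself;
    otherwise attribute the squared error against `target`, i.e.
    influence on the model's *error* rather than on its output.

    model  -- a torch.nn.Module mapping a (d,) input of sensor readings
              (from this and other locations) to a scalar prediction
    x      -- (d,) tensor of sensor readings
    target -- optional scalar tensor of observed rainfall
    """
    x = x.clone().detach().requires_grad_(True)
    pred = model(x).squeeze()
    objective = pred if target is None else (pred - target) ** 2
    objective.backward()
    return (x.grad * x).detach()  # per-feature influence scores
```

A more faithful variant could substitute integrated gradients or another established attribution method for the gradient-times-input step without changing the surrounding logic.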