Forecasting the future of forests with AI: From counting losses to predicting risk

November 5, 2025

Drew Purves, Research Scientist, Google DeepMind, and Charlotte Stanton, Senior Program Manager, Google Research, on behalf of the ForestCast team

We introduce the first deep learning–powered benchmark for proactive deforestation risk forecasting.

Nature underpins our climate, our economies, and our very lives. And within nature, forests stand as one of the most powerful pillars — storing carbon, regulating rainfall, mitigating floods, and harboring the majority of the planet’s terrestrial biodiversity.

Yet, despite their critical importance, the world continues to lose forests at an alarming rate. Last year alone, we lost the equivalent of 18 soccer fields of tropical forest every minute, totaling 6.7 million hectares — a record high and double the amount lost the year before. Today, habitat conversion is the greatest threat to biodiversity on land.

For years, satellite data has been our essential tool for measuring this loss. More recently, in collaboration with the World Resources Institute, we helped map the underlying drivers of that loss — from agriculture and logging to mining and fire — for the years 2000–2024. These maps, at an unprecedented 1 km² resolution, provide a basis for a wide range of forest protection measures. However, those insights, critical as they are, only look backward. Now, it's time to look ahead.

We're excited to announce the release of “ForestCast: Forecasting Deforestation Risk at Scale with Deep Learning”, along with the first publicly available benchmark dataset dedicated to training deep learning models to predict deforestation risk. This shift from merely monitoring what's already gone to forecasting what's at risk changes the game. Previous approaches to risk have depended on assembling patchily available input maps, such as roads and population density, which can quickly go out of date. By contrast, we have developed an efficient approach based on pure satellite data that can be applied consistently in any region and readily updated as more data becomes available. We found that this approach could match or exceed the accuracy of previous methods. To ensure the community can reproduce and build on our work, we are releasing all of the input, training, and evaluation data as a public benchmark dataset.

Why predicting deforestation is so difficult

Deforestation is fundamentally a human process driven by a complex web of economic, political, and environmental factors. It's fueled by commodity-driven expansion for products like cattle, palm oil, and soy, but also by wildfires, logging, the expansion of settlements and infrastructure, and the extraction of hard minerals and energy. Predicting the location and timing of future loss is therefore incredibly hard.

The current state-of-the-art approach tries to solve this by assembling specialized geospatial information on as many of those factors as possible: maps of roads, economic indicators, policy enforcement data, etc. This approach has provided accurate predictions for some regions at some times. However, it is not generally scalable because those input maps are often patchy, inconsistent, and need to be assembled separately for each region. This approach is also not future-proof, because the input maps tend to quickly go out of date, and there is no guarantee when, if ever, they may be refreshed.

A scalable satellite approach

To overcome these challenges, we adopt a “pure satellite” model, where the only inputs are derived from satellites. We tested raw satellite inputs from the Landsat and Sentinel-2 satellites. We also included a satellite-derived input we refer to as “change history”, which identifies each pixel that has already been deforested and provides a year for when that deforestation occurred. We trained and evaluated the model using satellite-derived labels of deforestation.
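
To make this concrete, below is a minimal sketch, in Python with NumPy, of how the inputs for a single tile might be assembled. The tile size, band count, and the particular change-history encoding are illustrative assumptions for exposition, not the exact specification used in the paper.

```python
import numpy as np

# A minimal sketch of a "pure satellite" input stack for one tile.
# Shapes and band counts are illustrative assumptions, not the paper's spec.
TILE = 256          # tile width/height in pixels (assumed)
N_BANDS = 6         # a handful of optical bands from Landsat / Sentinel-2 (assumed)
CURRENT_YEAR = 2024

rng = np.random.default_rng(0)

# Raw satellite imagery for the tile: (height, width, bands), reflectance-like values.
imagery = rng.random((TILE, TILE, N_BANDS), dtype=np.float32)

# Change history: per-pixel year of past deforestation, 0 where still forested.
loss_year = np.zeros((TILE, TILE), dtype=np.int32)
loss_year[100:140, 100:120] = 2021   # a toy patch cleared in 2021

# Encode the change history as two simple channels:
#   1) a binary "already deforested" mask,
#   2) recency of loss, scaled so recent clearing is close to 1.
deforested = (loss_year > 0).astype(np.float32)
recency = np.where(
    loss_year > 0,
    1.0 / (1.0 + (CURRENT_YEAR - loss_year)),
    0.0,
).astype(np.float32)

# Final model input: imagery plus change-history channels, (H, W, C).
inputs = np.concatenate(
    [imagery, deforested[..., None], recency[..., None]], axis=-1
)
print(inputs.shape)  # (256, 256, 8)
```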

The pure satellite approach provides consistency, in that we can apply the exact same method anywhere on Earth, allowing for meaningful comparisons between different regions. It also makes our model future-proof: these satellite data streams will continue for years to come, so we can repeat the method to produce updated predictions of risk and examine how risk is changing through time.

To achieve accuracy and scalability, we developed a custom model based on vision transformers. The model receives a whole tile of satellite pixels as input, which is crucial to capture the spatial context of the landscape and recent deforestation (as captured in the change history). It then outputs a whole tile’s worth of predictions in one pass, which makes the model scalable to large regions.
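
As a rough illustration of this tile-in, tile-out pattern, the toy forward pass below patchifies a tile, mixes the patch tokens with a single self-attention block (so every patch can see the rest of the landscape), and decodes a full tile of per-pixel risk scores in one pass. The sizes and the single-block, single-head design are simplifying assumptions; the actual architecture is described in the paper.

```python
import numpy as np

# Toy, single-block "tile in, tile out" vision transformer forward pass.
rng = np.random.default_rng(0)
TILE, PATCH, DIM = 64, 8, 32   # illustrative sizes, not the paper's

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def patchify(tile):
    # (H, W, C) -> (n_patches, PATCH*PATCH*C)
    h, w, c = tile.shape
    g = h // PATCH
    t = tile.reshape(g, PATCH, g, PATCH, c).transpose(0, 2, 1, 3, 4)
    return t.reshape(g * g, PATCH * PATCH * c)

tile = rng.random((TILE, TILE, 8)).astype(np.float32)   # 8 input channels

W_embed = rng.normal(0, 0.02, (PATCH * PATCH * 8, DIM))
W_q, W_k, W_v = (rng.normal(0, 0.02, (DIM, DIM)) for _ in range(3))
W_out = rng.normal(0, 0.02, (DIM, PATCH * PATCH))       # per-pixel logits per patch

tokens = patchify(tile) @ W_embed                        # (n_patches, DIM)

# Single-head self-attention: every patch attends to every other patch,
# which is how spatial context (e.g., a nearby deforestation front) flows in.
q, k, v = tokens @ W_q, tokens @ W_k, tokens @ W_v
attn = softmax(q @ k.T / np.sqrt(DIM))
tokens = tokens + attn @ v                               # residual connection

# Decode every token back to a PATCH x PATCH block of scores: one pass,
# one whole tile of predictions.
logits = tokens @ W_out                                  # (n_patches, PATCH*PATCH)
g = TILE // PATCH
risk = logits.reshape(g, g, PATCH, PATCH).transpose(0, 2, 1, 3).reshape(TILE, TILE)
risk = 1.0 / (1.0 + np.exp(-risk))                       # per-pixel risk in [0, 1]
print(risk.shape)  # (64, 64)
```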

We found that our model matched or exceeded the accuracy of methods based on specialized inputs (such as roads): it accurately predicted tile-to-tile variation in the amount of deforestation and, within tiles, identified which pixels were most likely to become deforested next.
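
For concreteness, the sketch below shows one plausible way to compute those two checks: a tile-level correlation between predicted and observed loss fractions, and a within-tile average precision over the pixel ranking. These are illustrative metrics in the spirit of the evaluation, not necessarily the exact ones reported in the paper.

```python
import numpy as np

def tile_level_correlation(pred_risk, future_loss):
    """Pearson correlation between predicted and observed per-tile loss fractions.

    pred_risk:   (tiles, H, W) predicted per-pixel risk in [0, 1]
    future_loss: (tiles, H, W) binary map of pixels actually lost later
    """
    pred_frac = pred_risk.mean(axis=(1, 2))
    true_frac = future_loss.mean(axis=(1, 2))
    return np.corrcoef(pred_frac, true_frac)[0, 1]

def within_tile_average_precision(pred_risk, future_loss):
    """Average precision of the pixel ranking inside a single tile."""
    scores = pred_risk.ravel()
    labels = future_loss.ravel()
    order = np.argsort(-scores)               # highest predicted risk first
    labels = labels[order]
    cum_pos = np.cumsum(labels)
    precision = cum_pos / np.arange(1, labels.size + 1)
    return float((precision * labels).sum() / max(labels.sum(), 1))

# Toy data standing in for model outputs and future-loss labels.
rng = np.random.default_rng(0)
risk = rng.random((10, 32, 32))
loss = (rng.random((10, 32, 32)) < 0.2 * risk).astype(np.float32)
print(tile_level_correlation(risk, loss))
print(within_tile_average_precision(risk[0], loss[0]))
```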


Our deep learning vision model analyzes satellite time series and historical forest loss to forecast deforestation risk.

Surprisingly, we found that by far the most important satellite input was also the simplest: the change history. A model receiving only this input produced predictions with accuracy metrics indistinguishable from those of models using the full raw satellite data. In retrospect, this makes sense: the change history is a small but highly information-dense input, encoding tile-to-tile variation in recent deforestation rates, how those rates are trending through time, and the movement of deforestation fronts within tiles.
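
To illustrate why the change history carries so much signal, here is a deliberately simple baseline built from the change-history raster alone, scoring each still-forested pixel by the density of recent loss in its neighborhood. This heuristic is our own illustration of the idea, not the model from the paper.

```python
import numpy as np

def change_history_baseline(loss_year, current_year, window=5, radius=7):
    """Score per-pixel risk from a change-history raster alone.

    loss_year: (H, W) int array, year of deforestation, 0 = still forested.
    """
    h, w = loss_year.shape
    recent = ((loss_year > 0) & (loss_year > current_year - window)).astype(float)

    # Local density of recent loss via a box blur (crude neighborhood average).
    pad = np.pad(recent, radius)
    density = np.zeros_like(recent)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            density += pad[radius + dy : radius + dy + h,
                           radius + dx : radius + dx + w]
    density /= (2 * radius + 1) ** 2

    # Risk is the neighborhood loss density, masked to still-forested pixels.
    return np.where(loss_year == 0, density, 0.0)

loss_year = np.zeros((64, 64), dtype=np.int32)
loss_year[20:30, 20:40] = 2023            # a toy recent clearing
risk = change_history_baseline(loss_year, current_year=2024)
print(risk.max(), risk[25, 45])           # highest risk hugs the clearing's edge
```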

To promote transparency and reproducibility, we are releasing the training and evaluation data used in this work as a public benchmark. This allows the wider machine learning community to verify our results; to potentially extract a deeper understanding of why the model makes certain predictions; and, ultimately, to build and compare improved deforestation risk models.

Moreover, our benchmark and paper provide a clear template for scaling this approach globally: to model tropical deforestation across Latin America and Africa and, eventually, temperate and boreal latitudes, where forest loss is often driven by different dynamics, such as logging and fire.

Conclusion

Land-use change, especially tropical deforestation and forest conversion, is responsible for roughly 10% of global anthropogenic greenhouse-gas emissions and threatens the vast majority of the planet's terrestrial life. Forecasts of deforestation risk could be a vital tool for targeting resources where they can have the greatest impact in curbing those emissions and protecting nature.

This ability to anticipate risk allows governments, companies, and communities to act early, when there’s still time to prevent loss, rather than reacting to damage that’s already done. For example:

  • A government agency can channel support and conservation incentives to communities in emerging deforestation frontiers.
  • A company can proactively manage its supply chains to reduce and eliminate deforestation.
  • An Indigenous community can deploy scarce resources to protect the parts of its land at greatest risk.

Thus, a forecast like this isn't a prediction of an unavoidable future. Instead, it's a tool designed to change a future outcome. The goal is to share this information with those who can act, helping them channel resources to the most vulnerable areas before it's too late, and empowering them to ensure these high-risk forests remain standing. By combining open data and advanced AI, we're forging a powerful new tool to safeguard nature.

For more on geospatial AI for sustainability, check out Google Earth AI.

Acknowledgements

This research was co-developed by Google DeepMind and Google Research.

Google DeepMind: Matt Overlan, Arianna Manzini, Drew Purves, Julia Haas, Maxim Neumann, Mélanie Rey.

Google Research: Charlotte Stanton, Michelangelo Conserva.

We’d also like to thank our additional collaborators Kira Prabhu, Youngin Shin, and Kuan Lu, as well as Peter Battaglia and Kat Chou for their support.