The M4 Forecasting Competition was a huge success along many dimensions. It enabled a comparison of 60 different forecasting methods across 100,000 real-world time series. It also created an opportunity for both participants and observers to learn from each other through the open sharing of both the full set of test data and the solutions applied.
As practitioners who have worked on a wide range of forecasting problems, we were excited about the competition, its results, and the innovative ideas it brought to light. At the same time, we saw some differences between the nature of the competition and the types of problems that we work on and have seen others working on. We share our suggestions for five themes which we observe in the forecasting community and which we would like to see reflected in future competitions, to tie them more closely to today's real-world forecasting and prediction challenges. We also outline some attributes of forecasting approaches that we expect will be key success factors in prediction for those types of problems, and discuss how those resonate with the competition findings. Lastly, we compare attributes of the M4 competition data set with those of a recent Kaggle competition on web traffic time series forecasting, which was hosted during the same time period.