Solved – Evaluate forecasting ability of GARCH models with RMSE and MAE

accuracy, garch, model-evaluation, rms, volatility-forecasting

I am evaluating different forecasting models and their ability to forecast index volatility during periods of market turmoil, using two measures: Root Mean Square Error (RMSE) and Mean Absolute Error (MAE). For the previously evaluated models, ARMA(1,1) for example, I was able to obtain the residuals and calculate the RMSE quite easily in Stata.

When estimating the GARCH(1,1) model in Stata, however, I am not able to obtain the residuals correctly in the post-estimation procedure, and no option to obtain the RMSE directly is available. Perhaps I have misunderstood how one should evaluate the forecasting ability of GARCH models, since the model specifies the conditional variance, unlike ARMA, which specifies the conditional mean.

Does anyone have a suggestion on how to obtain these evaluation measures after estimating a GARCH model, preferably in Stata?

Best Answer

As @Cagdas Ozgenc writes, the problem is that GARCH does not forecast future realizations (which you can observe), but future volatility (which you cannot observe). Thus, classical point forecast error (or accuracy) measures don't make sense.

So, how do we evaluate a GARCH volatility forecast? In practice, one usually does not only forecast volatility with GARCH but also adds a distributional assumption (typically a normal or a t distribution), which turns the volatility forecast into a density forecast. The question then becomes how to evaluate a density forecast.
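To make this concrete, here is a minimal Python sketch (rather than Stata, purely as an illustration of the idea). It assumes you have exported the realized returns together with the one-step-ahead conditional mean and conditional variance forecasts from your fitted GARCH model; the array names `returns`, `mu` and `sigma2` are placeholders for your own data, not Stata output.

```python
import numpy as np
from scipy import stats

# Placeholder data: realized returns and one-step-ahead GARCH(1,1) forecasts
# of the conditional mean (mu) and conditional variance (sigma2), e.g.
# exported from Stata after estimating the model and running predict.
rng = np.random.default_rng(0)
returns = rng.standard_normal(500) * 0.01   # stand-in for observed returns
mu = np.zeros(500)                          # stand-in conditional mean forecasts
sigma2 = np.full(500, 0.0001)               # stand-in conditional variance forecasts

# Under a normal assumption, each one-step-ahead forecast is a full
# predictive distribution N(mu_t, sigma2_t), not a point forecast.
predictive = stats.norm(loc=mu, scale=np.sqrt(sigma2))

# Example: the predictive density evaluated at each realized return.
density_at_obs = predictive.pdf(returns)
```

For a t-GARCH model you would replace the normal with a (standardized) t distribution using the estimated degrees of freedom.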

The classical way of evaluating a density forecast is to calculate its Probability Integral Transform (PIT), plot a histogram of the PIT values and check whether they are uniformly distributed. Diebold, Gunther & Tay (1998, International Economic Review) is the classic reference - note that they give a very nice example using t-GARCH processes. Tay & Wallis (2000, Journal of Forecasting) is a somewhat newer overview.
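Continuing the hypothetical setup above, the PIT is simply the predictive CDF evaluated at each realization. A minimal sketch of the check Diebold, Gunther & Tay describe (a histogram of the PIT values, here supplemented with a Kolmogorov–Smirnov test against the uniform distribution):

```python
import numpy as np
from scipy import stats
import matplotlib.pyplot as plt

# Same placeholder setup as in the previous sketch.
rng = np.random.default_rng(0)
returns = rng.standard_normal(500) * 0.01
mu = np.zeros(500)
sigma2 = np.full(500, 0.0001)

# Probability Integral Transform: predictive CDF evaluated at the realization.
# If the predictive densities are correct, the PIT values are iid U(0, 1).
pit = stats.norm.cdf(returns, loc=mu, scale=np.sqrt(sigma2))

# Visual check: the histogram should be roughly flat.
plt.hist(pit, bins=20, edgecolor="black")
plt.title("PIT histogram (should look uniform)")
plt.show()

# A formal, if somewhat crude, supplement: KS test against U(0, 1).
print(stats.kstest(pit, "uniform"))
```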

However, recent research has focused on the shortcomings of the PIT. It turns out that systematically wrong forecasts can still give uniform histograms. Gneiting, Balabdaoui & Raftery (2007, JRSS B) give some disconcerting examples and propose scoring rules as a remedy. These are less intuitive than the PIT, but they simultaneously evaluate calibration and sharpness of predictive distributions. Gneiting & Katzfuss (2014, Annual Review of Statistics and Its Application) give a more up-to-date overview of density forecasting and evaluation.
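Scoring rules are also straightforward to compute once you have the predictive distributions. A hedged sketch under the same placeholder setup, using two common proper scoring rules: the log score (negative predictive log density) and the standard closed-form CRPS for a normal predictive distribution. Lower average scores indicate better forecasts, so they can be used to compare, say, GARCH(1,1) against a competing volatility model over the turmoil period.

```python
import numpy as np
from scipy import stats

# Same placeholder setup as in the previous sketches.
rng = np.random.default_rng(0)
returns = rng.standard_normal(500) * 0.01
mu = np.zeros(500)
sigma = np.sqrt(np.full(500, 0.0001))

# Log score: negative log of the predictive density at the realization.
log_score = -stats.norm.logpdf(returns, loc=mu, scale=sigma)

# CRPS for a normal predictive distribution (standard closed form),
# with z the standardized forecast error.
z = (returns - mu) / sigma
crps = sigma * (z * (2 * stats.norm.cdf(z) - 1)
                + 2 * stats.norm.pdf(z)
                - 1 / np.sqrt(np.pi))

# Average the scores over the evaluation window; lower is better.
print("mean log score:", log_score.mean())
print("mean CRPS:     ", crps.mean())
```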