GARCH – Sensitivity to Distribution Assumptions in GARCH Models

assumptions, distributions, garch, model comparison, model selection

I am trying to fit an ARMA(4,4)-GARCH(1,1) model to return data whose distribution is highly leptokurtic. I plan to check whether autocorrelation remains in the data even after specifying a GARCH model.

I estimated the GARCH model under different distributional assumptions (Normal, Student's t, and GED), and both the log-likelihood and the AIC suggest that a t-distribution with 6 degrees of freedom fits best.
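For concreteness, this comparison boils down to computing AIC = 2k − 2·logL for each fitted variant and ranking. A minimal sketch, where the log-likelihoods and parameter counts are purely illustrative placeholders for whatever your estimation routine reports:

```python
# Hypothetical (log-likelihood, parameter count) for each fitted
# ARMA(4,4)-GARCH(1,1) variant; replace with your estimator's output.
fits = {
    "normal": {"loglik": -1532.4, "n_params": 12},  # ARMA + constant + GARCH
    "t":      {"loglik": -1501.7, "n_params": 13},  # + degrees of freedom
    "ged":    {"loglik": -1508.9, "n_params": 13},  # + shape parameter
}

def aic(loglik, k):
    """Akaike information criterion: 2k - 2*logL (smaller is better)."""
    return 2 * k - 2 * loglik

# Rank candidate distributions by AIC, best first.
ranked = sorted(fits, key=lambda d: aic(fits[d]["loglik"], fits[d]["n_params"]))
best = ranked[0]
```

With these illustrative numbers the t-distribution wins, matching the pattern described above; the same loop extends naturally to other distributions or lag orders.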

My problem, however, is that the estimated coefficients are very sensitive both to the distributional assumption and to the GARCH specification.

Since I mainly want to interpret the coefficients rather than use the model for forecasting, I am a bit stuck.

So my questions are:

  1. Does anyone have suggestions on how to mitigate this sensitivity, or is such sensitivity common in GARCH models?
  2. Is there a better way to identify the specification with the best-fitting distribution than checking different specifications manually?

Many thanks in advance!

Best Answer

  1. Given a sufficiently large sample, I do not see a reason to mitigate the sensitivity in the first place. Sensitivity is desirable. It allows the model to reflect regularities in the data, and this is what we use models for (not only GARCH but also more generally).
    On the other hand, if your sample is small and the results vary a lot among similar model specifications, you may be heavily overfitting the data. Try some more parsimonious model specifications instead.
    Also note that ARMA models with different lag orders and different coefficient values can nevertheless produce similar patterns. Thus you may gain more insight by comparing the impulse-response functions (IRFs) of the ARMA models than by comparing their coefficient values.
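The IRF comparison can be done directly from the estimated coefficients: the impulse response of an ARMA model at horizon j is its j-th MA(∞) weight ψ_j, given by the standard recursion ψ_0 = 1, ψ_j = θ_j + Σ_i φ_i ψ_{j−i}. A minimal sketch (the coefficient values are illustrative, chosen to show two different-looking models with identical IRFs):

```python
def arma_irf(phi, theta, horizon):
    """MA(inf) weights psi_j of an ARMA model:
    psi_0 = 1, psi_j = theta_j + sum_i phi_i * psi_{j-i} (theta_j = 0 for j > q)."""
    psi = [1.0]
    for j in range(1, horizon + 1):
        val = theta[j - 1] if j <= len(theta) else 0.0
        for i, phi_i in enumerate(phi, start=1):
            if j - i >= 0:
                val += phi_i * psi[j - i]
        psi.append(val)
    return psi

# An AR(1) and a pure MA(5) with very different coefficients
# can produce (near-)identical impulse responses:
irf_a = arma_irf([0.5], [], 5)                                  # AR(1), phi = 0.5
irf_b = arma_irf([0.0], [0.5, 0.25, 0.125, 0.0625, 0.03125], 5)  # MA(5)
```

Plotting `irf_a` and `irf_b` side by side makes the answer's point visually: the coefficient vectors disagree, yet the dynamic responses coincide.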

  2. I do not think there is. The problem is not specific to GARCH models, however. It is common to a wide range of statistical models.

In summary, I would try different models, assess their assumptions (by running diagnostics on the standardized residuals), and pick a model that offers a good trade-off between statistical adequacy (based on the diagnostics) and parsimony (based on model complexity). The AIC is one measure that can aid you in that.
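The residual diagnostics mentioned above can be automated. A common first check is a Ljung-Box test for leftover autocorrelation in the standardized residuals; a minimal pure-Python sketch (in practice you would use statsmodels' `acorr_ljungbox`, and the series here are simulated stand-ins for real residuals):

```python
import random
from statistics import mean

def ljung_box(x, lags):
    """Ljung-Box Q statistic for remaining autocorrelation in a series
    (e.g. standardized residuals); compare to chi-square with `lags` dof."""
    n = len(x)
    m = mean(x)
    denom = sum((v - m) ** 2 for v in x)
    q = 0.0
    for k in range(1, lags + 1):
        # Lag-k sample autocorrelation.
        rho_k = sum((x[i] - m) * (x[i - k] - m) for i in range(k, n)) / denom
        q += rho_k ** 2 / (n - k)
    return n * (n + 2) * q

random.seed(1)
noise = [random.gauss(0.0, 1.0) for _ in range(500)]  # "clean" residuals
ar = [0.0]
for e in noise:
    ar.append(0.9 * ar[-1] + e)  # residuals with structure left behind
ar = ar[1:]

q_clean = ljung_box(noise, 10)  # modest: no leftover dependence
q_bad = ljung_box(ar, 10)       # large: the model missed dynamics
```

Running such a check across the candidate specifications, alongside the AIC, gives the adequacy-versus-parsimony comparison described above without any manual inspection.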
