Solved – GARCH vs SV for Forecasting

Tags: bayesian, garch, predictive-models, stochastic-processes, time-series

I believe I understand how GARCH-family and stochastic volatility (SV) models differ in their construction and in their assumptions about the volatility states: the GARCH family treats the conditional variance as a deterministic function of past observations, while SV models treat it as a latent stochastic process.

My primary goal is to improve 1-step-ahead forecasts, mostly for financial and macro-economic time series. I will be working a lot with energy commodities: crude oil, coal, and natural gas are some examples.

For a series $\{y_t\}$, I will judge my models' performance with a log scoring rule of the form
$$
\sum_{t=s}^{T}\log P(y_t \mid \mathbf{y}_{t-1})
$$

where $\mathbf{y}_{t-1}=(y_1,\dots,y_{t-1})$, $1 \leq s < T$, and $P(y_t \mid \mathbf{y}_{t-1})$ is the Bayesian posterior predictive density estimated from $y_1,\dots,y_{t-1}$, i.e.
$$
P(y_t \mid \mathbf{y}_{t-1})=\int P(y_t \mid \theta,\mathbf{y}_{t-1})\,P(\theta \mid \mathbf{y}_{t-1},\alpha)\,d\theta=\mathbb{E}_{\theta \mid \mathbf{y}_{t-1},\alpha}\!\left[P(y_t \mid \theta,\mathbf{y}_{t-1})\right]
$$
where $\alpha$ denotes the prior hyperparameters. This expectation can be computed analytically in special cases or approximated empirically with MCMC draws.
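As a concrete illustration, here is a minimal Python sketch of that Monte Carlo approximation for a Gaussian GARCH(1,1). The `posterior_draws` argument is a hypothetical placeholder for whatever MCMC routine produces draws of $\theta=(\omega,\alpha,\beta)$ from the posterior; it is not from any particular library, and the GARCH parameter `alpha` here is unrelated to the prior hyperparameter $\alpha$ above.

```python
import numpy as np

def cond_var(theta, y_past):
    """One-step-ahead GARCH(1,1) variance, filtered through past data."""
    omega, alpha, beta = theta
    h = np.var(y_past)                  # initialise at the sample variance
    for y in y_past:
        h = omega + alpha * y**2 + beta * h
    return h

def log_predictive_score(y, s, posterior_draws):
    """Approximate sum_{t=s}^{T} log P(y_t | y_{1:t-1}) by averaging the
    conditional density over MCMC draws of theta at each t (assumes s > 1
    so that y_past is never empty)."""
    score = 0.0
    for t in range(s, len(y) + 1):      # t is 1-indexed, as in the formula
        y_past, y_t = y[: t - 1], y[t - 1]
        draws = posterior_draws(y_past)           # (S, 3) array of theta draws
        h = np.array([cond_var(th, y_past) for th in draws])
        dens = np.exp(-0.5 * y_t**2 / h) / np.sqrt(2 * np.pi * h)
        score += np.log(dens.mean())    # Monte Carlo estimate of E[p(y_t|...)]
    return score
```

Re-running the sampler at every $t$ is expensive; in practice the posterior is often updated only periodically, but the averaging step stays the same.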

Of course, I can always run both types of models and compare them on this score, but given the overwhelming number of candidate models at my disposal, it would be very convenient to know beforehand which class, GARCH or SV, is more likely to be better suited to the task.

Specifically I am interested in the following:

  1. For the purpose of probabilistic forecasting of financial and macro-economic time series: Is it known whether one class of model has a tendency to perform better than the other in general?
  2. From a computational standpoint, is one class of model significantly more convenient to estimate in a Bayesian paradigm than the other, specifically for computing the posterior predictive log score above?
  3. Will GARCH and SV models tend to produce similar forecasts in general, or will the forecasts differ drastically between the two classes? Moreover, are the consequences of misspecifying an SV process by modelling it as a GARCH, or vice versa, likely to be significant for a univariate time series, or is it usually "not that big of a deal"? (I know this is probably a big "it depends", but any general consensus, practical experience, or related papers would be greatly appreciated.)

I know these are very broad questions. If anyone has recommendations on how to make this question more answerable, I welcome constructive comments.

Best Answer

This is more an extended comment than an answer, as the right choice really depends on the data series at hand.

1)

For the purpose of probabilistic forecasting of financial and macro-economic time series: Is it known whether one class of model has a tendency to perform better than the other in general?

There are quite a few papers comparing the forecasting performance of a broad range of volatility models. My recommendation is to look around and read some of them in order to determine which models have performed well for the kind of series you are interested in.

Hansen and Lunde (2005): A Forecast Comparison of Volatility Models: Does Anything Beat a GARCH(1,1)? They compare 330 volatility models on exchange-rate and IBM return data and find that no model significantly outperforms the GARCH(1,1). They do not, however, consider SV models.

Hansen, Lunde and Nason (2003): Choosing the Best Volatility Models: The Model Confidence Set Approach. They compare 55 different volatility models, including fractional GARCH and SV models.

Fleming and Kirby (2003): A Closer Look at the Relation between GARCH and Stochastic Autoregressive Volatility. They compare the two model classes directly and find that they deliver broadly similar results.

Giot and Laurent (2003): Value-at-Risk for Long and Short Trading Positions. They compare the performance of various univariate and multivariate ARCH-class models in VaR modelling.

Giot and Laurent (2004): Modelling Daily Value-at-Risk Using Realized Volatility and ARCH Type Models. They compare ARCH-type models against models based on daily realized volatility for VaR modelling.

These are just a few examples; there are loads more papers out there.

2)

From a computational standpoint, is one class of model significantly more convenient to estimate in a Bayesian paradigm than the other, specifically for computing the posterior predictive log score above?

I cannot answer this directly, as I have no experience estimating these models with Bayesian techniques. However, ARCH-type models are easily estimated by maximum likelihood, while SV models can be estimated with the EM algorithm or the Kalman filter (e.g. quasi-maximum likelihood applied to log-squared returns). There are also several ways to evaluate the competing models; one is described in the Hansen, Lunde and Nason (2003) paper above.
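To show how straightforward the MLE route is for ARCH-type models, here is a minimal sketch of a Gaussian GARCH(1,1) fit using `scipy.optimize`. The starting values and bounds are ad hoc choices, and the data is simulated purely for illustration:

```python
import numpy as np
from scipy.optimize import minimize

def garch11_negloglik(params, y):
    """Gaussian negative log-likelihood of a GARCH(1,1):
    h_t = omega + alpha * y_{t-1}^2 + beta * h_{t-1}."""
    omega, alpha, beta = params
    h = np.empty_like(y)
    h[0] = np.var(y)                       # initialise at the sample variance
    for t in range(1, len(y)):
        h[t] = omega + alpha * y[t - 1] ** 2 + beta * h[t - 1]
    return 0.5 * np.sum(np.log(2 * np.pi * h) + y**2 / h)

# Placeholder (demeaned) return series, simulated only for illustration.
rng = np.random.default_rng(0)
y = 0.01 * rng.standard_normal(1000)

res = minimize(
    garch11_negloglik, x0=[1e-6, 0.05, 0.90], args=(y,),
    bounds=[(1e-12, None), (0.0, 1.0), (0.0, 1.0)], method="L-BFGS-B",
)
omega_hat, alpha_hat, beta_hat = res.x
```

An SV model, by contrast, has a latent volatility state, so its likelihood involves an integral over the volatility path; that is why filtering or simulation-based methods are needed instead of a direct analogue of the loop above.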

3)

Will GARCH and SV models tend to produce similar forecasts in general, or will the forecasts differ drastically between the two classes? Moreover, are the consequences of misspecifying an SV process by modelling it as a GARCH, or vice versa, likely to be significant for a univariate time series, or is it usually "not that big of a deal"? (I know this is probably a big "it depends", but any general consensus, practical experience, or related papers would be greatly appreciated.)

Often when modelling financial data with a GARCH model, you find that the sum of the estimated ARCH and GARCH coefficients is very close to 1, indicating an IGARCH (integrated GARCH). This can be a sign of very persistent volatility, but also of misspecification: changing parameters or structural breaks can produce apparent IGARCH behaviour (the residuals may also be non-normal, etc.). In such cases you may get better results from an SV model, as it can capture these changes.
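A quick way to check for this in practice is to fit a GARCH(1,1) and inspect the estimated persistence $\hat\alpha + \hat\beta$. A sketch, assuming the Python `arch` package is available; the simulated return series and the 0.98 cut-off are purely illustrative:

```python
import numpy as np
from arch import arch_model   # common Python package for GARCH estimation

rng = np.random.default_rng(0)
returns = rng.standard_normal(2000)   # placeholder for a real (percent) return series

# Fit a Gaussian GARCH(1,1) with a constant mean and read off the
# estimated persistence alpha + beta.
res = arch_model(returns, vol="GARCH", p=1, q=1).fit(disp="off")
persistence = res.params["alpha[1]"] + res.params["beta[1]"]
print(f"alpha + beta = {persistence:.4f}")

# Near-unit persistence (IGARCH-like) is a hint, not proof, of breaks or
# parameter change; the 0.98 threshold here is ad hoc.
if persistence > 0.98:
    print("Near-IGARCH: check for structural breaks or try an SV model.")
```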

This is a massive literature, so you will need to read up on this topic a bit yourself.