R Forecasting – Aggregating Multiperiod DCC-GARCH Forecast Covariance Matrices

covariance forecasting garch r volatility

Say I fit a DCC-GARCH(1,1) model to a dataset of weekly returns for four assets.

I forecast the covariance matrix for the next month (so four weekly steps ahead). This gives me four $4 \times 4$ covariance matrices, one for each of $t+1$, $t+2$, $t+3$, and $t+4$, each on the weekly scale.

How do I aggregate these four weekly covariance matrices into a single covariance matrix for the forecast month? Is it as simple as element-wise summing the matrices?

$$
\widehat{\Sigma}_{t+1, t+4} = \widehat{\Sigma}_{t+1} + \widehat{\Sigma}_{t+2} + \widehat{\Sigma}_{t+3} + \widehat{\Sigma}_{t+4}
$$

In case it matters, the underlying returns are log returns.

Any help is appreciated. Apologies for my lack of knowledge here. Thank you!!

While this isn't really a programming question, here's my code in case it helps.

library(mvtnorm)
library(rmgarch)
library(rugarch)

# make dummy returns
means <- c(0.05, 0.02, 0.08, 0.10)
stdevs <- c(0.10, 0.07, 0.15, 0.25)
returns <- rmvnorm(1000, mean=means, sigma=diag(stdevs^2))

# GARCH(1,1) Specification
garch_spec <- ugarchspec(
    variance.model=list(model="fGARCH", submodel = "GARCH", garchOrder=c(1, 1)),
    mean.model=list(armaOrder=c(0, 0), include.mean=FALSE),
    distribution.model="norm"
)

# create multispec--a set of GARCH(1,1) specifications on each series
ms <- multispec(replicate(ncol(returns), garch_spec))

# turn multispec into a DCC spec
dcc_spec <- dccspec(ms)

# fit the DCC spec to returns
dcc_fit <- dccfit(dcc_spec, returns)

# 1-month forecast (i.e. 4 weeks ahead)
month_forecast <- dccforecast(dcc_fit, n.ahead=4)

# variance-covariance matrices, one for each forecast week
vcvs <- rcov(month_forecast)[[1]]

### This produces the following four covariance matrices:
, , T+1

            Asset_1      Asset_2      Asset_3      Asset_4
Asset_1 0.013089899 0.0011057502 0.0024347432 0.0019650439
Asset_2 0.001105750 0.0053824805 0.0005703209 0.0003636192
Asset_3 0.002434743 0.0005703209 0.0284183605 0.0062753705
Asset_4 0.001965044 0.0003636192 0.0062753705 0.0664394251

, , T+2

            Asset_1      Asset_2      Asset_3      Asset_4
Asset_1 0.013089760 0.0011048251 0.0024324506 0.0019626789
Asset_2 0.001104825 0.0053825861 0.0005702948 0.0003628085
Asset_3 0.002432451 0.0005702948 0.0284176578 0.0062705168
Asset_4 0.001962679 0.0003628085 0.0062705168 0.0664401546

, , T+3

            Asset_1      Asset_2      Asset_3      Asset_4
Asset_1 0.013089622 0.0011039007 0.0024301600 0.0019603159
Asset_2 0.001103901 0.0053826916 0.0005702688 0.0003619984
Asset_3 0.002430160 0.0005702688 0.0284169559 0.0062656673
Asset_4 0.001960316 0.0003619984 0.0062656673 0.0664408834

, , T+4

            Asset_1      Asset_2      Asset_3     Asset_4
Asset_1 0.013089484 0.0011029771 0.0024278715 0.001957955
Asset_2 0.001102977 0.0053827970 0.0005702427 0.000361189
Asset_3 0.002427872 0.0005702427 0.0284162546 0.006260822
Asset_4 0.001957955 0.0003611890 0.0062608221 0.066441611
###

# aggregate into a single 1-month variance-covariance matrix
aggregated_vcv <- rowSums(vcvs, dims = 2) # Is this allowed???

### This produces the following:
            Asset_1     Asset_2     Asset_3     Asset_4
Asset_1 0.052358766 0.004417453 0.009725225 0.007845994
Asset_2 0.004417453 0.021530555 0.002281127 0.001449615
Asset_3 0.009725225 0.002281127 0.113669229 0.025072377
Asset_4 0.007845994 0.001449615 0.025072377 0.265762074
###
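In case it's useful: here is a quick, self-contained check (on a dummy $4 \times 4 \times 4$ array, nothing to do with the fitted model) that `rowSums(x, dims = 2)` really is the element-wise sum of the third-dimension slices, i.e. exactly the sum I wrote above:

```r
# rowSums(x, dims = 2) keeps the first two dimensions ("rows") and sums
# over the remaining one, so for a 3-D array it adds up the h-step slices.
set.seed(1)
x <- array(rnorm(4 * 4 * 4), dim = c(4, 4, 4))

a <- rowSums(x, dims = 2)                      # the shortcut
b <- x[, , 1] + x[, , 2] + x[, , 3] + x[, , 4] # explicit element-wise sum

all.equal(a, b)  # TRUE
```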

Best Answer

Is it as simple as element-wise summing the matrices?

$$ \widehat{\Sigma}_{t+1, t+4} = \widehat{\Sigma}_{t+1} + \widehat{\Sigma}_{t+2} + \widehat{\Sigma}_{t+3} + \widehat{\Sigma}_{t+4} $$

Yes, I think it is that simple. Since the returns are log returns, the one-month return is just the sum of the four weekly returns, so the monthly covariance matrix is the covariance matrix of that sum, and the question reduces to whether the cross-period covariance terms vanish. They do, because the standardized innovations $z$ are assumed to be i.i.d. and thus uncorrelated across time: $\rho(z_{i,s}, z_{j,t}) = 0$ for all asset pairs $(i, j)$ whenever $s \neq t$. (Here $i, j$ index the assets and $s, t$ index time periods.) The unstandardized innovations are likewise conditionally uncorrelated, because each innovation's conditional variance is a function of the conditioning information and so can be treated as a multiplicative constant. That is sufficient to establish the equality you proposed.
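To make this concrete, here is a univariate sketch in base R (toy GARCH(1,1) parameter values of my own choosing, not your fitted model) that simulates many 4-step return paths from a common starting state and checks that the variance of the 4-week sum matches the sum of the four per-step conditional variance forecasts:

```r
set.seed(1)

# Toy GARCH(1,1) parameters and the conditional variance known at time t
omega <- 1e-5; alpha <- 0.05; beta <- 0.90
sigma2_next <- 4e-4          # sigma^2_{t+1}, known given information at t
n_paths <- 200000
horizon <- 4

# Analytic h-step-ahead variance forecasts E[sigma^2_{t+h} | F_t]:
# mean-reversion toward the unconditional variance at rate alpha + beta
persistence <- alpha + beta
uncond <- omega / (1 - persistence)
fcast <- uncond + persistence^(0:(horizon - 1)) * (sigma2_next - uncond)

# Monte Carlo: simulate 4-step return paths, all from the same state
s2 <- rep(sigma2_next, n_paths)
total <- numeric(n_paths)
for (h in 1:horizon) {
  eps <- sqrt(s2) * rnorm(n_paths)          # r_{t+h} = sigma_{t+h} z_{t+h}
  total <- total + eps                      # running 4-week log-return sum
  s2 <- omega + alpha * eps^2 + beta * s2   # GARCH recursion for next step
}

# Variance of the summed return vs. the sum of the per-step forecasts:
# these agree (up to simulation noise) because the cross terms vanish
c(mc = var(total), analytic = sum(fcast))
```

The martingale-difference structure is doing all the work here: each $\varepsilon_{t+h}$ has conditional mean zero given the past, so every cross-period covariance drops out and only the per-step variances remain.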