The solution is to use another package called rugarch.
install.packages("rugarch")
require(rugarch)
Let's construct a $(T \times 2)$ matrix called data, where column 1 is $Y_t$ and column 2 is $X_t$.
data <- cbind(rnorm(1000),rnorm(1000))
We can then estimate the GARCH(1,1) model that I described in my question:
spec <- ugarchspec(variance.model = list(model = "sGARCH", garchOrder = c(1, 1),
                                         submodel = NULL, external.regressors = NULL,
                                         variance.targeting = FALSE),
                   mean.model = list(armaOrder = c(0, 0),
                                     external.regressors = matrix(data[, 2])),
                   distribution.model = "norm", start.pars = list(), fixed.pars = list())
garch <- ugarchfit(spec = spec, data = data[, 1], solver.control = list(trace = 0))
Comparing this to EViews: the R coefficient estimates that are greater than $0.01$ differ from the EViews estimates by approximately $1$ to $2\%$, which is acceptable in my mind. The $t$-statistics differ by more, but this doesn't affect inference.
Yes, you do need to work recursively, plugging in previous values. First you plug in the observed values of $X_t$, and once these are no longer available, you plug in the point predictions.
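A minimal sketch of this recursion, assuming an AR(1) fitted with base R's arima() (the model, seed, and names like `preds` are illustrative, not from the original post):

```r
set.seed(1)
x   <- arima.sim(model = list(ar = 0.6), n = 200)
fit <- arima(x, order = c(1, 0, 0))
phi <- coef(fit)["ar1"]
mu  <- coef(fit)["intercept"]          # arima() reports the mean as "intercept"

h     <- 5
preds <- numeric(h)
last  <- tail(as.numeric(x), 1)        # start from the last observed value
for (i in seq_len(h)) {
  preds[i] <- mu + phi * (last - mu)   # plug in the previous value...
  last     <- preds[i]                 # ...then recurse on the point prediction
}
```

For a pure AR model this hand-rolled recursion reproduces what `predict(fit, n.ahead = h)` returns, which is a useful sanity check that you are plugging values in correctly.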
To get the innovations, you are right that you need the first fitted value. There is no "right" way to get this. Some people (and some software) estimate the initial fits via maximum likelihood, others use heuristics and use, e.g., the first actual observation, or a heuristically smoothed value.
If this is self-study, do whatever your instructor suggested. If you are trying to recreate something your software did, you will need to find out how it fits these initial values. This is often not very well documented.
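To make the initial-value issue concrete, here is a sketch under one assumed setup: simple exponential smoothing with a known smoothing weight, initialised heuristically with the first actual observation (one of the choices mentioned above). The function name `ses_innov` and the data are illustrative only:

```r
ses_innov <- function(y, alpha, init = y[1]) {
  fits    <- numeric(length(y))
  fits[1] <- init                      # heuristic: first fit = first observation
  for (t in 2:length(y)) {
    fits[t] <- alpha * y[t - 1] + (1 - alpha) * fits[t - 1]
  }
  y - fits                             # one-step innovations
}

e <- ses_innov(c(10, 12, 11, 13), alpha = 0.3)
```

With this initialisation the first innovation is $0$ by construction; software that instead estimates the initial fit by maximum likelihood will generally give a different first residual, which is exactly why results can disagree across packages.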
Best Answer
Assuming the series is stationary, you know that $Var[X_t]=Var[X_{t-1}]=Var[X_{t-2}]$. You can figure out the rest yourself.
Also, your process is a restricted ARMA(2,1), because you set the coefficient of the first AR lag to $0$.
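For concreteness, suppose the restricted ARMA(2,1) is $X_t = \varphi_2 X_{t-2} + \theta_1 \varepsilon_{t-1} + \varepsilon_t$ with white noise $\varepsilon_t$ of variance $\sigma^2$ (the exact coefficients are not given in the post). Taking variances on both sides and noting that $X_{t-2}$ depends only on shocks up to time $t-2$, so $Cov(X_{t-2}, \varepsilon_{t-1}) = Cov(X_{t-2}, \varepsilon_t) = 0$:

$$Var[X_t] = \varphi_2^2\, Var[X_{t-2}] + (1 + \theta_1^2)\,\sigma^2.$$

Using stationarity, $Var[X_t] = Var[X_{t-2}]$, and solving:

$$Var[X_t] = \frac{(1 + \theta_1^2)\,\sigma^2}{1 - \varphi_2^2}.$$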