The behavior that you see is due to the presample variance option: rugarch uses the variance of all data points, while EViews backcasts with a smoothing parameter of 0.7. More precisely, EViews initializes the variance with
$\sigma_0^2 = \lambda^T \hat\sigma^2 + (1-\lambda) \sum_{j=0}^{T-1} \lambda^{j} \varepsilon_{T-j}^2$
where $\lambda$ is the backcast parameter (default in EViews: 0.7; default in rugarch, fGarch, and gretl: 1.0) and $\hat\sigma^2$ is the unconditional variance of all residuals $\varepsilon_1, \ldots, \varepsilon_T$. Note that for $\lambda = 1$ the backcast reduces to the unconditional variance, which is why the other packages agree with each other.
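The two initializations can be compared numerically with a small sketch of the formula above; the residual series below is a random placeholder, not the data from the question:

```python
import numpy as np

# Placeholder residual series standing in for the fitted GARCH residuals.
rng = np.random.default_rng(0)
eps = rng.normal(scale=0.01, size=438)
T = len(eps)

def presample_variance(eps, lam):
    """EViews-style backcast: lam^T * sigma_hat^2 + (1-lam) * sum_j lam^j * eps_{T-j}^2."""
    sigma_hat2 = np.mean(eps**2)                 # unconditional variance of the residuals
    weights = (1 - lam) * lam ** np.arange(T)    # exponentially decaying weights
    return lam**T * sigma_hat2 + np.sum(weights * eps[::-1]**2)

# lam = 1 recovers the unconditional variance used by rugarch, fGarch, and gretl:
assert np.isclose(presample_variance(eps, 1.0), np.mean(eps**2))
# lam = 0.7 (the EViews default) weights recent squared residuals more heavily:
print(presample_variance(eps, 0.7))
```

With a persistent GARCH process the two starting values can differ noticeably, which propagates through the recursion and shifts the estimates.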
This is the explanation in the EViews manual for this choice of variance initialization (whatever "outperform" means for them):

Our experience has been that GARCH models initialized using backcast exponential smoothing often outperform models initialized using the unconditional variance.
Using the same approach as rugarch, we get:
Dependent Variable: RETURN
Method: ML - ARCH (Marquardt) - Normal distribution
Date: 10/30/17 Time: 20:26
Sample: 1 438
Included observations: 438
Convergence achieved after 11 iterations
Presample variance: unconditional
GARCH = C(2) + C(3)*RESID(-1)^2 + C(4)*GARCH(-1)
Variable        Coefficient   Std. Error   z-Statistic   Prob.

C               0.001093      0.000547     1.996164      0.0459

Variance Equation

C               1.33E-06      9.92E-07     1.345676      0.1784
RESID(-1)^2     0.026633      0.007118     3.741463      0.0002
GARCH(-1)       0.963334      0.011717     82.21705      0.0000
which is quite close to the estimates of rugarch. However, we still see some differences in the standard errors: they are lower for rugarch, which leads to different conclusions when testing hypotheses about the GARCH parameters.
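To see how a roughly halved standard error feeds through to the hypothesis tests, here is a small sketch of the two-sided normal p-value; the halved value is an assumption mirroring the rugarch observation, not its actual output:

```python
from math import erf, sqrt

# Two-sided p-value under the asymptotic normality of the z-statistic.
def p_value(coef, se):
    z = abs(coef / se)
    return 2 * (1 - 0.5 * (1 + erf(z / sqrt(2))))

coef = 0.026633                     # RESID(-1)^2 estimate from the EViews output above
print(p_value(coef, 0.007118))      # EViews SE: close to the reported Prob. of 0.0002
print(p_value(coef, 0.007118 / 2))  # hypothetical halved SE: far smaller p-value
```

Since the p-value falls sharply as the standard error shrinks, a halving of the reported SEs can easily flip a borderline coefficient from insignificant to significant.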
For comparison, these are the estimates using the fGarch library for R:
            Estimate   Std. Error   t value   Pr(>|t|)
mu         1.088e-03    5.198e-04     2.094    0.03627 *
omega      1.333e-06    1.060e-06     1.257    0.20864
alpha1     2.663e-02    9.728e-03     2.738    0.00618 **
beta1      9.633e-01    1.283e-02    75.078    < 2e-16 ***
and gretl:
             coefficient    std. error       z       p-value
  -----------------------------------------------------------------
  const      0.00108849     0.000519879     2.094    0.0363   **
  alpha(0)   1.33330e-06    1.12351e-06     1.187    0.2353
  alpha(1)   0.0266348      0.0101192       2.632    0.0085   ***
  beta(1)    0.963349       0.0138129      69.74     0.0000   ***
which seem to be more similar to EViews than to rugarch.
What astonishes me most is the drop in the standard errors of the rugarch estimates: e.g. from 57 to 28, or from 1.7 to 0.9, as if they had been halved. This finding may be worth posting on the R-SIG-Finance mailing list, since we don't observe such a drop in the estimated standard errors for the other packages and programs.
fGarch without mean:
            Estimate   Std. Error   t value   Pr(>|t|)
omega      1.488e-06    1.219e-06     1.221    0.22227
alpha1     2.517e-02    9.602e-03     2.621    0.00875 **
beta1      9.635e-01    1.424e-02    67.681    < 2e-16 ***
gretl without mean:
             coefficient    std. error       z       p-value
  -----------------------------------------------------------------
  alpha(0)   1.48801e-06    1.31591e-06     1.131    0.2581
  alpha(1)   0.0251710      0.0101023       2.492    0.0127   **
  beta(1)    0.963509       0.0156173      61.69     0.0000   ***
EViews without mean:
Variable        Coefficient   Std. Error   z-Statistic   Prob.

Variance Equation

C               1.49E-06      1.13E-06     1.314825      0.1886
RESID(-1)^2     0.025179      0.006580     3.826449      0.0001
GARCH(-1)       0.963505      0.012718     75.75811      0.0000
Best Answer
Such tiny differences between AIC values would normally mean all the models are roughly equally good (or equally bad). However, the table contains AIC per observation, not AIC, as rugarch has got the nomenclature wrong. To obtain the actual AIC, you would have to multiply by the sample size $T$. If $T$ is 100 or more, the differences are quite salient, and there seems to be a clear winner at 2.314. (I would also check whether the reported AIC per observation has been multiplied by $-1$; then the winner would be the highest value, i.e. 2.3746, and it would not be such a clear-cut case unless $T$ is about 666 or more. I use 666 because it yields a difference of 2 between the AIC values of the best and the next-best model. But according to p. 28 of the rugarch vignette, the signs are not mixed up, so there is no need to multiply by $-1$.)
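The scaling argument can be sketched as follows; the next-best per-observation value is a hypothetical placeholder chosen to reproduce the gap of roughly 0.003 implied by the figure of 666:

```python
# rugarch reports AIC per observation, so the usual "difference of about 2"
# rule of thumb applies only after multiplying by the sample size T.
best_per_obs = 2.3746    # highest reported value (the winner if signs were flipped)
next_per_obs = 2.3716    # hypothetical next-best value: per-observation gap of 0.003

gap_per_obs = best_per_obs - next_per_obs
T_needed = 2 / gap_per_obs          # sample size at which the total-AIC gap reaches 2
print(round(T_needed))              # about 666

# At, e.g., T = 438 (the sample size in the outputs above) this gap is not decisive:
print(round(gap_per_obs * 438, 2))  # total-AIC difference below the threshold of 2
```

In other words, whether the per-observation differences matter depends entirely on $T$, which is why the table alone cannot settle the model comparison.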