Solved – Why check the standardized residuals of an ARCH process

Tags: arch, residuals, time series

Suppose I want to model some returns by

$$
\begin{aligned}
r_t &= \mu_t + a_t \\
a_t &=\sigma_t \epsilon_t \\
\sigma_t^2 &= \alpha_0 + \alpha_1 a_{t-1}^2 + \dots + \alpha_m a_{t-m}^2
\end{aligned}
$$

where $\mu_t$ denotes a stationary, low-order ARMA process and the error term $a_t$ follows an ARCH($m$) process.
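For concreteness, here is a minimal simulation sketch of this setup in Python (numpy only). It assumes an AR(1) mean equation and an ARCH(1) variance equation with illustrative parameter values; none of these numbers come from the question.

```python
import numpy as np

# Minimal sketch: simulate r_t = mu_t + a_t with an AR(1) mean and ARCH(1) errors.
# All parameter values are illustrative only.
rng = np.random.default_rng(0)

n = 1000
phi = 0.3                    # AR(1) coefficient of the mean equation mu_t
alpha0, alpha1 = 0.1, 0.4    # ARCH(1) parameters (alpha1 < 1 for stationarity)

r = np.zeros(n)
a = np.zeros(n)
sigma2 = np.zeros(n)
sigma2[0] = alpha0 / (1 - alpha1)   # unconditional variance as a starting value

for t in range(1, n):
    sigma2[t] = alpha0 + alpha1 * a[t - 1] ** 2   # ARCH recursion for sigma_t^2
    eps = rng.standard_normal()                   # epsilon_t ~ N(0, 1)
    a[t] = np.sqrt(sigma2[t]) * eps               # a_t = sigma_t * epsilon_t
    r[t] = phi * r[t - 1] + a[t]                  # r_t = mu_t + a_t with mu_t = phi * r_{t-1}
```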

The literature says that the standardized residuals of the ARCH model have to be white noise for the model to be well specified.

Can someone please explain to me

  1. how the residuals of the ARCH model are precisely defined within the above setup, and
  2. why I have to check the standardized residuals instead of the regular ones?

Best Answer

  1. The (regular) residuals are $\hat a_t = r_t - \hat\mu_t$, i.e. the fitted values of $a_t$.
    The standardized residuals are $\hat\epsilon_t = \hat a_t / \hat\sigma_t$, i.e. the fitted values of $\epsilon_t$.
  2. The model assumes that the standardized errors $\epsilon_t$ have a certain distribution (e.g. Normal, Student-$t$ or the like) with zero mean and unit variance. The likelihood function for the model is built on this assumption. If the assumption does not hold, the likelihood function is misspecified and the maximum likelihood estimator (MLE) may lose its desirable properties (although it can still work reasonably well as a quasi MLE in some cases). Therefore, you check the empirical counterpart of the standardized (rather than regular) errors, which is the standardized residuals.
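In practice, the check in point 2 amounts to extracting $\hat\epsilon_t = \hat a_t / \hat\sigma_t$ from the fitted model and testing it (and its square) for remaining autocorrelation. Below is a minimal sketch using the arch and statsmodels packages in Python; the placeholder data, the AR(1)-ARCH(2) specification, and the lag choice of 10 in the Ljung-Box test are all illustrative assumptions, not part of the question.

```python
import numpy as np
from arch import arch_model
from statsmodels.stats.diagnostic import acorr_ljungbox

# Placeholder returns; substitute your own series (or the simulated `r` above).
rng = np.random.default_rng(0)
r = rng.standard_normal(1000)

# Fit an AR(1) mean with ARCH(2) variance; the orders here are illustrative.
am = arch_model(r, mean="AR", lags=1, vol="ARCH", p=2)
res = am.fit(disp="off")

resid = np.asarray(res.resid)           # regular residuals: a_hat_t = r_t - mu_hat_t
std_resid = np.asarray(res.std_resid)   # standardized residuals: eps_hat_t = a_hat_t / sigma_hat_t
std_resid = std_resid[~np.isnan(std_resid)]   # drop the initial NaN caused by the AR lag

# If the model is well specified, eps_hat_t should look like white noise
# and eps_hat_t^2 should show no remaining ARCH effects.
print(acorr_ljungbox(std_resid, lags=[10]))
print(acorr_ljungbox(std_resid ** 2, lags=[10]))
```

Note that the same Ljung-Box test applied to the regular residuals would mix up the (possibly fine) mean specification with the time-varying conditional variance, which is exactly why the standardized version is the relevant diagnostic.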