On the Autocorrelation Matrix of an ARMA(2,2) to Derive the Yule-Walker Equations

arma, autocorrelation, self-study

For an AR(2) I can derive the Yule-Walker equations: $$\begin{cases} \rho_1=\alpha_1+\alpha_2\rho_1 \\ \rho_2=\alpha_1\rho_1+\alpha_2 \\ \rho_k=\alpha_1\rho_{k-1}+\alpha_2\rho_{k-2} \end{cases}$$ starting from the autocorrelation matrix.
But how can I manage the MA part to get the complete autocorrelation matrix in the case of an ARMA(2,2)?
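For reference, the AR(2) system above is linear in $(\alpha_1, \alpha_2)$ and can be solved directly; here is a minimal Python sketch, using hypothetical values for $\rho_1$ and $\rho_2$ (illustrative placeholders, not real data):

```python
import numpy as np

# Hypothetical sample autocorrelations (illustrative placeholders only)
rho1, rho2 = 0.6, 0.4

# AR(2) Yule-Walker system from the first two equations above:
#   rho1 = alpha1 + alpha2*rho1
#   rho2 = alpha1*rho1 + alpha2
R = np.array([[1.0,  rho1],
              [rho1, 1.0 ]])
r = np.array([rho1, rho2])
alpha1, alpha2 = np.linalg.solve(R, r)
print(alpha1, alpha2)   # AR coefficients implied by rho1, rho2
```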

Best Answer

Let's define a general ARMA model of orders $(p,q)$ as follows:

$$ \psi_t \equiv \sum_{i=0}^p \alpha_i\, y_{t-i} = \sum_{i=0}^q \theta_i\, \epsilon_{t-i} \,, \mbox{ with } \epsilon_t \sim NID\,(0, \sigma^2_\epsilon) \,. $$

Here $\alpha_0$ and $\theta_0$ are normalised to $1$.

You can check that multiplying $\psi_t$ by $\psi_{t-\tau}$ and taking expectations on both sides of the equation yields:

\begin{equation} \sum_{i=0}^p \sum_{j=0}^p \alpha_i \alpha_j \gamma_{\tau+j-i} = \sigma^2_\epsilon \sum_{j=0}^{q-\tau} \theta_j \theta_{j+\tau} \,, \end{equation}

where $\gamma_i$ is the autocovariance of order $i$. (Note that the sum on the right-hand side is empty, hence zero, for $\tau > q$.)
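As a numerical sanity check (my own illustration, not from the source), the following Python sketch simulates an ARMA(2,2) under the sign convention above, with hypothetical coefficients, and compares both sides of the displayed identity using sample autocovariances:

```python
import numpy as np

rng = np.random.default_rng(0)

# ARMA(2,2) in the answer's sign convention:
#   y_t + a1*y_{t-1} + a2*y_{t-2} = e_t + t1*e_{t-1} + t2*e_{t-2}
alpha = np.array([1.0, -0.5, 0.25])   # (alpha_0, alpha_1, alpha_2), chosen stationary
theta = np.array([1.0, 0.4, -0.3])    # (theta_0, theta_1, theta_2), chosen invertible
sigma2 = 1.0
p = q = 2

n = 500_000
e = rng.normal(0.0, np.sqrt(sigma2), n)
y = np.zeros(n)
for t in range(2, n):
    y[t] = (-alpha[1]*y[t-1] - alpha[2]*y[t-2]
            + e[t] + theta[1]*e[t-1] + theta[2]*e[t-2])

def gamma(k):
    """Sample autocovariance of order k (gamma_{-k} = gamma_k)."""
    k = abs(k)
    return ((y[k:] - y.mean()) * (y[:n-k] - y.mean())).mean()

for tau in range(0, q + 3):
    lhs = sum(alpha[i]*alpha[j]*gamma(tau + j - i)
              for i in range(p + 1) for j in range(p + 1))
    rhs = sigma2 * sum(theta[j]*theta[j + tau] for j in range(q - tau + 1))
    print(f"tau={tau}: lhs={lhs:.4f}, rhs={rhs:.4f}")   # lhs ~ rhs, and rhs = 0 for tau > q
```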

The mapping between the autocovariances and the parameters of an ARMA model is not as rewarding as in a pure AR model: the equation above does not yield a system that can be solved easily for the parameters by the method of moments, because the parameters enter only through the products $\alpha_i\alpha_j$ and $\theta_j\theta_{j+\tau}$, so the system is nonlinear. The Yule-Walker equations, by contrast, are linear in the AR coefficients and straightforward to solve.

Although it is not straightforward, the method of moments can still be applied to an ARMA model by means of a two-step procedure: the first step uses the Yule-Walker equations and the second step is based on the equation given above (a sketch of the first step follows). If your question goes in this direction I could give you further details about it.
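The answer only names the first step, so the following is one common reading of it (my assumption, not spelled out in the source): for $\tau > q$ the right-hand side of the moment equation vanishes, and multiplying the model by $y_{t-\tau}$ gives the "extended" Yule-Walker equations $\gamma_\tau + \alpha_1\gamma_{\tau-1} + \alpha_2\gamma_{\tau-2} = 0$ for $\tau = q+1, \dots, q+p$, which are linear in the AR coefficients. A minimal Python sketch, reusing the hypothetical ARMA(2,2) from the snippet above:

```python
import numpy as np

# Simulate the same hypothetical ARMA(2,2):
#   y_t = 0.5*y_{t-1} - 0.25*y_{t-2} + e_t + 0.4*e_{t-1} - 0.3*e_{t-2}
# i.e. alpha_1 = -0.5, alpha_2 = 0.25 in the answer's sign convention.
rng = np.random.default_rng(1)
n = 500_000
e = rng.normal(size=n)
y = np.zeros(n)
for t in range(2, n):
    y[t] = 0.5*y[t-1] - 0.25*y[t-2] + e[t] + 0.4*e[t-1] - 0.3*e[t-2]

def gamma(k):
    """Sample autocovariance of order k."""
    k = abs(k)
    return ((y[k:] - y.mean()) * (y[:n-k] - y.mean())).mean()

# Extended Yule-Walker equations for p = q = 2, using tau = 3 and tau = 4:
#   gamma_3 + a1*gamma_2 + a2*gamma_1 = 0
#   gamma_4 + a1*gamma_3 + a2*gamma_2 = 0
G = np.array([[gamma(2), gamma(1)],
              [gamma(3), gamma(2)]])
b = -np.array([gamma(3), gamma(4)])
a1_hat, a2_hat = np.linalg.solve(G, b)
print(a1_hat, a2_hat)   # should be close to alpha_1 = -0.5, alpha_2 = 0.25
```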

Edit

The following is an extract from pp. 545-546 of D.S.G. Pollock (1999), A Handbook of Time Series Analysis, Signal Processing and Dynamics, Academic Press (notation changed: the $\theta$ here is $\mu$ in the original source):

1)

\begin{align} E(\psi_t\psi_{t-\tau}) &= E\left\{ \left( \sum_i \theta_i \epsilon_{t-i} \right) \left( \sum_j \theta_j \epsilon_{t-\tau-j} \right) \right\} \\ &= \sum_i \sum_j \theta_i \theta_j\, E(\epsilon_{t-i} \epsilon_{t-\tau-j}) \\ &= \sigma^2_\epsilon \sum_j \theta_j \theta_{j+\tau} \,, \end{align}

since $E(\epsilon_{t-i}\epsilon_{t-\tau-j}) = \sigma^2_\epsilon$ when $i = j+\tau$ and $0$ otherwise.

2)

\begin{align} E(\psi_t\psi_{t-\tau}) &= E\left\{ \left( \sum_i \alpha_i y_{t-i} \right) \left( \sum_j \alpha_j y_{t-\tau-j} \right) \right\} \\ &= \sum_i \sum_j \alpha_i \alpha_j\, E(y_{t-i} y_{t-\tau-j}) \\ &= \sum_i \sum_j \alpha_i \alpha_j \gamma_{\tau+j-i} \,, \end{align}

since $E(y_{t-i} y_{t-\tau-j}) = \gamma_{(t-i)-(t-\tau-j)} = \gamma_{\tau+j-i}$.

Putting (1) and (2) together:

$$ \sum_i\sum_j \alpha_i\alpha_j\gamma_{\tau+j-i} = \sigma^2_\epsilon \sum_j \theta_j \theta_{j+\tau} \,. $$
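To make this concrete for the ARMA(2,2) in the question, specialising the identity to $p = q = 2$ (with $\alpha_0 = \theta_0 = 1$ and $\gamma_{-k} = \gamma_k$) and collecting the coefficient of each autocovariance gives:

$$ (1+\alpha_1^2+\alpha_2^2)\,\gamma_\tau + \alpha_1(1+\alpha_2)\left(\gamma_{\tau-1}+\gamma_{\tau+1}\right) + \alpha_2\left(\gamma_{\tau-2}+\gamma_{\tau+2}\right) = \sigma^2_\epsilon \begin{cases} 1+\theta_1^2+\theta_2^2 & \mbox{if } \tau=0 \\ \theta_1(1+\theta_2) & \mbox{if } \tau=1 \\ \theta_2 & \mbox{if } \tau=2 \\ 0 & \mbox{if } \tau>2 \,. \end{cases} $$

For $\tau > 2$ the right-hand side vanishes, which is the ARMA counterpart of the homogeneous recursion $\rho_k = \alpha_1\rho_{k-1} + \alpha_2\rho_{k-2}$ in the question.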
