Solved – Sum of autocovariances for AR(p) model

autocorrelation, autoregressive, spectral analysis, time series

Suppose I have the following $AR(p)$ model.

$$X_t = \sum_{i=1}^{p} \phi_i X_{t-i} + \epsilon_t\,, $$

where $\epsilon_t$ has mean $0$ and variance $\sigma^2$. I am not interested in fitting this model, but rather in results for the following quantities.

It is my understanding from here that the spectral density at 0 (which is also $\lim_{n\to \infty} n \text{Var}(\bar{X}_n)$) for this process is,
$$f(0) = \dfrac{\sigma^2}{(1 - \sum_{i=1}^{p} \phi_i)^2} \,.$$

I also understand that the lag $k$ autocovariance $\gamma(k) = \text{E}[X_{t} X_{t-k}]$ is obtained by solving the Yule-Walker equations, and as such simple closed-form expressions generally do not exist.
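For concreteness, here is a minimal numerical sketch of that step, assuming hypothetical AR(2) coefficients $\phi = (0.5, 0.3)$ and $\sigma^2 = 1$; the linear system combines the variance equation with the Yule-Walker equations and the symmetry $\gamma(-j) = \gamma(j)$:

```python
import numpy as np

phi = np.array([0.5, 0.3])   # hypothetical AR(2) coefficients (stationary)
sigma2 = 1.0
p = len(phi)

# Unknowns: gamma(0), ..., gamma(p).
# Row 0 (variance equation): gamma(0) - sum_i phi_i gamma(i)     = sigma2
# Row k (Yule-Walker, k>=1): gamma(k) - sum_i phi_i gamma(|k-i|) = 0
A = np.zeros((p + 1, p + 1))
b = np.zeros(p + 1)
b[0] = sigma2
for k in range(p + 1):
    A[k, k] += 1.0
    for i in range(1, p + 1):
        A[k, abs(k - i)] -= phi[i - 1]

gamma = np.linalg.solve(A, b)
print(gamma)  # gamma(0), gamma(1), gamma(2); higher lags follow by the recursion
```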

But since $f(0) = \sum_{k=-\infty}^{\infty} \gamma(k)$ by definition, can someone direct me to the following two items:

  1. A direct proof that yields $$f(0) = \sum_{k=-\infty}^{\infty}
    \gamma(k) = \dfrac{\sigma^2}{(1 - \sum_{i=1}^{p} \phi_i)^2}\,,$$

    without going through the theory on spectral densities.

  2. Is there a closed form expression for
    $\sum_{k=0}^{\infty} k\, \gamma(k)$? Since Part 1 above
    has a closed form expression, my guess is there is a closed form
    expression here as well. Could someone lead me to it?


AR(1) Model: A closed-form expression is available for the AR(1) model.
$$Y_t = \phi Y_{t-1} + \epsilon_{t} \,. $$

The variance in the stationary process is known to be $\sigma^2/(1- \phi^2)$. The spectral density at 0 is
$$f(0) = \dfrac{\sigma^2}{(1 - \phi)^2}\,. $$

Next, we consider $\sum_{k=0}^{\infty}k \gamma(k)$.
\begin{align*}
\sum_{k=0}^{\infty}k \gamma(k) &:= \sum_{k = 1}^{\infty} k \text{Cov}(Y_1, Y_{1+k})\\
& = \sum_{k = 1}^{\infty} \dfrac{\sigma^2}{1-\phi^2}k \phi^k\\
& = \dfrac{\phi\sigma^2}{1-\phi^2} \sum_{k=1}^{\infty} k \phi^{k-1}\\
& = \dfrac{\phi\sigma^2}{(1-\phi^2)(1-\phi)^2}\,.
\end{align*}

Thus, we have a closed form expression.
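As a quick sanity check, both AR(1) results can be verified by truncating the sums numerically (a minimal sketch, assuming the hypothetical values $\phi = 0.6$ and $\sigma^2 = 1$):

```python
import numpy as np

phi, sigma2 = 0.6, 1.0       # hypothetical parameter values, |phi| < 1
k = np.arange(10000)
gamma = (sigma2 / (1 - phi**2)) * phi**k   # gamma(k) = phi^k * Var(Y_t)

# f(0) = sum_{k=-inf}^{inf} gamma(k) = gamma(0) + 2 * sum_{k>=1} gamma(k)
print(2 * gamma.sum() - gamma[0], sigma2 / (1 - phi)**2)           # both 6.25

# sum_{k>=1} k * gamma(k) against the closed form derived above
print((k * gamma).sum(), phi * sigma2 / ((1 - phi**2) * (1 - phi)**2))
```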


VAR(1) Model: We can do the same thing as above for the VAR(1) model. Suppose now $\Phi$ is a $p \times p$ matrix, $Y_t \in \mathbb{R}^p$, and $\epsilon_t \in \mathbb{R}^p$ with mean vector $0$ and variance-covariance matrix $\Sigma$. The VAR(1) model is
$$Y_t = \Phi Y_{t-1} + \epsilon_t\,. $$

Using the reference here, it is then known that the stationary process has variance matrix $V$ such that $\mathrm{vec}(V) = (I_{p^2} - \Phi \otimes \Phi)^{-1} \mathrm{vec}(\Sigma)$, where $\otimes$ is the Kronecker product.

Also, in the same reference, we find that $\gamma(s) = \Phi^s V$ and $\gamma(-s) = \gamma(s)^T$. So,
\begin{align*}
f(0) & = \sum_{s=-\infty}^{\infty} \gamma(s)\\
& = \sum_{s=0}^{\infty}\gamma(s) + \sum_{s=0}^{\infty}\gamma(s)^T - \gamma(0)\\
& = \sum_{s=0}^{\infty} \Phi^s V + \sum_{s=0}^{\infty} V(\Phi^{T})^s - V \\
& = (I -\Phi)^{-1}V + V(I - \Phi^T)^{-1} - V.
\end{align*}

Similarly,
\begin{align*}
\sum_{s=0}^{\infty} s\gamma(s) & = \sum_{s=1}^{\infty} s\Phi^s V \\
& = \left(\sum_{s=1}^{\infty} s\Phi^{s-1} \right) \Phi V \\
& = (I -\Phi)^{-2}\Phi V,
\end{align*}
where the $s=0$ term vanishes and the powers of $\Phi$ commute with $\Phi$.
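Both closed forms can be checked against truncated sums (a sketch with an arbitrary stable $\Phi$ and $\Sigma = I_3$; the specific numbers are illustrative):

```python
import numpy as np

Phi = np.array([[0.5, 0.1, 0.0],
                [0.2, 0.3, 0.1],
                [0.0, 0.1, 0.4]])   # hypothetical stable coefficient matrix
Sigma = np.eye(3)
p = Phi.shape[0]
I = np.eye(p)

# Stationary variance: vec(V) = (I_{p^2} - Phi kron Phi)^{-1} vec(Sigma)
vecV = np.linalg.solve(np.eye(p * p) - np.kron(Phi, Phi), Sigma.flatten("F"))
V = vecV.reshape((p, p), order="F")

Ainv = np.linalg.inv(I - Phi)
f0 = Ainv @ V + V @ Ainv.T - V          # closed form for f(0)
wsum = Ainv @ Ainv @ Phi @ V            # closed form for sum_{s>=1} s gamma(s)

# Truncated sums of gamma(s) = Phi^s V
f0_num, w_num, P = -V.copy(), np.zeros((p, p)), I.copy()
for s in range(500):
    g = P @ V                # gamma(s); gamma(-s) = gamma(s)^T
    f0_num += g + g.T        # the s = 0 term V is symmetric, so -V offsets it
    w_num += s * g
    P = P @ Phi

print(np.abs(f0 - f0_num).max(), np.abs(wsum - w_num).max())  # both ~ 0
```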

This again gives a closed-form solution. The same proof technique doesn't carry over directly to the AR(p), because we don't have the nice geometric-series expressions. But considering there should be a relation between a VAR(1) and an AR(p), can we use the results here to get an answer to my question (2)?
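At least numerically, that relation does seem to work: the standard companion-form embedding stacks $Y_t = (X_t, \ldots, X_{t-p+1})^T$ into a VAR(1) whose $\Sigma$ has $\sigma^2$ in the $(1,1)$ entry and zeros elsewhere, and since $\gamma_Y(s)_{11} = \gamma_X(s)$, the $(1,1)$ entry of $(I - \Phi)^{-2}\Phi V$ should be $\sum_{s\ge 1} s\,\gamma_X(s)$. A sketch with hypothetical AR(2) coefficients:

```python
import numpy as np

phi = np.array([0.5, 0.3])   # hypothetical AR(2) coefficients
sigma2 = 1.0
p = len(phi)

# Companion matrix: first row phi, identity on the subdiagonal
Phi = np.zeros((p, p))
Phi[0, :] = phi
Phi[1:, :-1] = np.eye(p - 1)
Sigma = np.zeros((p, p))
Sigma[0, 0] = sigma2         # only the X_t coordinate receives the innovation

vecV = np.linalg.solve(np.eye(p * p) - np.kron(Phi, Phi), Sigma.flatten("F"))
V = vecV.reshape((p, p), order="F")

Ainv = np.linalg.inv(np.eye(p) - Phi)
print((Ainv @ Ainv @ Phi @ V)[0, 0])   # sum_{s>=1} s * gamma_X(s)
```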

Best Answer

Regarding 1, the first equality is by definition of the spectral density. That's the only theory on spectral densities you need. The rest is just bilinearity of covariance, and definitions. Write your (causal) AR(p) process as $$ \phi(B) X_t = \epsilon_t $$ where $\phi(z) = 1 - \phi_1z - \cdots - \phi_p z^p$. This is the same as $$ X_t = \phi^{-1}(B)\epsilon_t = \psi(B)\epsilon_t = \sum_{i=0}^{\infty}\psi_i \epsilon_{t-i} $$ where $\psi(z) = 1 + \psi_1 z + \psi_2 z^2 + \cdots$ is the reciprocal of $\phi(z)$.

Then you just plug everything in:

\begin{align*}
\sum_{k=-\infty}^{\infty} \gamma(k)
&= \sum_{k=-\infty}^{\infty}\text{cov}\left(\sum_{i=0}^{\infty}\psi_i \epsilon_{t-i}, \sum_{j=0}^{\infty}\psi_j \epsilon_{t+k-j} \right)\\
&= \sum_{k=-\infty}^{\infty} \sum_{i=0}^{\infty}\sum_{j=0}^{\infty} \psi_i \psi_j \text{cov}(\epsilon_{t-i}, \epsilon_{t+k-j} ) \\
&= \sigma^2 \sum_{k=-\infty}^{\infty} \sum_{i=0}^{\infty}\sum_{j=0}^{\infty} \psi_i \psi_j \mathbb{1}(t-i = t+k-j)\\
&= \sigma^2 \sum_{k=-\infty}^{\infty} \sum_{i=0}^{\infty}\psi_i \psi_{k+i} \\
&= \sigma^2 \sum_{i=0}^{\infty}\sum_{k=-\infty}^{\infty} \psi_i \psi_{k+i} \\
&= \sigma^2 \sum_{i=0}^{\infty}\sum_{k=-i}^{\infty} \psi_i \psi_{k+i} \\
&= \sigma^2\left(\sum_{k=0}^{\infty}\psi_k \right)^2\\
&= \sigma^2 \left(\frac{1}{1 - \sum_{i=1}^p\phi_i } \right)^2,
\end{align*}
with the convention $\psi_j = 0$ for $j < 0$.
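If you want to convince yourself numerically, compute the $\psi_j$ from the recursion $\psi_0 = 1$, $\psi_j = \sum_{i=1}^{\min(j,p)} \phi_i \psi_{j-i}$, and compare $\sigma^2 \left(\sum_j \psi_j\right)^2$ with the closed form (a sketch with hypothetical AR(2) coefficients):

```python
import numpy as np

phi = np.array([0.5, 0.3])   # hypothetical stationary AR(2) coefficients
sigma2 = 2.0

# psi weights of the causal MA(infinity) representation
N = 4000
psi = np.zeros(N)
psi[0] = 1.0
for j in range(1, N):
    m = min(j, len(phi))
    psi[j] = phi[:m] @ psi[j - m:j][::-1]   # psi_j = sum_i phi_i psi_{j-i}

print(sigma2 * psi.sum()**2, sigma2 / (1 - phi.sum())**2)   # both 50.0
```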

Regarding the second question: not sure, let me think about it. Can you say a little more about why you're interested in it?

Edit: I see you're using the formula for a low-order polylogarithm. You can use that again, but it seems like it gets a little messier.

Recall the Yule-Walker equations: $$ \gamma(k) = \sum_{i=1}^p \phi_i \gamma(k-i), $$ which hold for all $k \ge 1$ (with $\gamma(-j) = \gamma(j)$). Then
\begin{align*}
\sum_{k=1}^{\infty} k \gamma(k)
&= \sum_{k=1}^{\infty} k \sum_{i=1}^p \phi_i \gamma(k-i) \\
&= \sum_{i=1}^p \sum_{k=1}^{\infty} k \phi_i \gamma(k-i) \\
&= \sum_{i=1}^p \phi_i \left( \sum_{k=1}^{\infty} k \gamma(k-i) \right) \\
&= \left(\sum_{i=1}^p \phi_i \sum_{k=1}^{i} k \gamma(k-i) \right) + \left( \sum_{i=1}^p \phi_i \sum_{k=i+1}^{\infty} k \gamma(k-i) \right) \\
&= \left(\sum_{i=1}^p \phi_i \sum_{k=1}^{i} k \gamma(k-i) \right) + \left( \sum_{i=1}^p \phi_i \sum_{s=1}^{\infty} (s+i) \gamma(s) \right) \\
&= \left(\sum_{i=1}^p \phi_i \sum_{k=1}^{i} k \gamma(k-i) \right) + \left( \sum_{i=1}^p \phi_i \left[ \sum_{s=1}^{\infty} s \gamma(s) + i \sum_{s=1}^{\infty} \gamma(s) \right] \right) \\
&= \left(\sum_{i=1}^p \phi_i \sum_{k=1}^{i} k \gamma(k-i) \right) + \left( \sum_{i=1}^p \phi_i \sum_{s=1}^{\infty} s \gamma(s) \right) + \left( \sum_{i=1}^p \phi_i i \sum_{s=1}^{\infty} \gamma(s) \right).
\end{align*}
All of these bits have closed-form expressions: the first term involves only $\gamma(0), \ldots, \gamma(p-1)$, the middle term is $\left(\sum_{i=1}^p \phi_i\right)$ times the very sum we are after, and the last uses $\sum_{s=1}^{\infty} \gamma(s) = \big(f(0) - \gamma(0)\big)/2$. So move the middle term to the left-hand side and divide by $1 - \sum_{i=1}^p \phi_i$.
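Here is a sketch of that rearrangement for a hypothetical AR(2), checked against a brute-force truncated sum (the $\gamma(k)$ are computed from the $\psi$ weights rather than any particular library routine):

```python
import numpy as np

phi = np.array([0.5, 0.3])   # hypothetical stationary AR(2) coefficients
sigma2 = 1.0
p = len(phi)

# gamma(k) = sigma2 * sum_i psi_i psi_{i+k}, via the psi-weight recursion
N = 4000
psi = np.zeros(N)
psi[0] = 1.0
for j in range(1, N):
    m = min(j, p)
    psi[j] = phi[:m] @ psi[j - m:j][::-1]
K = 2000
gamma = np.array([sigma2 * psi[: N - k] @ psi[k:] for k in range(K)])

brute = (np.arange(K) * gamma).sum()       # truncated sum_{k>=1} k gamma(k)

# Closed form: (1 - sum phi_i) * S = B + (sum_i i phi_i) * G, where
# G = sum_{s>=1} gamma(s) = (f(0) - gamma(0)) / 2 and B is the boundary term
G = (sigma2 / (1 - phi.sum())**2 - gamma[0]) / 2
B = sum(phi[i - 1] * sum(k * gamma[i - k] for k in range(1, i + 1))
        for i in range(1, p + 1))
S = (B + (np.arange(1, p + 1) * phi).sum() * G) / (1 - phi.sum())

print(brute, S)   # should agree to several decimal places
```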
