An example of non-ergodicity (Birkhoff–Khinchin theorem)

asymptotics, ergodic-theory, probability-theory, stochastic-processes

Let $\{Y_t\}_{t\in \mathbb Z}$ be a stationary process with mean zero. We know that the autocovariance function is given by:
$$\gamma_Y(h)= \operatorname{cov}(Y_0, Y_{h})= E[Y_0\,Y_h]$$
We say that $\{Y_t\}_{t\in \mathbb Z}$ is ergodic for the second moments if
\begin{equation}\label{ergo-1}\tag{E}
\hat{\gamma}_Y (h):= \frac{1}{T-h} \sum_{t=h+1}^T Y_t Y_{t-h} \overset{p}{\to} \gamma_Y(h),\quad \forall h >0 \quad (T \to \infty).
\end{equation}
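For concreteness, the estimator in (E) can be computed as follows (a minimal sketch; the function name `sample_autocov` is mine, and the series is assumed to have mean zero, as in the setup above):

```python
import numpy as np

def sample_autocov(y, h):
    """Sample autocovariance hat{gamma}_Y(h) of a mean-zero series y,
    using the 1/(T-h) normalization from equation (E)."""
    y = np.asarray(y, dtype=float)
    T = len(y)
    # average of Y_t * Y_{t-h} over t = h+1, ..., T
    return np.sum(y[h:] * y[:T - h]) / (T - h)
```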

According to slide 48 of these lecture notes, a sufficient condition for ergodicity for the second moments is:
\begin{equation}\label{ergo0}\tag{CE}
\sum_{h=0}^\infty |\gamma_Y(h)| < \infty
\end{equation}

Now, let $\{X_t\}_{t\in \mathbb Z}$ be a stationary process with mean zero satisfying (\ref{ergo0}):
$$\sum_{h=0}^\infty |\gamma_X(h)| < \infty$$
(consequently, $\{X_t\}_{t\in \mathbb Z}$ is ergodic for the second moments).
Moreover, let $N \sim \hbox{Poisson}(\lambda)$. Consider the same Poisson compounding process as in this question:
$$Y_t = \sum_{j=1}^N X_{t;j}, \quad (t \in \mathbb Z)$$
Using the result from this other question, we have:
\begin{equation}\label{ac}\tag{ACV}
\gamma_Y(h) = \lambda E[X_0X_h]= \lambda \gamma_X(h)
\end{equation}

So, by the sufficient condition for ergodicity for the second moments, we have:
$$\sum_{h=0}^\infty |\gamma_Y(h)| = \lambda \sum_{h=0}^\infty |\gamma_X(h)| < \infty$$
This would show that $\{Y_t\}_{t\in \mathbb Z}$ is ergodic. However, following the same technique as in the answer to the question mentioned above, we can show that:
$$ \hat{\gamma}_Y (h) \overset{p}{\to} N \gamma_X(h), \quad (T \to \infty)$$
So we would not have that $\{Y_t\}_{t\in \mathbb Z}$ is ergodic (see \ref{ergo-1} and \ref{ac}), which would be a contradiction.

What's happening?

Best Answer

The result in your lecture notes is not strictly true as stated; they're waving away some complexity that usually doesn't matter, but in this specific example it does.

The actual ergodic theorem states that a sample mean $\frac1n \sum_{i=1}^n X_i$ converges to its conditional expectation $\mathbb E[X_0 \mid \mathcal C]$, where $\mathcal C$ is the invariant $\sigma$-algebra. For most time series models, $\mathcal C$ is trivial, so this is equivalent to saying it converges to $\mathbb E[X_0]$.

However, for the model you've described, $\mathcal C = \sigma(N)$, so the correct version of (E) converges to $\mathbb E[Y_0Y_h\mid N]=N\gamma_X(h)$, which agrees with the answer to your first question.
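A quick simulation illustrates this limit (a sketch with assumptions not in the original posts: I take the copies $X_{t;j}$ to be i.i.d. standard normal white noise, so $\gamma_X(h)=\delta_{0,h}$ and the limit $N\gamma_X(0)$ is just the realized Poisson draw $N$):

```python
import numpy as np

rng = np.random.default_rng(1)
lam, T = 5.0, 200_000
N = rng.poisson(lam)             # one Poisson draw, fixed for the whole path
X = rng.standard_normal((N, T))  # N independent white-noise copies of the X-process
Y = X.sum(axis=0)                # Y_t = sum_{j=1}^N X_{t;j}

gamma_hat_0 = np.mean(Y * Y)     # sample estimate of gamma_Y(0)
# gamma_hat_0 is close to the realized N, not to lambda = E[N]
```

Rerunning with different seeds, the sample autocovariance tracks whatever $N$ happened to be drawn, so the limit is random rather than the constant $\lambda\gamma_X(0)$.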

For a simpler example of this effect, generate i.i.d. standard normals $X_n$ and flip a coin. If it's heads, set all $Y_n = X_n$; if it's tails, set all $Y_n = 0$. You can check that $\{Y_n\}$ is stationary with $\mathbb E[Y_0] = 0$ and $\mathbb E[Y_0Y_k] = \frac12 \delta_{0,k}$, but the limit of the sample ACF clearly depends on the coin toss.
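This is easy to see numerically (a sketch; the seed and sample size are arbitrary choices of mine):

```python
import numpy as np

rng = np.random.default_rng(2)
T = 100_000
heads = rng.integers(0, 2) == 1      # the single coin toss
X = rng.standard_normal(T)
Y = X if heads else np.zeros(T)      # whole path depends on one toss

# Unconditionally E[Y_0^2] = 1/2, but the sample ACF at lag 0
# converges to 1 on heads and 0 on tails -- it depends on the toss.
acf0 = np.mean(Y * Y)
```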