Show that a random sum of ergodic processes is not ergodic

asymptotics, ergodic-theory, probability-theory, self-learning, stochastic-processes

We say that a mean stationary stochastic process $(X_t)_{t \in \mathbb N}$, i.e. $E[X_t]= \mu_X$ for all $t$, is mean ergodic if
\begin{equation}\tag{I}
\frac 1 T \sum_{t=1}^T X_t \overset {pr} \longrightarrow \mu_X, \quad (T \to \infty)
\end{equation}
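For intuition, here is a minimal simulation sketch of (I); the stationary AR(1) model, the parameter values, and the name `ar1_path` are illustrative assumptions, not part of the question:

```python
import numpy as np

rng = np.random.default_rng(0)

def ar1_path(T, mu=2.0, phi=0.5, sigma=1.0):
    """Stationary AR(1): X_t - mu = phi * (X_{t-1} - mu) + sigma * eps_t."""
    x = np.empty(T)
    # start from the stationary distribution so the path is stationary from t = 0
    x[0] = mu + rng.normal(scale=sigma / np.sqrt(1 - phi**2))
    for t in range(1, T):
        x[t] = mu + phi * (x[t - 1] - mu) + sigma * rng.normal()
    return x

for T in (10, 100, 10_000):
    print(T, ar1_path(T).mean())  # time averages approach mu_X = 2.0
```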

It is straightforward to show that a finite sum of ergodic processes is ergodic: let $(X_{t;j})_{1\leq j \leq n}$ be a finite collection of ergodic processes and $Y_{t;n} = \sum_{j=1}^n X_{t;j}$. Then:
$$\frac 1 T \sum_{t=1}^T Y_{t;n} \overset {pr} \longrightarrow m_n, \quad (T \to \infty)$$
where $\mu_j= E[X_{t;j}]$ and $m_n = \sum_{j=1}^n \mu_j$.
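The underlying step is just exchanging the two finite sums and using the fact that convergence in probability is preserved under finite sums:
$$\frac 1 T \sum_{t=1}^T Y_{t;n} = \sum_{j=1}^n \left(\frac 1 T \sum_{t=1}^T X_{t;j}\right) \overset {pr} \longrightarrow \sum_{j=1}^n \mu_j = m_n.$$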

But what about a random sum?

More specifically, let $(X_t)_{t \in \mathbb N}$ be an ergodic process with $\mu_X = E[X_t]$ as in (I) above. Consider
$$Y_t = \sum_{j=1}^N X_{t;j}, \quad N\sim \hbox {Poisson} (\lambda)$$
where, for each $t$, the $X_{t;1}, X_{t;2},\dots, X_{t;j},\dots$ are i.i.d. copies of $X_{t}$, independent of $N$.

Note that this process is not the compound Poisson process; it is just a stationary process such that $Y_t$ is a compound Poisson random variable for every $t$.
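To make the construction concrete, here is a minimal sketch (assuming, purely for illustration, $X_{t;j} \sim N(\mu_X, 1)$ i.i.d.); note that $N$ is drawn once and then held fixed along the whole path:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_sum_path(T, lam=3.0, mu_x=1.0):
    """One path of Y_t = sum_{j=1}^N X_{t;j}: N ~ Poisson(lam) is drawn
    once per path; the X_{t;j} are i.i.d. N(mu_x, 1) over both t and j."""
    n = rng.poisson(lam)
    x = rng.normal(loc=mu_x, size=(T, n))  # row t holds X_{t;1}, ..., X_{t;N}
    return n, x.sum(axis=1)                # Y_1, ..., Y_T

n, y = random_sum_path(T=5)
print(n, y)  # each Y_t is a compound Poisson variable built from the same N
```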

It is straightforward to show that $(Y_t)_{t\in \mathbb N}$ is mean stationary:
$$E[Y_t]= \lambda \mu_X, \quad \forall\, t \in \mathbb N.$$
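This is the usual conditioning (Wald) computation; the independence of $N$ and the copies is what justifies the middle step:
$$E[Y_t] = E\big[E[Y_t \mid N]\big] = E[N \mu_X] = \lambda \mu_X.$$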

How to show that

\begin{equation}
\frac 1 T \sum_{t=1}^T Y_t \overset {pr} \longrightarrow \lambda \mu_X, \quad (T \to \infty) \quad ??
\end{equation}

My first attempt is to try to show that $P\left(\left| \frac 1 T \sum_{t=1}^T Y_t - \lambda \mu_X \right| > \epsilon \right) \to 0$, as $T \to \infty$, for all $\epsilon >0$. Note:
\begin{aligned}
P\left(\left| \frac 1 T \sum_{t=1}^T \sum_{j=1}^N X_{t;j} - \lambda \mu_X \right| > \epsilon \right) &= \sum_{n=1}^\infty P\left(\left| \frac 1 T \sum_{t=1}^T \sum_{j=1}^n X_{t;j} - \lambda \mu_X \right| > \epsilon \right)P(N=n)\\
&= \sum_{n=1}^\infty P\left(\left| \frac 1 T \sum_{t=1}^T \sum_{j=1}^n X_{t;j} - n \mu_X + n \mu_X - \lambda \mu_X \right| > \epsilon \right)P(N=n)
\end{aligned}

So I am trying to use the ergodicity of $\sum_{j=1}^n X_{t;j}$, i.e. $P\left(\left| \frac 1 T \sum_{t=1}^T \sum_{j=1}^n X_{t;j} - n \mu_X \right| > \epsilon \right) \longrightarrow 0$ as $T \to \infty$.

How to conclude? Is there another way to show this?

Best Answer

Actually, you can show that
$$\frac 1T\sum_{t=1}^T\sum_{j=1}^N X_{t;j} - N\mu_X \longrightarrow 0 \mbox{ in probability as } T\to\infty.$$
Indeed, proceeding similarly as in the opening post, for any fixed $R$,
$$\begin{aligned}
\mathbb P\left(\left| \frac 1 T \sum_{t=1}^T \sum_{j=1}^N X_{t;j} - N\mu_X \right| > \epsilon \right) &= \sum_{n=1}^\infty \mathbb P\left(\left| \frac 1 T \sum_{t=1}^T \sum_{j=1}^n X_{t;j} - n\mu_X \right| > \epsilon \right)\mathbb P(N=n)\\
&\leqslant \sum_{n=1}^R \mathbb P\left(\left| \frac 1 T \sum_{t=1}^T \sum_{j=1}^n X_{t;j} - n\mu_X \right| > \epsilon \right)+\mathbb P(N\geqslant R+1),
\end{aligned}$$
hence by mean ergodicity of each finite sum $\left(\sum_{j=1}^n X_{t;j}\right)_t$,
$$\limsup_{T\to\infty} \mathbb P\left(\left| \frac 1 T \sum_{t=1}^T \sum_{j=1}^N X_{t;j} - N\mu_X \right| > \epsilon \right) \leqslant \mathbb P(N\geqslant R+1),$$
and since $R$ is arbitrary, the conclusion follows.

In particular, the time average converges in probability to the random limit $N\mu_X$, so it cannot also converge to the constant $\lambda\mu_X$: unless $\mu_X = 0$ or $N$ is a.s. constant, $(Y_t)_{t\in\mathbb N}$ is not mean ergodic.
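A quick numerical illustration of this answer, under the same illustrative assumptions as the sketch above ($X_{t;j} \sim N(\mu_X, 1)$ with $\mu_X = 1$, $\lambda = 3$): across independent paths, the time average settles near the path's own $N\mu_X$ rather than near $\lambda\mu_X = 3$.

```python
import numpy as np

rng = np.random.default_rng(2)

def time_average(T=100_000, lam=3.0, mu_x=1.0):
    """Time average (1/T) * sum_t Y_t for one path of the random-sum process."""
    n = rng.poisson(lam)                               # N is drawn once per path
    y = rng.normal(loc=mu_x, size=(T, n)).sum(axis=1)  # Y_1, ..., Y_T
    return n, y.mean()

for _ in range(5):
    n, avg = time_average()
    print(f"N = {n}, time average = {avg:.3f}")  # avg is near N * mu_x, a random limit
```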