[Math] Ergodic theorem in Markov chains: for which probability $P$ is the convergence $P$-a.s.

markov-chains, probability, probability-theory

I am asking about a proof of the Ergodic theorem for Markov chains from page 7 here.

Recall that the Ergodic theorem for Markov chains says that if $\{X_n\}_{n\geq0}$ is an irreducible Markov chain, then for every state $i$: $$\lim_n \frac{V_i(n)}{n}=\frac{1}{m_i}\;\text{ a.s.},$$where $V_i(n)=\sum_{k=0}^{n-1}1_{\{X_k=i\}}$ is the number of visits to $i$ before time $n$, and $m_i=E[T_i|X_0=i]$, where $T_i=\inf\{n\geq 1: X_n=i\}$.
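(Not from the linked notes.) As a quick numerical sanity check of the statement, one can simulate a small, made-up irreducible chain and compare the empirical visit fraction $V_i(n)/n$ with $1/m_i$, which for a positive recurrent chain equals the stationary probability $\pi_i$:

```python
import random

# A made-up 3-state irreducible transition matrix (illustration only).
P = [[0.1, 0.6, 0.3],
     [0.4, 0.2, 0.4],
     [0.5, 0.3, 0.2]]

def visit_fraction(P, i, n, start=0, seed=0):
    """Empirical V_i(n)/n: fraction of steps 0..n-1 spent in state i."""
    rng = random.Random(seed)
    state, visits = start, 0
    for _ in range(n):
        visits += (state == i)
        state = rng.choices(range(len(P)), weights=P[state])[0]
    return visits / n

def stationary(P, iters=500):
    """Stationary distribution by power iteration; here pi_i = 1/m_i."""
    pi = [1.0 / len(P)] * len(P)
    for _ in range(iters):
        pi = [sum(pi[k] * P[k][j] for k in range(len(P)))
              for j in range(len(P))]
    return pi

pi = stationary(P)
frac = visit_fraction(P, i=1, n=200_000)
print(frac, pi[1])  # the two values should be close
```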

What I do not understand is under which probability measure that a.s. convergence holds. From the statement in the link, it seems to be under the probability of the underlying probability space. However, when the strong law of large numbers is applied in the recurrent case, it seems to be under $P_i=P|_{X_0=i}$.

The proof does say something about this (just before the paragraph beginning with "By Lemma 3.2$\ldots$"), but I would like more details.

Best Answer

Because the Markov chain is irreducible, the desired result holds under any given initial condition, and hence under any probability mass function for the initial condition.

Specifically, let $\{X_n\}_{n=0}^{\infty}$ be an irreducible discrete time Markov chain over a finite or countably infinite state space $S$. Fix states $i \in S$ and $j \in S$. Suppose we are given that $X_0=j$, that is, $P[X_0=j]=1$. We want to understand why the following holds: $$\lim_{n\rightarrow\infty} \frac{V_i(n)}{n} = \frac{1}{m_i} \quad \mbox{(with prob 1)} $$

By standard renewal theory, we know the result holds under the initial condition $X_0=i$ (i.e., the case $j=i$ is easy). Now suppose $j \neq i$:

Case 1: If the probability of eventually hitting state $i$, starting from state $j$, is 1, then the visits to $i$ form a delayed renewal process with an almost surely finite delay before the first visit to $i$, and the desired result holds.
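Case 1 can be seen numerically on a hypothetical 2-state chain (not from the notes): whichever state the chain starts in, after the a.s.-finite delay until the first visit to $i$ the ordinary renewal argument takes over, so the visit fraction has the same limit:

```python
import random

# Hypothetical 2-state irreducible chain (illustration only).
# Transitions: from 0 go to 1 w.p. 0.7; from 1 go to 0 w.p. 0.6.
# Stationary distribution: pi = (6/13, 7/13).
def visit_fraction(start, n, seed):
    """Fraction of steps spent in state 1, starting from `start`."""
    rng = random.Random(seed)
    state, visits = start, 0
    for _ in range(n):
        visits += (state == 1)
        if state == 0:
            state = 1 if rng.random() < 0.7 else 0
        else:
            state = 0 if rng.random() < 0.6 else 1
    return visits / n

f_from_i = visit_fraction(start=1, n=300_000, seed=0)  # ordinary renewal
f_from_j = visit_fraction(start=0, n=300_000, seed=1)  # delayed renewal
print(f_from_i, f_from_j)  # both near pi_1 = 7/13
```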

Case 2: If there is a positive probability of never visiting state $i$, given we start in $j$, then it must be that $m_i=\infty$ (since the chain is irreducible, there is a positive probability of leaving $i$, going to $j$, and then never returning). In this case, we visit state $i$ at most finitely often with probability 1. So $\lim_{n\rightarrow\infty} \frac{V_i(n)}{n}=0$ with prob 1, and the result again holds (where we interpret $\frac{1}{m_i}$ to be $0$ in the case $m_i=\infty$).
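Case 2 requires an infinite state space; a standard example (again not from the notes) is the right-drifting simple random walk on the integers, which is irreducible but transient, so $m_0=\infty$, the walk visits $0$ only finitely often, and $V_0(n)/n \to 0 = \frac{1}{m_0}$:

```python
import random

# Right-drifting walk on Z: step +1 w.p. 0.7, -1 w.p. 0.3 (illustration).
# Transient, so state 0 is visited only finitely often almost surely,
# and the visit fraction V_0(n)/n tends to 0.
rng = random.Random(42)
state, visits, n = 0, 0, 1_000_000
for _ in range(n):
    visits += (state == 0)
    state += 1 if rng.random() < 0.7 else -1
print(visits / n)  # very small
```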
