Prove that $\sum\pi_i = \sum\frac{1}{E_iT_i} = 1$ in an irreducible Markov chain with stationary distribution $\pi$

markov-chains, stationary-processes

In Durrett's book, Chapter 5, Theorem 4.6 states that if $p$ is irreducible and has a stationary distribution, then $\pi_i=\frac{1}{E_iT_i}$, where $T_i$ is the first time the Markov chain returns to state $i$, and $E_iT_i$ is the expected number of steps needed to return to state $i$ given that the initial state is $i$.

Also, the proof of Theorem 4.7 states that if $p$ is irreducible and $i$ is positive recurrent, then $\pi_j=\frac{\sum_{n=0}^{\infty}P_i(X_n=j, T_i \gt n)}{E_iT_i}$ defines a stationary distribution.

Does that mean $\frac{\sum_{n=0}^{\infty}P_i(X_n=j, T_i \gt n)}{E_iT_i} = \frac{1}{E_jT_j}$? How can we prove that?

And how can we prove that $\sum\frac{1}{E_jT_j} = 1$ in an irreducible Markov chain with stationary distribution $\pi$?

Best Answer

What text are you referring to, exactly? Probability: Theory and Examples Fifth Edition by Rick Durrett? The book proves these theorems in detail, so what part of the proof are you having trouble with?

The theorem you are asking about is numbered 5.5.11, not 4.7; perhaps you have an old edition of the text? The claim that if $p$ is irreducible and $i$ positive recurrent, then $$ \pi_j = \frac{\sum_{n=0}^\infty \mathbb P(X_n=j, T_i > n\mid X_0=i)}{\mathbb E[T_i\mid X_0=i]} $$ defines a stationary distribution is stated as part of Theorem 5.5.12.

As for the statement in the title of the question, if $\pi$ is a stationary distribution then by definition it must sum to one. The other equality follows from Theorem 5.5.7 in the book, which states:

Let $i$ be a recurrent state, and let $T=\inf\{n\geqslant 1: X_n=i\}$. Then $$ \mu_i(j) = \mathbb E\left[\sum_{n=0}^{T-1} \mathsf 1_{\{X_n=j\}}\right] = \sum_{n=0}^\infty \mathbb P(X_n=j,T>n\mid X_0=i) $$ defines a stationary measure.
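The measure $\mu_i$ above can be checked numerically. For a finite chain, the cycle-sum formula has a closed form: $\mu_i(i)=1$, and for $j\neq i$, summing over paths that avoid $i$ gives $\mu_i = P_{i,\cdot}(I-Q)^{-1}$ restricted to the other states, where $Q$ is $P$ with state $i$'s row and column deleted. A minimal sketch, using a hypothetical 3-state transition matrix (not an example from Durrett's text):

```python
import numpy as np

# Hypothetical irreducible 3-state transition matrix (assumption, for illustration).
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])
n, i = P.shape[0], 0

# mu_i(j) = expected visits to j before returning to i, starting from i.
# mu_i(i) = 1 (only the time-0 term survives); for j != i, sum the taboo
# probabilities P_i(X_n = j, T_i > n) over n, which telescopes to
# P[i, others] @ inv(I - Q), with Q = P restricted to states other than i.
others = [j for j in range(n) if j != i]
Q = P[np.ix_(others, others)]
mu = np.ones(n)
mu[others] = P[i, others] @ np.linalg.inv(np.eye(n - 1) - Q)

# Theorem 5.5.7: mu is a stationary measure, i.e. mu P = mu.
assert np.allclose(mu @ P, mu)
# By Fubini, the total mass of mu_i is E[T_i | X_0 = i].
print(mu.sum())
```

The two assertions at the end are exactly the content of the theorem: $\mu_i$ is invariant under $p$, and its total mass is the expected return time.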

In the text, the author uses Fubini's theorem to show the above. The equality $\pi_i = \frac1{\mathbb E[T_i\mid X_0=i]}$ follows from summing $\mu_i$ over all states and using Fubini's theorem to show that the sum is in fact equal to $\mathbb E[T_i\mid X_0=i]$. Finally, the result that an irreducible and positive recurrent Markov chain has a unique stationary measure up to constant multiples is used. This is Theorem 5.5.9 in the text, and the proof requires a bit of finesse. From this it follows that $\pi_i = \frac{\mu_i(i)}{\mathbb E[T_i\mid X_0=i]}$, and since by definition $\mu_i(i)=1$, we have $\pi_i = \frac{1}{\mathbb E[T_i\mid X_0=i]}$. Since this equality holds for each state, it remains true when summing over all states, and hence $$ \sum_i \pi_i = \sum_i \frac{1}{\mathbb E[T_i\mid X_0=i]} = 1, $$ as desired.
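The identity $\pi_i = 1/\mathbb E[T_i\mid X_0=i]$ (Kac's formula) can be verified directly for a finite chain: compute $\pi$ by solving $\pi P = \pi$, $\sum_i\pi_i=1$, and compute each expected return time from the standard hitting-time equations. A sketch with a hypothetical 3-state matrix (the matrix is an assumption for illustration, not from the book):

```python
import numpy as np

# Hypothetical irreducible transition matrix (assumption, for illustration).
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2]])
n = P.shape[0]

# Stationary distribution: solve pi P = pi together with sum(pi) = 1.
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.append(np.zeros(n), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

# Expected return time E_i[T_i]: first solve the expected hitting times of i
# from each j != i, h_j = 1 + sum_{k != i} p_{jk} h_k, i.e. (I - Q) h = 1,
# then condition on the first step: E_i[T_i] = 1 + sum_{j != i} p_{ij} h_j.
expected_return = np.zeros(n)
for i in range(n):
    others = [j for j in range(n) if j != i]
    Q = P[np.ix_(others, others)]
    h = np.linalg.solve(np.eye(n - 1) - Q, np.ones(n - 1))
    expected_return[i] = 1.0 + P[i, others] @ h

# Kac's formula: pi_i = 1 / E_i[T_i], so the reciprocals sum to 1.
assert np.allclose(pi, 1.0 / expected_return)
assert np.isclose((1.0 / expected_return).sum(), 1.0)
```

This also answers the title question numerically: since $\pi_i = 1/\mathbb E_i[T_i]$ state by state, the reciprocal return times must sum to $1$ because $\pi$ does.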