[Math] Why can a Markov chain with two states and no self-loops have a stationary distribution

markov-chains, stationary-processes

Why can a Markov chain with two states and no self-loops have a stationary distribution?

Let's consider a Markov chain with state space $\{A, B\}$ and the transition matrix:
$P = \begin{array}{c|cc}
 & A & B\\ \hline
A & 0 & 1\\
B & 1 & 0
\end{array}$

Clearly the period of this chain is $2$ and hence it's not aperiodic.
Moreover, we see that

$P^{2n} = \begin{array}{c|cc}
 & A & B\\ \hline
A & 1 & 0\\
B & 0 & 1
\end{array}$

and

$P^{2n+1} = P$
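
For a quick sanity check of these powers, here is a minimal NumPy sketch (ordering the states as $(A, B)$ in the matrix is an assumption of the sketch, not part of the question):

```python
import numpy as np

# Transition matrix with state order (A, B): A -> B and B -> A with probability 1
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# Even powers give the identity, odd powers give P back, so the chain has period 2
print(np.linalg.matrix_power(P, 2))   # [[1. 0.] [0. 1.]]
print(np.linalg.matrix_power(P, 3))   # [[0. 1.] [1. 0.]]
```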

But letting $\pi = [1/2,\ 1/2]^T$, we see that $\pi^T = \pi^T P$.
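
Writing out the multiplication explicitly:

$$\pi^T P = \left[\tfrac12,\ \tfrac12\right]\begin{pmatrix}0 & 1\\ 1 & 0\end{pmatrix} = \left[\tfrac12\cdot 0 + \tfrac12\cdot 1,\ \ \tfrac12\cdot 1 + \tfrac12\cdot 0\right] = \left[\tfrac12,\ \tfrac12\right] = \pi^T.$$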

Therefore this chain has a steady-state probability distribution $\pi$, which contradicts the claim that "a steady-state probability distribution exists iff the underlying Markov chain is irreducible and ergodic".

Where am I making the mistake? Please explain.

Best Answer

What you're confusing is the existence of an invariant measure and convergence to the invariant distribution.

A lot of Markov chains have invariant measures (in fact, on a finite state space you always have at least one; and in general, for an irreducible chain, a null recurrent chain has an invariant measure while a positive recurrent chain has an invariant distribution) but don't converge to the invariant measure from any initial distribution other than the invariant one. In the example you give, the chain is irreducible but not aperiodic, so you don't have convergence to the invariant distribution (but it does have an invariant distribution).
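
Here is a short numerical sketch of that distinction (a minimal NumPy illustration, not part of the original answer): the uniform distribution is invariant, but an arbitrary starting distribution keeps oscillating instead of converging to it.

```python
import numpy as np

P = np.array([[0.0, 1.0],
              [1.0, 0.0]])

pi = np.array([0.5, 0.5])          # invariant distribution: pi P = pi
print(pi @ P)                      # [0.5 0.5] -- stays fixed under P

mu = np.array([1.0, 0.0])          # start in state A with probability 1
for n in range(4):
    print(n, mu)                   # alternates between [1, 0] and [0, 1]; no convergence
    mu = mu @ P
```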

I'd recommend reading Ch. 1 of Norris' Markov Chains for more details.
