I understand that
- if all states of an irreducible Markov chain are non-null recurrent, then the MC has a unique stationary distribution;
- if an irreducible Markov chain is finite, then all of its states are non-null recurrent.
Therefore, an irreducible finite Markov chain with transition probabilities $P$ has a unique stationary distribution, let's call it $\pi$.
What are the conditions under which $\pi$ is also a limiting distribution?
Is it true that if there exists a distribution $x$ such that the sequence of distributions $x, xP, xP^2, xP^3, \ldots$ does not converge to $\pi$, then the chain must be periodic?
Another way of asking this is: If $\pi$ is a stationary distribution but not a limiting distribution, we know that some initial distributions will converge to $\pi$, but what will the others do? Do they necessarily converge to a "periodic stationary distribution", can they just bounce around crazily,…?
Best Answer
Your statements are slightly confusing, since a transient chain could be considered "non-null recurrent" (after all, it is not null recurrent). So you should replace your statements with "positive recurrent":
- if all states of an irreducible Markov chain are positive recurrent, then the MC has a unique stationary distribution;
- if an irreducible Markov chain is finite, then all of its states are positive recurrent.
You can also say that if a Markov chain is irreducible and positive recurrent, then the (unique) stationary distribution has strictly positive components.
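As a sketch of these facts in Python/NumPy (the transition matrix `P` below is a made-up irreducible example, not from the question), one can compute the unique stationary distribution by solving $\pi P = \pi$ together with $\sum_i \pi_i = 1$, and check that its components are strictly positive:

```python
import numpy as np

# A hypothetical irreducible 3-state transition matrix (rows sum to 1).
P = np.array([[0.5, 0.3, 0.2],
              [0.1, 0.6, 0.3],
              [0.4, 0.2, 0.4]])

# Solve pi P = pi together with sum(pi) = 1, i.e. the overdetermined
# (but consistent) system stacking (P^T - I) pi = 0 and 1^T pi = 1.
n = P.shape[0]
A = np.vstack([P.T - np.eye(n), np.ones(n)])
b = np.append(np.zeros(n), 1.0)
pi, *_ = np.linalg.lstsq(A, b, rcond=None)

print(pi)                        # the unique stationary distribution
print(np.allclose(pi @ P, pi))   # True: pi is stationary
print(np.all(pi > 0))            # True: strictly positive components
```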
Every finite-state irreducible Markov chain $\{M(t)\}_{t=0}^{\infty}$ has a unique stationary distribution $\pi =(\pi_i)_{i \in S}$ (where $S$ denotes the finite state space). When you simulate the chain, the sample-path fractions of time spent in each state converge to this distribution with probability 1: $$ \lim_{T\rightarrow\infty} \frac{1}{T}\sum_{t=0}^{T-1} 1\{M(t)=i\} = \pi_i \quad, \forall i \in S \quad \mbox{(with prob 1)}$$ regardless of the initial state $M(0)$. Taking expectations of both sides and using the bounded convergence theorem together with $E[1\{M(t)=i\}]=P[M(t)=i]$, we also get: $$ \lim_{T\rightarrow\infty} \frac{1}{T}\sum_{t=0}^{T-1} P[M(t)=i] = \pi_i \quad, \forall i \in S$$
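The sample-path convergence above is easy to see in a short simulation. Below is a Python/NumPy sketch with a hypothetical 2-state chain whose stationary distribution works out to $\pi = (1/3, 2/3)$:

```python
import numpy as np

rng = np.random.default_rng(0)

# A hypothetical irreducible 2-state chain; solving pi P = pi with
# sum(pi) = 1 gives pi = (1/3, 2/3).
P = np.array([[0.0, 1.0],
              [0.5, 0.5]])

T = 50_000
state = 0                 # initial state M(0); the limit does not depend on it
counts = np.zeros(2)
for _ in range(T):
    counts[state] += 1
    state = rng.choice(2, p=P[state])

print(counts / T)         # approximately [1/3, 2/3]
```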
If the chain is finite state, irreducible, and also aperiodic, you can further say $$ \lim_{t\rightarrow\infty} P[M(t)=i|M(0)=j] = \pi_i \quad, \forall i \in S$$ regardless of the initial state $j \in S$. So if the chain is finite state and irreducible but the limiting probabilities do not converge, then the chain cannot be aperiodic; it must be periodic.
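Numerically, this convergence shows up in the powers of $P$: every row of $P^t$ approaches $\pi$. A sketch using the same hypothetical 2-state aperiodic chain (with $\pi = (1/3, 2/3)$):

```python
import numpy as np

# Hypothetical finite, irreducible, aperiodic 2-state chain.
P = np.array([[0.0, 1.0],
              [0.5, 0.5]])

Pt = np.linalg.matrix_power(P, 50)
print(Pt)
# Row j of P^t holds P[M(t)=i | M(0)=j]; both rows approach
# pi = (1/3, 2/3), so the limit is independent of the initial state j.
```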
If $M(t)$ is finite state, irreducible, and periodic with period $d>1$, then the limiting probabilities cannot converge (assuming we start in a particular state with probability 1). This is because: \begin{align*} \lim_{k\rightarrow\infty} P[M(kd)=i|M(0)=i] &> 0 \\ P[M(kd+1)=i|M(0)=i] &= 0 \quad \forall k \end{align*} The first limit exists and is positive because the Markov chain $\{Z(k)\}_{k=0}^{\infty}$ defined by $Z(k)=M(kd)$ is irreducible and aperiodic (over an appropriately reduced state space), so all states it can reach have positive steady-state values; the second probability is zero for every $k$ because returns to state $i$ can only occur at multiples of $d$.
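The subsampling argument can be checked on a toy example. Below is a hypothetical 3-state chain with period $d=2$ (states $\{0,2\}$ are visited at even times, state $1$ at odd times); the chain $Z(k)=M(2k)$, whose transition matrix is $P^2$, is aperiodic on the even-time states:

```python
import numpy as np

# Hypothetical 3-state chain with period d = 2.
P = np.array([[0.0, 1.0, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 1.0, 0.0]])

P2 = np.linalg.matrix_power(P, 2)   # transition matrix of Z(k) = M(2k)
print(P2)
# Restricted to the even-time states {0, 2}, Z(k) is irreducible and
# aperiodic, so P[M(2k)=0 | M(0)=0] stays positive, while
# P[M(2k+1)=0 | M(0)=0] = 0 for every k.
print(np.linalg.matrix_power(P, 2)[0, 0])   # positive (even step)
print(np.linalg.matrix_power(P, 3)[0, 0])   # 0.0 (odd step)
```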
With this reasoning, it can be shown that $P[M(t)=i|M(0)=j]$ converges (as $t\rightarrow\infty$) to a periodic function of $t$ with period $d$. The particular $d$-periodic function it converges to depends on the initial state $j$.
This should be pretty clear if you do a few Matlab examples: plot $\vec{\pi}(t) = \vec{\pi}(0)P^t$ versus $t \in \{0, 1, 2, \ldots\}$ for some examples with $\vec{\pi}(0)=[1, 0, 0, \ldots]$ or $\vec{\pi}(0)=[0, 1, 0, 0, \ldots]$.
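For instance, here is that experiment sketched in Python/NumPy rather than Matlab, using a hypothetical period-2 chain (deterministic alternation between two states, with stationary distribution $\pi = (1/2, 1/2)$):

```python
import numpy as np

# Hypothetical chain with period d = 2: deterministic alternation.
P = np.array([[0.0, 1.0],
              [1.0, 0.0]])
pi0 = np.array([1.0, 0.0])   # start in state 0 with probability 1

for t in range(6):
    print(t, pi0 @ np.linalg.matrix_power(P, t))
# The distribution alternates [1, 0], [0, 1], [1, 0], ... and never
# converges, even though pi = (1/2, 1/2) is stationary.
```

Starting from $\vec{\pi}(0)=[1/2, 1/2]$ instead, the sequence is constant at $\pi$: which $d$-periodic limit you see depends on the initial distribution, as described above.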