[Math] Designing a Markov chain given its steady state probabilities

markov chains, probability, steady state

As explained in https://en.wikipedia.org/wiki/Markov_chain, for a three-state Markov chain with the transition matrix

$P = \begin{bmatrix}
0.9 & 0.075 & 0.025 \\
0.15 & 0.8 & 0.05 \\
0.25 & 0.25 & 0.5
\end{bmatrix},$

the steady state probabilities can be found from

$\lim_{N\to \infty } \, P^N=
\begin{bmatrix}
0.625 & 0.3125 & 0.0625 \\
0.625 & 0.3125 & 0.0625 \\
0.625 & 0.3125 & 0.0625 \\
\end{bmatrix}.$
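This is easy to check numerically; for instance, a minimal NumPy sketch:

```python
import numpy as np

# Transition matrix from the example above
P = np.array([[0.9,  0.075, 0.025],
              [0.15, 0.8,   0.05],
              [0.25, 0.25,  0.5]])

# For large N, every row of P^N approaches the steady state
# distribution (0.625, 0.3125, 0.0625)
print(np.linalg.matrix_power(P, 100))
```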

Now my question is: how can I find a valid transition matrix $P$ for any other steady state probabilities, say for example $[0.3, 0.5, 0.2]$?
Preferably with $P$ containing only elements that are easy to work with by hand.

Best Answer

Of course, there are many possible Markov chains we can design, but here is an approach that generalizes well, is easy to construct, and gives a Markov chain that's easy to solve.

For the specific example where we want to get the limiting distribution $$ (\pi_1, \pi_2, \pi_3) = (0.3, 0.5, 0.2), $$ take the transition matrix $$ P = \begin{bmatrix} 0.5 & 0.5 & 0 \\ 0.3 & 0.5 & 0.2 \\ 0 & 0.5 & 0.5 \end{bmatrix}. $$

To solve for the stationary distribution here, note that we can only get from state $1$ to states $\{2,3\}$ via the $1 \to 2$ transition, and we can only get back from states $\{2,3\}$ to state $1$ via the $2 \to 1$ transition. In any infinite sample from this Markov chain, these transitions must alternate, and therefore their limiting rates must be equal. This gives us the equation $$ 0.5 \pi_1 = 0.3 \pi_2. $$ By the same token, the $2 \to 3$ and $3 \to 2$ transitions must alternate, giving us the equation $$ 0.2 \pi_2 = 0.5 \pi_3. $$

These equations, together with $\pi_1 + \pi_2 + \pi_3 = 1$, uniquely determine the limiting distribution; also, it is easy to check that the desired vector $(0.3, 0.5, 0.2)$ satisfies them, since $$ 0.5 \cdot 0.3 = 0.3 \cdot 0.5 \qquad \text{and} \qquad 0.2 \cdot 0.5 = 0.5 \cdot 0.2. $$

To generalize this construction to get an $n$-state Markov chain with limiting distribution $(\pi_1, \pi_2, \dots, \pi_n)$, simply set

  • $P_{i,i+1} = \pi_{i+1}$ for $i = 1, 2, \dots, n-1$;
  • $P_{i,i-1} = \pi_{i-1}$ for $i = 2, 3, \dots, n$;
  • Put all remaining probability for each state in $P_{i,i}$, i.e. $P_{i,i} = 1 - P_{i,i-1} - P_{i,i+1}$ (reading nonexistent entries as $0$), leaving all other transitions at $0$.

In this Markov chain, you can only go from state $i$ to the "adjacent" states $i\pm1$, or stay at $i$. By the argument we've already seen, the limiting rates of the transitions $i \to i+1$ and $i+1 \to i$ must be equal, giving us the equations $$ \pi_i P_{i,i+1} = \pi_{i+1} P_{i+1,i} $$ for each $i = 1, 2, \dots, n-1$. Together with the equation $\pi_1 + \pi_2 + \dots + \pi_n = 1$, these equations uniquely determine the limiting distribution, and it's easy to check that the limiting distribution we want satisfies them.
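If you want to experiment with this construction, here is a minimal NumPy sketch (the helper name `tridiagonal_chain` is my own) that builds the matrix for a given target distribution and checks stationarity:

```python
import numpy as np

def tridiagonal_chain(pi):
    """Build the chain described above (0-indexed): P[i, i+1] = pi[i+1],
    P[i, i-1] = pi[i-1], with the leftover probability on P[i, i]."""
    n = len(pi)
    P = np.zeros((n, n))
    for i in range(n):
        if i + 1 < n:
            P[i, i + 1] = pi[i + 1]
        if i > 0:
            P[i, i - 1] = pi[i - 1]
        P[i, i] = 1.0 - P[i].sum()  # remaining probability: stay put
    return P

pi = np.array([0.3, 0.5, 0.2])
P = tridiagonal_chain(pi)
print(P)       # reproduces the 3x3 matrix above
print(pi @ P)  # equals pi, so pi is stationary
```

Note that the diagonal entries are automatically nonnegative: since the $\pi_i$ are nonnegative and sum to $1$, we always have $\pi_{i-1} + \pi_{i+1} \le 1$.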

Some more general comments:

  • These simplified equations for solving for the limiting distribution are known as the time-reversibility (or detailed balance) equations. For a general Markov chain, the system of equations $\pi_i P_{i,j} = \pi_j P_{j,i}$ (for all $i,j$) won't necessarily have a solution, but if it does, that solution is always a stationary distribution (a small checker is sketched after this list).
  • This assumes that there is no state $i$ for which we want $\pi_i = 0$. If we do want that, then leave $i$ out of the chain at first, and later add it back in with a probability-$1$ transition to some other state (it doesn't matter which).
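To make the first comment concrete, here is a small sketch of a detailed-balance checker (the helper `is_reversible` is my own name, not a standard API); it uses the fact that detailed balance says the steady-state probability flow between every pair of states is symmetric:

```python
import numpy as np

def is_reversible(pi, P, tol=1e-12):
    # Probability flow from i to j in steady state: pi_i * P[i, j].
    # Detailed balance holds iff this flow matrix is symmetric.
    flows = pi[:, None] * P
    return np.allclose(flows, flows.T, atol=tol)

# The tridiagonal chain above satisfies detailed balance:
pi = np.array([0.3, 0.5, 0.2])
P = np.array([[0.5, 0.5, 0.0],
              [0.3, 0.5, 0.2],
              [0.0, 0.5, 0.5]])
print(is_reversible(pi, P))  # True
```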