As explained in https://en.wikipedia.org/wiki/Markov_chain, for a three-state Markov chain with the transition matrix given as
$P = \begin{bmatrix}
0.9 & 0.075 & 0.025 \\
0.15 & 0.8 & 0.05 \\
0.25 & 0.25 & 0.5
\end{bmatrix},$
the steady state probabilities can be found from
$\lim_{N\to \infty } \, P^N=
\begin{bmatrix}
0.625 & 0.3125 & 0.0625 \\
0.625 & 0.3125 & 0.0625 \\
0.625 & 0.3125 & 0.0625 \\
\end{bmatrix}.$
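This limit is easy to verify numerically; here is a quick sanity check in plain Python (no libraries assumed) that raises $P$ to a high power by repeated multiplication:

```python
# Multiply P by itself repeatedly; P^N converges to a matrix whose rows
# all equal the steady-state distribution.
def matmul(A, B):
    """Multiply two matrices given as lists of rows."""
    return [[sum(a * b for a, b in zip(row, col))
             for col in zip(*B)] for row in A]

P = [[0.9,  0.075, 0.025],
     [0.15, 0.8,   0.05],
     [0.25, 0.25,  0.5]]

power = P
for _ in range(99):          # compute P^100
    power = matmul(power, P)

for row in power:
    print([round(x, 4) for x in row])
# each row prints as [0.625, 0.3125, 0.0625]
```

The second-largest eigenvalue of this $P$ is well below $1$ in absolute value, so $P^{100}$ already agrees with the limit to far more than four decimal places.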
Now my question is: how can I find a valid transition matrix $P$ for any other steady-state probabilities, say for example $[0.3, 0.5, 0.2]$?
Preferably with P containing only elements that are nice to work with by hand.
Best Answer
Of course, there are many possible Markov chains we can design, but here is an approach that generalizes well, is easy to construct, and gives a Markov chain that's easy to solve.
For the specific example where we want the limiting distribution $$ (\pi_1, \pi_2, \pi_3) = (0.3, 0.5, 0.2), $$ take the transition matrix $$ P = \begin{bmatrix} 0.5 & 0.5 & 0 \\ 0.3 & 0.5 & 0.2 \\ 0 & 0.5 & 0.5 \end{bmatrix}. $$ To solve for the stationary distribution here, note that we can only get from state $1$ to states $\{2,3\}$ via the $1 \to 2$ transition, and we can only get back from states $\{2,3\}$ to state $1$ via the $2 \to 1$ transition. In any infinite sample from this Markov chain, these transitions must alternate, and therefore their limiting rates must be equal. This gives us the equation $$ 0.5 \pi_1 = 0.3 \pi_2. $$ By the same token, the $2 \to 3$ and $3 \to 2$ transitions must alternate, giving us the equation $$ 0.2 \pi_2 = 0.5 \pi_3. $$ These equations, together with $\pi_1 + \pi_2 + \pi_3 = 1$, uniquely determine the limiting distribution; it is also easy to check that the desired vector $(0.3, 0.5, 0.2)$ satisfies them, since $$ 0.5 \cdot 0.3 = 0.3 \cdot 0.5 \qquad \text{and} \qquad 0.2 \cdot 0.5 = 0.5 \cdot 0.2. $$ To generalize this construction to get an $n$-state Markov chain with limiting distribution $(\pi_1, \pi_2, \dots, \pi_n)$, simply set $$ P_{i,i+1} = \pi_{i+1}, \qquad P_{i,i-1} = \pi_{i-1}, \qquad P_{i,i} = 1 - \pi_{i-1} - \pi_{i+1} $$ (with the convention $\pi_0 = \pi_{n+1} = 0$), and all other entries equal to $0$.
In this Markov chain, you can only go from state $i$ to the "adjacent" states $i\pm1$, or stay at $i$. By the argument we've already seen, the limiting rate of the transitions $i \to i+1$ and $i+1 \to i$ must be equal, giving us the equations $$ \pi_i P_{i,i+1} = \pi_{i+1} P_{i+1,i} $$ for each $i = 1, 2, \dots, n-1$. Together with the equation $\pi_1 + \pi_2 + \dots + \pi_n = 1$, these equations uniquely determine the limiting distribution, and it's easy to check that the limiting distribution we want satisfies them.
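The general construction is short enough to code up directly; here is a sketch in plain Python (the helper name `make_chain` is mine) that builds the tridiagonal chain for a given distribution and checks stationarity:

```python
def make_chain(pi):
    """Build a tridiagonal transition matrix whose stationary
    distribution is pi: P[i][i+1] = pi[i+1], P[i][i-1] = pi[i-1],
    and the diagonal absorbs whatever is left so each row sums to 1.
    Assumes the entries of pi are positive and sum to 1."""
    n = len(pi)
    P = [[0.0] * n for _ in range(n)]
    for i in range(n):
        if i + 1 < n:
            P[i][i + 1] = pi[i + 1]
        if i > 0:
            P[i][i - 1] = pi[i - 1]
        P[i][i] = 1.0 - sum(P[i])   # off-diagonal mass < 1, so this is >= 0
    return P

pi = [0.3, 0.5, 0.2]
P = make_chain(pi)
print(P)   # rows: [0.5, 0.5, 0.0], [0.3, 0.5, 0.2], [0.0, 0.5, 0.5]

# Check stationarity: the vector pi P should reproduce pi.
piP = [sum(pi[j] * P[j][i] for j in range(len(pi))) for i in range(len(pi))]
print(piP)  # approximately [0.3, 0.5, 0.2]
```

Note that the construction always yields a valid stochastic matrix: the two off-diagonal entries in row $i$ sum to $\pi_{i-1} + \pi_{i+1} \le 1 - \pi_i < 1$, so the diagonal entry is strictly positive.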
Some more general comments: