[Math] Finding the stationary distribution for an absorbing Markov Chain

eigenvalues-eigenvectors · markov-chains · stationary-processes

I have an absorbing Markov chain with 5 states, which can be envisioned as 5 nodes in a straight line. The leftmost and rightmost nodes are the absorbing states. Everything starts at the middle node and moves to the left with probability $p$ and to the right with probability $q = 1-p$.

I am trying to find the long-run stationary distribution (as Did pointed out, I mean the limit distribution), and just by inspection I think that the answer is $$x_1 = \sum_{i=0}^{\infty} p^2(2pq)^i,$$
$$x_5 = 1-x_1,$$ but that is from looking at the construction of the chain itself rather than solving the transition matrix.
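A quick numeric check of the inspection argument (a sketch with an assumed value $p = 0.3$; any $0 < p < 1$ behaves the same) confirms that the geometric series sums to $p^2/(1-2pq)$:

```python
from math import isclose

p = 0.3  # assumed illustrative value
q = 1 - p

# Partial sum of x_1 = sum_{i>=0} p^2 (2pq)^i; 2pq < 1, so it converges fast.
series = sum(p**2 * (2 * p * q) ** i for i in range(200))

# Closed form of the geometric series.
closed_form = p**2 / (1 - 2 * p * q)

print(series, closed_form)
assert isclose(series, closed_form)
```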

To find the stationary distribution of a Markov chain, I know that I am supposed to solve $A\vec{x} = \vec{x}$, where $A$ is the transition matrix.

$A$, in my case, is

$$
\left( \begin{array}{ccccc}
1 & p & 0 & 0 & 0 \\
0 & 0 & p & 0 & 0 \\
0 & q & 0 & p & 0 \\
0 & 0 & q & 0 & 0 \\
0 & 0 & 0 & q & 1 \end{array} \right)
$$

where $A_{ij}$ is the probability of moving from state $j$ to state $i$.
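As a sanity check on this column convention (a sketch with an assumed $p = 0.3$), each column of $A$ should sum to 1, and one step from the middle state should split its mass between states 2 and 4:

```python
import numpy as np

p = 0.3  # assumed illustrative value
q = 1 - p

# A[i][j] = probability of moving from state j to state i (column convention).
A = np.array([
    [1, p, 0, 0, 0],
    [0, 0, p, 0, 0],
    [0, q, 0, p, 0],
    [0, 0, q, 0, 0],
    [0, 0, 0, q, 1],
], dtype=float)

# Column-stochastic: every column sums to 1.
assert np.allclose(A.sum(axis=0), 1.0)

# One step starting from the middle state (state 3).
e3 = np.array([0, 0, 1, 0, 0], dtype=float)
print(A @ e3)  # mass p on state 2, mass q on state 4
```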

My first thought was to see if this matrix has an eigenvalue of 1. I am not sure if I did everything properly.

$\det(A-\lambda I) = 0$

Expanding along the first column,

$$
\det(A-\lambda I) = (1-\lambda)\det
\left( \begin{array}{cccc}
-\lambda & p & 0 & 0 \\
q & -\lambda & p & 0 \\
0 & q & -\lambda & 0 \\
0 & 0 & q & 1-\lambda \end{array} \right)
$$

and then along the last column,

$$
= (1-\lambda)^2\det
\left( \begin{array}{ccc}
-\lambda & p & 0 \\
q & -\lambda & p \\
0 & q & -\lambda \end{array} \right)
$$

$$ 0 = (1-\lambda)^2\left[-\lambda(\lambda^2-pq)-p(-q\lambda) \right] = -(1-\lambda)^2\,\lambda(\lambda^2-2pq)$$

If $\lambda = 1$, then it does equal 0.
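Numerically (a sketch, again with an assumed $p = 0.3$), the spectrum consists of the eigenvalue $1$ with multiplicity 2, together with $0$ and $\pm\sqrt{2pq}$, consistent with $\lambda = 1$ making the determinant vanish:

```python
import numpy as np

p = 0.3  # assumed illustrative value
q = 1 - p
A = np.array([
    [1, p, 0, 0, 0],
    [0, 0, p, 0, 0],
    [0, q, 0, p, 0],
    [0, 0, q, 0, 0],
    [0, 0, 0, q, 1],
], dtype=float)

# Eigenvalues should be 1 (twice), 0, and +/- sqrt(2pq).
eigenvalues = np.linalg.eigvals(A)
print(np.round(np.sort(eigenvalues.real), 6))
assert np.isclose(eigenvalues.real, 1.0, atol=1e-6).sum() == 2
```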

Now I need to solve $(A-\lambda I)\vec{x} = \vec{0}$.

With $\lambda = 1$, my $A-\lambda I$ is

$$
\left( \begin{array}{ccccc}
0 & p & 0 & 0 & 0 \\
0 & -1 & p & 0 & 0 \\
0 & q & -1 & p & 0 \\
0 & 0 & q & -1 & 0 \\
0 & 0 & 0 & q & 0 \end{array} \right)
$$

But the only solution I found was the trivial solution. Please help me find where I messed up.
Also, how does one generally find the solution to a transition matrix with variables in it?
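(For what it's worth, a quick numeric check, sketched with an assumed $p = 0.3$, suggests the null space here is not trivial: the first and fifth columns of $A - I$ are zero, so the standard basis vectors $e_1$ and $e_5$ both solve $(A - I)\vec{x} = \vec{0}$.)

```python
import numpy as np

p = 0.3  # assumed illustrative value; the zero columns appear for any 0 < p < 1
q = 1 - p

A = np.array([
    [1, p, 0, 0, 0],
    [0, 0, p, 0, 0],
    [0, q, 0, p, 0],
    [0, 0, q, 0, 0],
    [0, 0, 0, q, 1],
], dtype=float)

I = np.eye(5)
e1 = np.array([1, 0, 0, 0, 0], dtype=float)
e5 = np.array([0, 0, 0, 0, 1], dtype=float)

# Both absorbing states yield nontrivial solutions of (A - I)x = 0.
assert np.allclose((A - I) @ e1, 0)
assert np.allclose((A - I) @ e5, 0)
```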

Best Answer

Let $\alpha(n)$ represent the probability of reaching state 1 before state 5 starting at state $n$. We are looking for $\alpha(3)$. From the transition probabilities, we have the following:
\begin{align*} \alpha(1) &= 1, \\ \alpha(2) &= p\alpha(1) + q\alpha(3),\\ \alpha(3) &= p\alpha(2) + q \alpha(4),\\ \alpha(4) &= p\alpha(3) + q\alpha(5),\\ \alpha(5) &= 0. \end{align*}
You can choose to solve this system with general methods, but it is small enough to obtain $\alpha(3)$ explicitly. Substituting the equations for $\alpha(2)$ and $\alpha(4)$ into the one for $\alpha(3)$, we have
$$\alpha(3) = p(p + q\alpha(3)) + q(p\alpha(3)) = p^2 + 2pq\alpha(3),$$
implying
$$\alpha(3) = \frac{p^2}{1 - 2pq}.$$
The limit probability distribution $\pi$ is stationary and given by
$$\pi = \begin{bmatrix} \dfrac{p^2}{1 - 2pq} & 0 & 0 & 0 & \dfrac{q^2}{1 - 2pq} \end{bmatrix}.$$
(Note that $1 - 2pq = (p+q)^2 - 2pq = p^2 + q^2$, so the two nonzero entries indeed sum to 1.) A couple of things to note:
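The same system can be solved numerically (a sketch with an assumed $p = 0.3$), which is one answer to the question of how to handle transition matrices with variables: pick values of the parameter, solve, and check against the conjectured closed form.

```python
import numpy as np

p = 0.3  # assumed illustrative value
q = 1 - p

# Linear system for alpha(1)..alpha(5), the probability of reaching
# state 1 before state 5 starting from each state:
#   alpha(1)                          = 1
#   alpha(2) - p*alpha(1) - q*alpha(3) = 0
#   alpha(3) - p*alpha(2) - q*alpha(4) = 0
#   alpha(4) - p*alpha(3) - q*alpha(5) = 0
#   alpha(5)                          = 0
M = np.array([
    [1,  0,  0,  0,  0],
    [-p, 1, -q,  0,  0],
    [0, -p,  1, -q,  0],
    [0,  0, -p,  1, -q],
    [0,  0,  0,  0,  1],
], dtype=float)
b = np.array([1, 0, 0, 0, 0], dtype=float)

alpha = np.linalg.solve(M, b)
print(np.round(alpha, 6))

# Matches the closed form alpha(3) = p^2 / (1 - 2pq).
assert np.isclose(alpha[2], p**2 / (1 - 2 * p * q))
```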

  1. The eigenvalue/eigenvector method is a consequence of the Perron-Frobenius theorem and only works if you have an irreducible, aperiodic matrix. Here, the communicating class $\{2,3,4\}$ is transient with period 2, and $\{1\}$ and $\{5\}$ are recurrent. When you have reducible or periodic Markov processes, you will have to resort either to numeric computations (finding large powers of the transition matrix, conjecturing, then proving properties) or to difference equations like I did.
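The large-powers approach mentioned in point 1 can be sketched as follows (a numeric check with an assumed $p = 0.3$, starting from the middle state and comparing against the closed form):

```python
import numpy as np

p = 0.3  # assumed illustrative value
q = 1 - p

A = np.array([
    [1, p, 0, 0, 0],
    [0, 0, p, 0, 0],
    [0, q, 0, p, 0],
    [0, 0, q, 0, 0],
    [0, 0, 0, q, 1],
], dtype=float)

# Start in the middle state and take a large power of A.
x0 = np.array([0, 0, 1, 0, 0], dtype=float)
x_limit = np.linalg.matrix_power(A, 1000) @ x0

# Compare against pi = [p^2/(1-2pq), 0, 0, 0, q^2/(1-2pq)].
pi = np.array([p**2 / (1 - 2*p*q), 0, 0, 0, q**2 / (1 - 2*p*q)])
print(np.round(x_limit, 6))
assert np.allclose(x_limit, pi)
```

The transient eigenvalues have magnitude $\sqrt{2pq} < 1$, so the powers converge quickly and all mass ends up on the two absorbing states.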

  2. This problem is a version of the Gambler's Ruin. Other properties of interest are the values of $p$ that tilt the game in the dealer's favor, or the expected time until ruin.
