Steady state of a Markov process

linear-algebra, markov-process, matrices, probability

I would like to have my solution to this exercise checked.

Find, up to a multiplicative constant, the steady state of the following Markov process
$$\begin{bmatrix}
w_{k+1}\\
s_{k+1}\\
d_{k+1}
\end{bmatrix} = \begin{bmatrix}
1 & 1/4 &0\\
0 & 3/4 &1/2\\
0 & 0 & 1/2
\end{bmatrix} \begin{bmatrix}
w_{k}\\
s_{k}\\
d_{k}
\end{bmatrix}.
$$

Here is my solution: by direct calculation I found that the matrix has $3$ distinct eigenvalues $\lambda_1 = 1$, $\lambda_2 = 3/4$ and $\lambda_3 = 1/2$, with corresponding eigenvectors $(1, 0, 0)$, $(1, -1, 0)$ and $(1, -2, 1)$ respectively.
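For instance, the least obvious of these can be verified directly:
$$
\begin{bmatrix}
1 & 1/4 & 0\\
0 & 3/4 & 1/2\\
0 & 0 & 1/2
\end{bmatrix}\begin{bmatrix}
1\\
-2\\
1
\end{bmatrix} = \begin{bmatrix}
1 - 1/2\\
-3/2 + 1/2\\
1/2
\end{bmatrix} = \frac{1}{2}\begin{bmatrix}
1\\
-2\\
1
\end{bmatrix}.
$$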
Setting $u_k:= \begin{bmatrix}
w_k\\
s_k\\
d_k
\end{bmatrix}$
, the previous equality becomes $u_{k+1} = A u_k$, which, iterated, gives $u_k = A^k u_0$. But the matrix $A$ is diagonalizable, so $A^k = S \Lambda^k S^{-1}$, where the columns of $S$ are the eigenvectors and $\Lambda = \operatorname{diag}(1, 3/4, 1/2)$, i.e.
$$
A^k = \begin{bmatrix}
1 & 1 & 1\\
0 & -1 & -2\\
0 & 0 & 1
\end{bmatrix}\begin{bmatrix}
1 & 0 & 0\\
0 & (3/4)^k & 0\\
0 & 0 & (1/2)^k
\end{bmatrix}\begin{bmatrix}
1 & 1 & 1\\
0 & -1 & -2\\
0 & 0 & 1
\end{bmatrix}
$$
(for this choice of eigenvectors one can check that $S^{-1} = S$).
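As a quick numerical sanity check (a minimal sketch using numpy; the variable names are mine), one can verify the decomposition and watch the powers of $A$ converge:

```python
import numpy as np

A = np.array([[1, 1/4, 0],
              [0, 3/4, 1/2],
              [0, 0, 1/2]])
S = np.array([[1., 1., 1.],
              [0., -1., -2.],
              [0., 0., 1.]])
Lam = np.diag([1., 3/4, 1/2])

# S Lambda S^{-1} reproduces A (for this choice of eigenvectors, S^{-1} = S)
assert np.allclose(S @ Lam @ np.linalg.inv(S), A)

# For large k only the lambda = 1 term survives:
print(np.linalg.matrix_power(A, 100).round(6))
# ~ [[1, 1, 1], [0, 0, 0], [0, 0, 0]], so A^k u0 -> (w0 + s0 + d0) (1, 0, 0)^T
```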

Thus, expanding $u_0$ in the eigenvector basis,
$$u_k = S\Lambda^k S^{-1} u_0 = (w_0 + s_0 + d_0)\begin{bmatrix}
1\\
0\\
0
\end{bmatrix} + (3/4)^k (-s_0 - 2d_0)\begin{bmatrix}
1\\
-1\\
0
\end{bmatrix} + (1/2)^k\, d_0 \begin{bmatrix}
1\\
-2\\
1
\end{bmatrix}.
$$
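As a consistency check, setting $k = 0$ the right-hand side collapses back to
$$
(w_0 + s_0 + d_0)\begin{bmatrix}
1\\
0\\
0
\end{bmatrix} + (-s_0 - 2d_0)\begin{bmatrix}
1\\
-1\\
0
\end{bmatrix} + d_0\begin{bmatrix}
1\\
-2\\
1
\end{bmatrix} = \begin{bmatrix}
w_0\\
s_0\\
d_0
\end{bmatrix} = u_0,
$$
as it should.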

The steady state is obtained by evaluating the limit of $u_k$ as $k\to +\infty$, which in this case is
$$\lim_{k\to +\infty} u_k = w_0.$$

Could anyone please tell me whether this is all correct, or whether I have missed something?

Thank you in advance!

Best Answer

The terminology is a bit unusual, since there is not really a Markov chain here. If you wish to get the limit of $(u_k)$, then what you did is almost correct, except that you get $$ \lim_{k \to + \infty} u_k = (w_0 + s_0 + d_0)\, [1, 0, 0]^{\mathsf T}. $$

However, if you are really in the framework of Markov chains, you do not need all this. Denote by $M$ a stochastic matrix, and by $Y_k$ the row vector describing the probabilities of being in each state after $k$ steps. If $M$ is irreducible and aperiodic, then $Y_k = Y_0 M^k$ converges to the stationary distribution, i.e. the unique left-eigenvector of $M$ corresponding to the eigenvalue $1$, normalized to sum to $1$.
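For illustration, here is a minimal sketch of that recipe on a small row-stochastic matrix of my own choosing (a hypothetical example, not the matrix from the question):

```python
import numpy as np

# A hypothetical irreducible, aperiodic 3-state chain (rows sum to 1)
M = np.array([[0.5, 0.5, 0.0],
              [0.25, 0.5, 0.25],
              [0.0, 0.5, 0.5]])

# Left eigenvectors of M are right eigenvectors of M.T
w, V = np.linalg.eig(M.T)
pi = np.real(V[:, np.argmax(np.isclose(w, 1))])
pi /= pi.sum()            # normalize to a probability vector
print(pi)                 # -> [0.25, 0.5, 0.25], and indeed pi @ M == pi
```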

In your context, everything is instead written for column-stochastic matrices (each column sums to $1$), and thus the limit of $(u_k)$ is the unique right-eigenvector of the matrix corresponding to the eigenvalue $1$, which is indeed $(1,0,0)$. This is consistent with what is written above, since $w_0 + s_0 + d_0 = 1$ for probability vectors.

All calculations aside, this is quite clear: from state 3, you eventually go to state 2; and from state 2, you eventually go to state 1, which is absorbing. Therefore, regardless of where you start, you always end up in state 1 and stay there.
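This intuition is easy to test by simulation (a minimal sketch; in the column-stochastic convention, column $j$ of the matrix holds the transition probabilities out of state $j$):

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([[1, 1/4, 0],
              [0, 3/4, 1/2],
              [0, 0, 1/2]])   # A[i, j] = P(state j -> state i)

def final_state(start, steps=100):
    state = start
    for _ in range(steps):
        state = rng.choice(3, p=A[:, state])
    return state

# Wherever the walk starts, it is absorbed in state 0 (the w state)
print([final_state(s) for s in (0, 1, 2)])   # -> [0, 0, 0]
```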
