Help using eigenvectors to solve Markov chain

eigenvalues-eigenvectors, markov-chains, steady-state

I took Linear Algebra last semester and when learning about Markov Chains in my statistics class, I wanted to use eigenvectors/eigenvalues to find the steady-state vector rather than just using systems of equations like our professor taught us. I seem to be having a bit of trouble, however. Here's an example:

Let's say we have a transition matrix P:

$$
P =
\begin{bmatrix}
0 & 0.5 & 0.5 \\
0.5 & 0 & 0.5 \\
1 & 0 & 0 \\
\end{bmatrix}
$$

We know that one of the eigenvalues is $1$, since $P$ is a stochastic matrix. The remaining eigenvalues are the other roots of $\det(P - \lambda I) = 0$; here the characteristic polynomial factors as $-(\lambda - 1)(\lambda + 0.5)^2$, so

$$
\lambda_1 = 1
$$

$$
\lambda_2 = \lambda_3 = -0.5 \quad \text{(a double root)}
$$

Thus making the eigenvectors:

$$
x_1 =
\begin{bmatrix}
1 \\
1 \\
1 \\
\end{bmatrix}
$$

$$
x_2 =
\begin{bmatrix}
-0.5 \\
-0.5 \\
1 \\
\end{bmatrix}
$$
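(As a quick numerical sanity check, not part of my original work: NumPy's `np.linalg.eig` recovers these same eigenvalues. Note that it returns *right* eigenvectors, i.e. vectors $v$ with $Pv = \lambda v$.)

```python
import numpy as np

# Transition matrix from the question (each row sums to 1)
P = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [1.0, 0.0, 0.0]])

# np.linalg.eig returns right eigenvectors: P @ v == lam * v
eigvals, eigvecs = np.linalg.eig(P)

# Eigenvalues should be 1 and -0.5 (the latter repeated)
print(np.sort(eigvals.real))
```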

This is where I always get stuck. In another example, I simply normalized $x_1$ to get
$$
x_{ss} =
\begin{bmatrix}
\frac{1}{3} \\
\frac{1}{3} \\
\frac{1}{3} \\
\end{bmatrix}
$$

for the long-term forecast / steady-state vector. For this example, however, a calculator revealed that the steady-state vector was actually
$$
x_{ss} =
\begin{bmatrix}
\frac{4}{9} \\
\frac{2}{9} \\
\frac{3}{9} \\
\end{bmatrix}
$$

How do I get there from the eigenvectors? Any help is much appreciated.

Best Answer

You’ve found right eigenvectors of $P$, but what you really need is a left eigenvector: you’re trying to solve $\pi P=\pi$, not $P\pi=\pi$. Use whatever technique you used to compute the eigenvectors of $P$ on $P^T$ instead, then rescale the eigenvector for $\lambda = 1$ so that its entries sum to $1$.
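For instance, here is a NumPy sketch of that recipe (the library choice is mine, not the asker's): take the right eigenvector of $P^T$ for eigenvalue $1$ and normalize it into a probability vector.

```python
import numpy as np

P = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [1.0, 0.0, 0.0]])

# Left eigenvectors of P are right eigenvectors of P.T
eigvals, eigvecs = np.linalg.eig(P.T)

# Select the column corresponding to the eigenvalue closest to 1
k = np.argmin(np.abs(eigvals - 1.0))
pi = eigvecs[:, k].real

# Rescale so the entries sum to 1 (this also fixes the sign)
pi = pi / pi.sum()
print(pi)  # ≈ [4/9, 2/9, 3/9]
```

Since $\lambda = 1$ is a simple eigenvalue here, this eigenvector is unique up to scale, and normalizing by the sum yields the steady-state distribution directly.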

Incidentally, since the rows of a row-stochastic matrix all sum to $1$, the vector consisting of all $1$s will always be a right eigenvector.