Finding steady state equations of infinite-dimensional Markov Chain

Tags: markov-chains, markov-process, probability

Suppose we have the transition matrix

$$P=\begin{bmatrix} 0&1&0&0&0&\cdots\\\\ p&0&1-p&0&0&\cdots\\\\
0&p&0&1-p&0&\cdots\\\\ 0&0&p&0&1-p&\cdots\\\\ \vdots&\vdots&\vdots&\vdots&\vdots&\ddots\\\\ \end{bmatrix}$$

with $p\in[1/2,1]$. Show that the solution to the steady-state equations is given by

$$\pi_0=\frac{r-1}{2r}$$

$$\pi_j=\frac{(r+1)(r-1)}{2r^{j+1}}\quad j\geq 1$$

where $r = p/(1-p)$.

My attempt: We have that

$$P-I=\begin{bmatrix} -1&1&0&0&0&\cdots\\\\ p&-1&1-p&0&0&\cdots\\\\
0&p&-1&1-p&0&\cdots\\\\ 0&0&p&-1&1-p&\cdots\\\\ \vdots&\vdots&\vdots&\vdots&\vdots&\ddots\\\\ \end{bmatrix}$$

and so we must solve the equations

$$\pi_0=p\pi_1$$
$$\pi_1=\pi_0+p\pi_2$$
$$\pi_2=(1-p)\pi_1+p\pi_3$$
$$\pi_3=(1-p)\pi_2+p\pi_4$$
$$\vdots$$

and $\pi_0+\pi_1+\pi_2+\cdots=1$.

The structure of the transition matrix is simple, but it's not clear to me how to proceed from these equations to the desired expressions.

Best Answer

A steady-state solution satisfies $P^T \pi = \pi$, where $\pi$ is a column vector of probabilities. Reading off the matrix system gives

$$\begin{cases} \pi_0 = p \pi_1 \\ \pi_1 = \pi_0 + p \pi_2 \\ \pi_i = (1 - p) \pi_{i - 1} + p \pi_{i + 1} & i \ge 2 \\ \pi_0 + \pi_1 + \cdots = 1 \end{cases}$$

Of course, we may rewrite the system to depend strictly on indices less than the one of interest:

$$\begin{cases} \pi_0 = p \pi_1 \\ \pi_1 = \pi_0 + p \pi_2 \\ \pi_i = \frac{1}{p} \pi_{i - 1} + \left( 1 - \frac{1}{p} \right) \pi_{i - 2} & i \ge 3 \\ \pi_0 + \pi_1 + \cdots = 1 \end{cases}$$
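As a quick numerical sanity check (not part of the derivation itself), a sequence generated from the rewritten recurrence should still satisfy the original balance equation; $p = 0.6$ and the overall scale $\pi_0 = 1$ are arbitrary test choices, since the equations are linear:

```python
# Sanity check: a sequence built from the rewritten recurrence
#   pi_i = pi_{i-1}/p + (1 - 1/p) * pi_{i-2},   i >= 3
# should still satisfy the original balance equation
#   pi_i = (1-p) * pi_{i-1} + p * pi_{i+1},     i >= 2.
# p = 0.6 and the scale pi_0 = 1 are arbitrary test values.
p = 0.6
pi = [1.0]                        # pi_0 (arbitrary scale; normalized later)
pi.append(pi[0] / p)              # from pi_0 = p * pi_1
pi.append((pi[1] - pi[0]) / p)    # from pi_1 = pi_0 + p * pi_2
for i in range(3, 40):
    pi.append(pi[i - 1] / p + (1 - 1 / p) * pi[i - 2])

for i in range(2, 39):
    assert abs(pi[i] - ((1 - p) * pi[i - 1] + p * pi[i + 1])) < 1e-9
print("rewritten recurrence matches the original balance equations")
```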

This is essentially a linear recurrence, where

$$\begin{pmatrix} \pi_{i + 1} \\ \pi_i \end{pmatrix} = \underbrace{\begin{pmatrix} \frac{1}{p} & 1 - \frac{1}{p} \\ 1 & 0 \end{pmatrix}}_{A} \begin{pmatrix} \pi_i \\ \pi_{i - 1} \end{pmatrix}$$

Let's diagonalize the matrix $A$:

$$|A - \lambda I| = \lambda^2 - \frac{1}{p} \lambda - 1 + \frac{1}{p} = 0$$

We find that the two eigenvalues are $\lambda_1 = 1$ and $\lambda_2 = \frac{1}{p} - 1$. $\left\{ \begin{pmatrix} 1 \\ 1 \end{pmatrix} \right\}$ is a basis for the eigenspace of $\lambda_1$, while $\left\{ \begin{pmatrix} \frac{1}{p} - 1 \\ 1 \end{pmatrix} \right\}$ is a basis for the eigenspace of $\lambda_2$. Solving the first two equations in the above system gives the "initial condition" for the recurrence:

$$\begin{pmatrix} \pi_2 \\ \pi_1 \end{pmatrix} = \frac{\pi_0}{p} \begin{pmatrix} \frac{1}{p} - 1 \\ 1 \end{pmatrix}$$

which we find lies exactly in the eigenspace of $\lambda_2$. Therefore, we may say that in general,

$$\begin{pmatrix} \pi_{i + 1} \\ \pi_i \end{pmatrix} = \frac{\pi_0}{p} \left( \frac{1}{p} - 1 \right)^{i - 1} \begin{pmatrix} \frac{1}{p} - 1 \\ 1 \end{pmatrix}$$

or for $i \ge 1$,

$$\pi_i = \frac{\pi_0}{p} \left( \frac{1}{p} - 1 \right)^{i - 1}$$
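Before normalizing, it is easy to verify numerically that this closed form satisfies all of the balance equations; $p = 0.8$ and $\pi_0 = 1$ are arbitrary test values (the equations are linear, so the scale does not matter):

```python
# Verify that pi_i = (pi_0 / p) * (1/p - 1)**(i - 1) for i >= 1 satisfies
#   pi_0 = p * pi_1,
#   pi_1 = pi_0 + p * pi_2,
#   pi_i = (1-p) * pi_{i-1} + p * pi_{i+1}  for i >= 2.
# p = 0.8 and pi_0 = 1 are arbitrary test values.
p = 0.8
pi0 = 1.0

def pi(i):
    return pi0 if i == 0 else (pi0 / p) * (1 / p - 1) ** (i - 1)

assert abs(pi(0) - p * pi(1)) < 1e-12
assert abs(pi(1) - (pi(0) + p * pi(2))) < 1e-12
for i in range(2, 50):
    assert abs(pi(i) - ((1 - p) * pi(i - 1) + p * pi(i + 1))) < 1e-12
print("closed form satisfies the balance equations")
```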

We now find that the normalization condition is

$$\pi_0 + \pi_1 + \cdots = \pi_0 \left( 1 + \frac{1}{p} \sum_{j = 0}^\infty \left( \frac{1}{p} - 1 \right)^j \right) = 1$$

Evaluating the geometric series, which converges when $\frac{1}{p} - 1 < 1$, i.e. $p > \frac{1}{2}$ (at $p = \frac{1}{2}$ the series diverges and no stationary distribution exists), yields

$$\sum_{j = 0}^\infty \left( \frac{1}{p} - 1 \right)^j = \frac{1}{1 - \left( \frac{1}{p} - 1 \right)} = \frac{p}{2 p - 1}$$

so that

$$1 + \frac{1}{p} \sum_{j = 0}^\infty \left( \frac{1}{p} - 1 \right)^j = \frac{2 p}{2 p - 1}$$

and thus

$$\pi_0 = 1 - \frac{1}{2 p}$$

which is equivalent to the given expression, since $\frac{r-1}{2r} = \frac{p - (1 - p)}{2p} = \frac{2p - 1}{2p} = 1 - \frac{1}{2p}$. Knowing $\pi_0$, all the other probabilities can be found to be

$$\pi_i = \left( \frac{1}{p} - \frac{1}{2 p^2} \right) \left( \frac{1}{p} - 1 \right)^{i - 1}$$

which after some manipulation can be shown to be equivalent to the given expression.
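As a final check, a short script confirms that these expressions agree with the ones stated in terms of $r = p/(1-p)$, and that the resulting distribution sums to 1; $p = 0.75$ (so $r = 3$) is an arbitrary test value:

```python
# Cross-check the derived expressions against the ones from the problem
# statement, written in terms of r = p / (1 - p).
# p = 0.75 (so r = 3) is an arbitrary test value.
p = 0.75
r = p / (1 - p)

pi0_answer = 1 - 1 / (2 * p)            # derived form
pi0_given = (r - 1) / (2 * r)           # form from the problem statement
assert abs(pi0_answer - pi0_given) < 1e-12

def pi_answer(i):
    return (1 / p - 1 / (2 * p ** 2)) * (1 / p - 1) ** (i - 1)

def pi_given(j):
    return (r + 1) * (r - 1) / (2 * r ** (j + 1))

for i in range(1, 30):
    assert abs(pi_answer(i) - pi_given(i)) < 1e-12

# The stationary distribution should sum to 1 (truncate the geometric tail).
total = pi0_answer + sum(pi_answer(i) for i in range(1, 200))
assert abs(total - 1) < 1e-12
print("both forms agree and the distribution sums to 1")
```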