Expected Number of Steps and Probability in a Markov Chain


Can anyone give an example of a Markov chain and show how to calculate the expected number of steps to reach a particular state, or the probability of reaching a particular state after $T$ transitions?

I ask because they seem like powerful concepts to know, but I am having a hard time finding good information online that is easy to understand.

Best Answer

The simplest examples come from stochastic matrices. Consider a finite set of possible states. Say that the probability of transitioning from state $i$ to state $j$ is $p_{ij}$. For fixed $i$, these probabilities need to add to $1$, so $$\sum_j p_{ij} = 1$$

for all $i$. So the matrix $P$ whose entries are $p_{ij}$ needs to be right stochastic, which means that $P$ has non-negative entries and $P \mathbf{1} = \mathbf{1}$, where $\mathbf{1}$ is the vector all of whose entries are $1$.
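As a concrete sanity check, here is a minimal Python sketch that verifies the right-stochastic conditions; the $3 \times 3$ matrix is made up purely for illustration:

```python
import numpy as np

# A made-up 3-state transition matrix, purely for illustration.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.8, 0.1],
    [0.2, 0.2, 0.6],
])

ones = np.ones(3)
assert (P >= 0).all()               # non-negative entries
assert np.allclose(P @ ones, ones)  # P 1 = 1, i.e. each row sums to 1
```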

By considering all the possible ways to transition between two states, you can prove by induction that the probability of transitioning from state $i$ to state $j$ after $n$ transitions is given by $(P^n)_{ij}$. So the problem of computing these probabilities reduces to the problem of computing powers of a matrix. If $P$ is diagonalizable, then this problem in turn reduces to the problem of computing its eigenvalues and eigenvectors.
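To make this concrete in code, here is a minimal NumPy sketch (reusing the illustrative matrix above) that computes an $n$-step transition probability as an entry of $P^n$:

```python
import numpy as np

# Same illustrative 3-state matrix as above.
P = np.array([
    [0.5, 0.3, 0.2],
    [0.1, 0.8, 0.1],
    [0.2, 0.2, 0.6],
])

n, i, j = 10, 0, 2
Pn = np.linalg.matrix_power(P, n)  # P^n
print(f"P(state {i} -> state {j} in {n} transitions) = {Pn[i, j]:.6f}")
```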

Computing the expected time to get from state $i$ to state $j$ is a little complicated to explain in general. It will be easier to explain in examples.

Example. Let $0 \le p \le 1$ and let $P$ be the matrix $$\left[ \begin{array}{cc} 1-p & p \\ p & 1-p \end{array} \right].$$

Thus there are two states. The probability of changing states is $p$ and the probability of not changing states is $1-p$. $P$ has two eigenvectors: $$P \left[ \begin{array}{c} 1 \\ 1 \end{array} \right] = \left[ \begin{array}{c} 1 \\ 1 \end{array} \right], \qquad P \left[ \begin{array}{c} 1 \\ -1 \end{array} \right] = (1 - 2p) \left[ \begin{array}{c} 1 \\ -1 \end{array} \right].$$

It follows that $$P^n \left[ \begin{array}{c} 1 \\ 1 \end{array} \right] = \left[ \begin{array}{c} 1 \\ 1 \end{array} \right], \qquad P^n \left[ \begin{array}{c} 1 \\ -1 \end{array} \right] = (1 - 2p)^n \left[ \begin{array}{c} 1 \\ -1 \end{array} \right]$$

and transforming back to the original basis we find that $$P^n = \left[ \begin{array}{cc} \frac{1 + (1 - 2p)^n}{2} & \frac{1 - (1 - 2p)^n}{2} \\ \frac{1 - (1 - 2p)^n}{2} & \frac{1 + (1 - 2p)^n}{2} \end{array} \right].$$

Thus the probability of changing states after $n$ transitions is $\frac{1 - (1 - 2p)^n}{2}$ and the probability of remaining in the same state after $n$ transitions is $\frac{1 + (1 - 2p)^n}{2}$.
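If you want to check this closed form numerically, here is a small sketch comparing it against direct matrix powers; the value $p = 0.3$ is an arbitrary choice for illustration:

```python
import numpy as np

p = 0.3  # arbitrary choice for illustration
P = np.array([[1 - p, p],
              [p, 1 - p]])

for n in range(1, 8):
    Pn = np.linalg.matrix_power(P, n)
    same = (1 + (1 - 2 * p) ** n) / 2    # closed form: remain in the same state
    change = (1 - (1 - 2 * p) ** n) / 2  # closed form: change states
    assert np.allclose(Pn, [[same, change], [change, same]])
```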

The expected number of transitions needed to change states is given by $$\sum_{n \ge 1} n q_n,$$

where $q_n$ is the probability that the first change of state occurs on the $n$-th transition. This requires that we do not change states for the first $n-1$ transitions and then change states, so $$q_n = p (1 - p)^{n-1}.$$

Thus we want to compute the sum $$\sum_{n \ge 1} np (1 - p)^{n-1}.$$

Verify however you like, for instance by differentiating the geometric series $\frac{1}{1-z} = \sum_{n \ge 0} z^n$, the identity (valid for $|z| < 1$) $$\frac{1}{(1 - z)^2} = 1 + 2z + 3z^2 + \dots = \sum_{n \ge 1} nz^{n-1}.$$

Setting $z = 1 - p$ shows that the expected value is $$\frac{p}{(1 - (1 - p))^2} = \frac{1}{p}.$$
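As a quick numerical check, one can truncate the sum at a large $N$ (again with the arbitrary choice $p = 0.3$):

```python
p = 0.3     # arbitrary choice for illustration
N = 10_000  # truncation point; the tail is negligible here
expected = sum(n * p * (1 - p) ** (n - 1) for n in range(1, N + 1))
print(expected, 1 / p)  # both approximately 3.3333
```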

An alternative approach is to use linearity of expectation. To compute the expected time $\mathbb{E}$ to change states, we observe that with probability $p$ we change states (so we can stop), and with probability $1-p$ we don't (so we have to start all over, having used one extra transition). This gives $$\mathbb{E} = p + (1 - p) (\mathbb{E} + 1).$$

This gives $\mathbb{E} = \frac{1}{p}$ as above.
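One can also confirm $\mathbb{E} = 1/p$ by direct simulation; here is a minimal sketch, with the trial count and $p$ chosen arbitrarily:

```python
import random

p = 0.3           # arbitrary choice for illustration
trials = 200_000  # arbitrary trial count

total = 0
for _ in range(trials):
    steps = 1
    while random.random() >= p:  # with probability 1 - p we fail to change states
        steps += 1
    total += steps

print(total / trials, 1 / p)  # both approximately 3.3333
```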
