[Math] How to “look back” in a Markov chain

markov chains

Imagine I have a discrete-time, discrete-space Markov chain with some transition matrix $B$ and stationary distribution $\pi$. If I know what state I'm in at some time $t$, how can I calculate $p ( X_{t-1} \mid X_t)$, the probability that I came from each possible state?

To make this concrete, suppose:

$$
B = \begin{bmatrix}
0.6 & 0.2 & 0.1 & 0.1 \\
0.2 & 0.5 & 0.2 & 0.1 \\
0.1 & 0.1 & 0.5 & 0.3 \\
0.1 & 0.2 & 0.3 & 0.4
\end{bmatrix}
$$
$$
\pi = \begin{bmatrix}
0.2491 & 0.2454 & 0.2821 & 0.2234
\end{bmatrix}
$$
$$
X_t = \begin{bmatrix}
0 & 1 & 0 & 0
\end{bmatrix}
$$

How do I calculate $p(X_{t-1}\mid X_t)$?
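As a quick sanity check on the numbers above (a NumPy sketch, using nothing beyond the $B$ and $\pi$ given), $\pi$ is indeed stationary for $B$, i.e. $\pi B = \pi$ up to the four-decimal rounding of $\pi$:

```python
import numpy as np

# Transition matrix and claimed stationary distribution from the question.
B = np.array([
    [0.6, 0.2, 0.1, 0.1],
    [0.2, 0.5, 0.2, 0.1],
    [0.1, 0.1, 0.5, 0.3],
    [0.1, 0.2, 0.3, 0.4],
])
pi = np.array([0.2491, 0.2454, 0.2821, 0.2234])

# A stationary distribution satisfies pi @ B == pi (row-vector convention).
print(pi @ B)  # ≈ [0.2491, 0.2454, 0.2821, 0.2234]
```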

Best Answer

What you are being asked to find are the transition probabilities of the reversed Markov chain.

Let us denote your Markov chain by $(X_t,\ t\ge0)$. Its transition probability matrix is $B=(b_{ij})$, i.e. $b_{ij}=P(X_t=j\mid X_{t-1}=i)$ for every $t$. You are also given the steady-state (stationary) distribution $\vec \pi=(\pi_1,\pi_2,\dots)$ of the chain $(X_t,\ t\ge0)$.

The Markov property is usually stated as "the future is independent of the past, given the present state". It can equally be restated as "the past is independent of the future, given the present state". But this means that the process run in reverse time is still a Markov chain. So for your Markov chain $(X_t,\ t\ge0)$ there exists a reversed Markov chain, say $(X^*_t,\ t\ge0)$, which goes back in time. This argument can be made fully rigorous, but this is not the place to do it; see the chapters on reversible Markov chains in the classic books on queueing theory and Markov chains, for example *Stochastic Modeling and the Theory of Queues* by Ronald W. Wolff.

Denote the transition probabilities of $(X^*_t,\ t\ge0)$ by $b^*_{ij}$. So what you are asking is how to compute $b^*_{ij}$.

The theory says that $b^*_{ij}$ can be computed exactly in terms of $b_{ij}$ and $\pi$: by Bayes' rule, in stationarity, $$ b^*_{ij}=P(X_{t-1}=j\mid X_t=i)=\frac{P(X_{t-1}=j)\,P(X_t=i\mid X_{t-1}=j)}{P(X_t=i)}={\pi_j \over \pi_i}b_{ji}. $$
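A minimal NumPy sketch of this computation, using the $B$ and $\pi$ from the question (the variable names are my own):

```python
import numpy as np

# Transition matrix and stationary distribution from the question.
B = np.array([
    [0.6, 0.2, 0.1, 0.1],
    [0.2, 0.5, 0.2, 0.1],
    [0.1, 0.1, 0.5, 0.3],
    [0.1, 0.2, 0.3, 0.4],
])
pi = np.array([0.2491, 0.2454, 0.2821, 0.2234])

# Reversed-chain transitions: b*_ij = (pi_j / pi_i) * b_ji.
# In matrix form: B_star = diag(1/pi) @ B.T @ diag(pi).
B_star = (B.T * pi) / pi[:, None]

# X_t = [0, 1, 0, 0] means the chain is in state 2 (index 1),
# so p(X_{t-1} | X_t) is row 1 of B_star.
p_prev = B_star[1]
print(p_prev)  # ≈ [0.203, 0.500, 0.115, 0.182]
```

Each row of `B_star` sums to 1 (up to the four-decimal rounding of $\pi$), as a valid transition matrix must.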
