[Math] Amount of time spent in each state by a Markov chain with matrix of transition times

markov-chains, markov-process, probability, probability-theory

When we have a simple Markov matrix (rows sum to 1) $P$, where $P_{i,j}$ gives the probability of making a transition from state $i$ to state $j$, we can get the steady-state probabilities of being in each of the states ($\pi$) by solving the following system of equations:

$$\pi P = \pi, \qquad \sum_i \pi_i = 1.$$
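For concreteness, here is a minimal numerical sketch (Python with NumPy, on a hypothetical 3-state $P$) that computes $\pi$ as the normalized left eigenvector of $P$ for eigenvalue 1:

```python
import numpy as np

# Hypothetical 3-state row-stochastic transition matrix (each row sums to 1).
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2],
              [0.5, 0.3, 0.2]])

# Solve pi P = pi with sum(pi) = 1: the left eigenvectors of P are the
# right eigenvectors of P^T, so take the one for the eigenvalue closest to 1.
eigvals, eigvecs = np.linalg.eig(P.T)
k = np.argmin(np.abs(eigvals - 1.0))
pi = np.real(eigvecs[:, k])
pi = pi / pi.sum()                      # normalize to a probability vector

print("stationary distribution:", pi)
print("check pi P == pi:", np.allclose(pi @ P, pi))
```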

Now, let's say we also have a matrix of transition times $T$, where $T_{i,j}$ gives the time it takes to make the transition from state $i$ to state $j$. When all the entries of $T$ are equal, it is clear that the long-run fraction of time spent in each state is still given by $\pi$. If the entries differ, however, $T$ will also have an impact on those fractions. Any ideas on how to find the new distribution?

Best Answer

Let $X:=\{X_n:n=0,1,\ldots\}$ be the original Markov chain and $Y:=\{Y_n:n=0,1,\ldots\}$ the process obtained by adding the transition times. $Y$ is not a Markov chain, because at time $n$ the $(n+1)^{\mathrm{th}}$ transition time depends on the $(n+1)^{\mathrm{th}}$ state. So we cannot take the approach of finding a stationary distribution for a transition matrix.

Let $E$ be the state space of $X$; then $Y$ takes values in $E\times\mathbb N$. That is, for each $n$ we have $Y_n=(X_n, J_n)$, where $J_n = T_{X_n,X_{n+1}}$ is the $n^{\mathrm{th}}$ jump time. For example, if $X_0 = i_0$, $X_1 = i_1$, and $T_{i_0,i_1} = 2$, then $Y_0 = (i_0,2)$ and $$ \mathbb P(Y_1 = (i_1, k)\mid X_1 = i_1) = \mathbb P(J_1 = k\mid X_1 = i_1) = \sum_{j\in E} P_{i_1,j}\,\mathsf 1_k(T_{i_1,j}), $$ since $J_1 = T_{i_1,X_2}$ is determined by the (random) next state $X_2$. $Y$ is a Markov renewal process, a generalization of Markov chains (and of renewal processes!). For each $n$, let $$\nu_{i,n}=\frac{\sum_{m=0}^n J_m\,\mathsf 1_i(X_m)}{\sum_{m=0}^n J_m}$$ be the fraction of time spent in state $i$ during the first $n$ jumps. We want to find $\nu_i :=\lim_{n\to\infty} \nu_{i,n}$.
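For intuition, here is a minimal simulation sketch (Python with NumPy, on a hypothetical 3-state $P$ and a hypothetical transition-time matrix $T$) that estimates $\nu_{i,n}$ directly by running the chain and attributing each jump time $J_m = T_{X_m,X_{m+1}}$ to the state $X_m$ being left:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 3-state example: row-stochastic P and a matrix T of
# deterministic transition times, T[i, j] = time of the jump i -> j.
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2],
              [0.5, 0.3, 0.2]])
T = np.array([[1.0, 2.0, 5.0],
              [3.0, 1.0, 2.0],
              [4.0, 1.0, 1.0]])

n_jumps = 200_000
state = 0
time_in_state = np.zeros(P.shape[0])

for _ in range(n_jumps):
    nxt = rng.choice(P.shape[0], p=P[state])   # X_{m+1} drawn from row P[X_m]
    time_in_state[state] += T[state, nxt]      # J_m = T[X_m, X_{m+1}] is spent in X_m
    state = nxt

nu_hat = time_in_state / time_in_state.sum()   # empirical nu_{i,n}
print("estimated time fractions:", nu_hat)
```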

The limit can be identified by a renewal-reward argument. Let $\mu_i := \sum_{j\in E} P_{i,j}\,T_{i,j}$ be the mean time spent in state $i$ per visit. Assuming $X$ is irreducible and positive recurrent, the ergodic theorem applied to the pair chain $(X_m, X_{m+1})$ (whose stationary distribution is $\pi_k P_{k,l}$) gives, almost surely, $$ \frac1{n+1}\sum_{m=0}^n J_m\,\mathsf 1_i(X_m)\;\longrightarrow\;\pi_i\,\mu_i \qquad\text{and}\qquad \frac1{n+1}\sum_{m=0}^n J_m\;\longrightarrow\;\sum_{k\in E}\pi_k\,\mu_k, $$ so that $$ \nu_i = \frac{\pi_i\,\mu_i}{\sum_{k\in E}\pi_k\,\mu_k}. $$ That is, the limiting fraction of time $Y$ spends in state $i$ is the limiting fraction of visits $X$ makes to state $i$, weighted by the mean transition time out of state $i$ and normalized so that the fractions sum to one.
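As a quick numerical sanity check (same hypothetical $P$ and $T$ as in the simulation sketch above), the closed form can be computed directly; it should agree with the simulated fractions up to Monte Carlo error:

```python
import numpy as np

# Same hypothetical P and T as in the simulation sketch above.
P = np.array([[0.1, 0.6, 0.3],
              [0.4, 0.4, 0.2],
              [0.5, 0.3, 0.2]])
T = np.array([[1.0, 2.0, 5.0],
              [3.0, 1.0, 2.0],
              [4.0, 1.0, 1.0]])

# Stationary distribution of the jump chain X: pi P = pi, sum(pi) = 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1.0))])
pi = pi / pi.sum()

mu = (P * T).sum(axis=1)          # mean holding time per visit, mu_i = sum_j P_ij T_ij
nu = pi * mu / (pi * mu).sum()    # limiting fraction of time in each state

print("pi :", pi)
print("nu :", nu)
```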