[Math] Time to absorption and fraction of time spent in a state in a CTMC

markov-chains, probability, stochastic-processes

I consider a Markov chain with a single absorbing state and $N$ transient states, and I would like to find the expected time to absorption given an initial state. I write the following equation to characterise the times to absorption from the different states.

$$\left(\begin{array}{c|c}1 & \mathbf 0_{1\times N} \\ \hline \mathbf {\hat{t}}_{N\times 1} & \mathbf P_{N\times N}\end{array}\right)\begin{pmatrix}1_{1\times1} \\ \mathbf t_{N\times 1}\end{pmatrix}=\begin{pmatrix}1_{1\times1} \\ \mathbf t_{N\times 1}\end{pmatrix}$$

  • The first row and column stand for transitions to and from the absorbing state of the CTMC.
  • Each entry of $\mathbf {\hat{t}}$ is the probability that the next transition from that state goes to state 0, multiplied by the reciprocal of the corresponding rate. For example, if state 1 has transition rate $\mu_1$ to state 0 and $\lambda_{12}$ to state 2, then its entry in $\mathbf {\hat{t}}$ is $\frac{\mu_1}{\mu_1+\lambda_{12}}\times \frac1{\mu_1}$, which equals the mean holding time $\frac1{\mu_1+\lambda_{12}}$ in state 1.
  • $\mathbf P$ is the transition probability matrix of the embedded jump chain restricted to the non-absorbing states.
  • $\mathbf t$ is the vector of expected times to reach the absorbing state from the different states. Continuing the example from the second bullet, if from state 1 it is only possible to move to states 0 and 2, the equation reads $t_1=\frac{1}{\mu_1+\lambda_{12}}+\frac{\lambda_{12}}{\mu_1+\lambda_{12}}t_2$.
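The fixed-point relation $\mathbf t=\mathbf{\hat t}+\mathbf P\mathbf t$ described in these bullets can be checked numerically. A minimal sketch in Python/NumPy, with made-up rates $\mu_1,\lambda_{12}$ and an assumed rate $\mu_2$ from state 2 directly to state 0 (the example above does not specify state 2's rates):

```python
import numpy as np

# Hypothetical rates: 1 -> 0 at mu1, 1 -> 2 at lam12, and (assumed) 2 -> 0 at mu2
mu1, lam12, mu2 = 2.0, 3.0, 5.0

# Jump probabilities among the transient states {1, 2} (embedded chain)
P = np.array([[0.0, lam12 / (mu1 + lam12)],
              [0.0, 0.0]])

# Entries of t_hat: P(next jump is to state 0) times 1/rate of that jump
t_hat = np.array([mu1 / (mu1 + lam12) * (1 / mu1),
                  (mu2 / mu2) * (1 / mu2)])

# Fixed point t = t_hat + P t  <=>  (I - P) t = t_hat
t = np.linalg.solve(np.eye(2) - P, t_hat)
print(t)  # t[0] = t_1, t[1] = t_2
```

Solving $(\mathbf I-\mathbf P)\mathbf t=\mathbf{\hat t}$ recovers values of $t_1,t_2$ that satisfy the displayed equation for $t_1$.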

I have two questions:

  1. Is the above fixed point equation correct? Can I say anything about its solution?
  2. How do I find the fraction of time spent in the different states prior to absorption, given the initial state? If the chain restricted to the $N$ transient states is irreducible and positive recurrent, is it possible to relate this fraction to the stationary probabilities and to the expected time to absorption?

Best Answer

Call $o$ the absorbing state. For every state $x\ne o$, let $\mu_x$ denote the rate of transition from $x$ to $o$, $\lambda_{xy}$ the rate of transition from $x$ to $y\ne o$ and $t_x$ the mean time before the absorption at $o$ starting from $x$.

Starting at $x$, the Markov chain leaves $x$ after an exponential time of parameter $\lambda_x=\mu_x+\sum\limits_{y\ne o}\lambda_{xy}$, and goes to a new state which is $o$ with probability $\mu_x/\lambda_x$ and $y\ne o$ with probability $\lambda_{xy}/\lambda_x$. Thus, $$ t_x=1/\lambda_x+\sum_{y\ne o}t_y\lambda_{xy}/\lambda_x. $$ This can be summarized as the fact that $(\Lambda\mathbf t)_x=1$ for every $x\ne o$, where $\mathbf t=(t_x)_{x\ne o}$, $\Lambda_{xx}=\lambda_x$ and $\Lambda_{xy}=-\lambda_{xy}$ for every $x\ne y$, from which $\mathbf t$ can be deduced.
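The system $(\Lambda\mathbf t)_x=1$ is a plain linear solve. A short sketch under the same assumed rates as in the question's example (the rate $\mu_2$ from state 2 to $o$ is made up for illustration):

```python
import numpy as np

# Hypothetical 3-state chain: absorbing state o plus transient states {1, 2}
mu = np.array([2.0, 5.0])        # mu[x]: rate x -> o
lam = np.array([[0.0, 3.0],      # lam[x][y]: rate x -> y among transient states
                [0.0, 0.0]])

# Lambda_xx = total exit rate lambda_x; Lambda_xy = -lambda_xy for x != y
lambda_x = mu + lam.sum(axis=1)
Lam = np.diag(lambda_x) - lam

# Solve (Lambda t)_x = 1 for every x != o
t = np.linalg.solve(Lam, np.ones(2))
print(t)  # mean absorption times t_1, t_2
```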

Regarding your question (2), a useful trick to compute the mean time $t_{xy}$ spent at $y$ starting from $x$ before absorption at $o$ is to add a transition from $o$ to $x$ at rate $1/\alpha$. The Markov chain then becomes positive recurrent and performs successive cycles $x\to\mathrm{states}\ne o\to o\to x$, each of mean length $\alpha+t_x$. Considering its unique stationary distribution $(\pi^\alpha_z)_z$ (which one can compute by the usual procedure), one gets $\pi^\alpha_y=t_{xy}/(\alpha+t_x)$ for every $y\ne o$, that is, $$ t_{xy}=(\alpha+t_x)\pi^\alpha_y. $$
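This identity can be sanity-checked numerically. A sketch with hypothetical rates (the same three-state example as before, starting state $x=1$, and an arbitrary choice of $\alpha$):

```python
import numpy as np

mu1, lam12, mu2 = 2.0, 3.0, 5.0   # hypothetical rates, as before
alpha = 1.0                        # mean length of the added o -> x sojourn

# Mean absorption times from the linear system (Lambda t)_x = 1
Lam = np.array([[mu1 + lam12, -lam12],
                [0.0,          mu2]])
t = np.linalg.solve(Lam, np.ones(2))   # t[0] = t_1, t[1] = t_2

# Generator of the modified chain on {o, 1, 2}: added o -> 1 at rate 1/alpha
Q = np.array([[-1 / alpha, 1 / alpha,      0.0 ],
              [ mu1,     -(mu1 + lam12),  lam12],
              [ mu2,       0.0,           -mu2 ]])

# Unique stationary distribution: pi Q = 0 with pi summing to 1
A = np.vstack([Q.T, np.ones(3)])
pi = np.linalg.lstsq(A, np.array([0.0, 0.0, 0.0, 1.0]), rcond=None)[0]

# Mean times spent at y = 1, 2 before absorption, starting from x = 1
t_1y = (alpha + t[0]) * pi[1:]
print(t_1y)
```

Here `t_1y[0]` matches the mean holding time $1/(\mu_1+\lambda_{12})$ in state 1 (which cannot be revisited), and `t_1y[1]` matches $\frac{\lambda_{12}}{\mu_1+\lambda_{12}}\cdot\frac1{\mu_2}$, the probability of visiting 2 times the mean sojourn there.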
