[Math] Is the hitting time for a continuous Markov chain and a discrete Markov chain the same?

markov-chains, markov-process

I am not that familiar with Markov chains, so forgive me if I say something out of line or use the wrong terminology!

I know that the expected hitting time of a set $S$ for a discrete Markov chain can be obtained by solving the following:

\begin{equation}
\textbf{E}_{o}[T_{S}] = 1 + \sum_{j}P_{oj}\textbf{E}_{j}[T_{S}]
\end{equation}

Where $P$ denotes the transition probability matrix (for a simple random walk on a graph, the row-normalized adjacency matrix), $S$ is the subset of nodes we are interested in, and $o$ is an arbitrary node we start from. Of course, if $o$ is a member of $S$ then the hitting time is zero. How does this compare to a continuous-time Markov chain? Can I still use the same expression to get a hold of the hitting time?
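Concretely, this equation amounts to a linear system: set $u = 0$ on $S$ and solve $(I - Q)u = \mathbf{1}$, where $Q$ is $P$ restricted to the states outside $S$. A minimal sketch in Python (the 3-state chain and its transition probabilities are made up for illustration):

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def hitting_times(P, S):
    """Expected number of steps to hit the set S, one value per state."""
    outside = [i for i in range(len(P)) if i not in S]
    # (I - Q) u = 1 on the states outside S; u = 0 on S itself.
    A = [[(1.0 if i == j else 0.0) - P[i][j] for j in outside] for i in outside]
    u_out = solve(A, [1.0] * len(outside))
    u = [0.0] * len(P)
    for i, ui in zip(outside, u_out):
        u[i] = ui
    return u

# Toy transition matrix; state 2 is absorbing and S = {2}.
P = [[0.5, 0.5, 0.0],
     [0.25, 0.5, 0.25],
     [0.0, 0.0, 1.0]]
print(hitting_times(P, {2}))  # [8.0, 6.0, 0.0]
```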

Best Answer

The correspondence between the two cases is to consider the generator equation:

$$(Lu)(x)=-1 \quad x \not \in A \\ u(x)=0 \quad x \in A$$

where $A$ is the set you're talking about hitting and $u$ is the expected time to hit $A$ started at $x$. In the discrete time case, $L=P-I$ where $P$ is the transition probability matrix. In the continuous time case, $L$ is the generator matrix, which is really the basic object for describing a continuous time discrete space Markov chain. If you are not familiar, for $i \neq j$, $L_{ij}$ is the rate of transitions from $i$ to $j$, and $L_{ii}=-\sum_{j \neq i} L_{ij}$.
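The generator equation can be solved numerically the same way as in the discrete case: restrict $L$ to the states outside $A$ and solve $L'u = -\mathbf{1}$. A minimal sketch, where the 3-state generator (state 2 absorbing) is made up for illustration:

```python
def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def hitting_times_ct(L, A):
    """Expected time to hit the set A for the chain with generator L."""
    outside = [i for i in range(len(L)) if i not in A]
    # Restrict L to the states outside A and solve (L u)(x) = -1 there.
    Lr = [[L[i][j] for j in outside] for i in outside]
    u_out = solve(Lr, [-1.0] * len(outside))
    u = [0.0] * len(L)
    for i, ui in zip(outside, u_out):
        u[i] = ui
    return u

# Generator: state 0 jumps to 1 at rate 1; state 1 jumps to 0 and 2 at
# rate 1/2 each; state 2 is absorbing. Each row sums to zero.
L = [[-1.0, 1.0, 0.0],
     [0.5, -1.0, 0.5],
     [0.0, 0.0, 0.0]]
print(hitting_times_ct(L, {2}))  # [4.0, 3.0, 0.0]
```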

Intuitively, the continuous version is the same as the discrete version, but instead of conditioning on the state one time step into the future, you condition on the state an arbitrarily small amount of time into the future, using the identities

$$P[X_{t+h}=i \mid X_t=i]=1+L_{ii} h + o(h) \\ P[X_{t+h}=j \mid X_t=i]=L_{ij} h+o(h).$$
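These expansions can be checked numerically: the transition probabilities over a time $h$ are the entries of $e^{hL}$, which agree with $\delta_{ij} + L_{ij}h$ up to an $O(h^2)$ remainder. A minimal sketch with a made-up two-state generator; `expm` here is a truncated Taylor series, adequate for the tiny matrix used:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(A, terms=20):
    """Matrix exponential via a truncated Taylor series (small matrices only)."""
    n = len(A)
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[v / k for v in row] for row in matmul(term, A)]
        result = [[result[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return result

# Rates: 0 -> 1 at rate 2, 1 -> 0 at rate 1; diagonal makes rows sum to zero.
L = [[-2.0, 2.0], [1.0, -1.0]]
h = 1e-3
Ph = expm([[h * v for v in row] for row in L])  # transition matrix over time h
# The deviation from the first-order expansion delta_ij + h*L_ij is O(h^2).
err = max(abs(Ph[i][j] - ((1.0 if i == j else 0.0) + h * L[i][j]))
          for i in range(2) for j in range(2))
print(err < 10 * h * h)  # True
```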
