Probability distribution of time-integral of a two-state continuous-time Markov process

markov-chains, markov-process, probability, probability-distributions, stochastic-processes

I have a two-state continuous-time Markov process $X_t$. It takes values in $\{0,1\}$ and switches between these two states with transition rate $\lambda$. I would like to find the probability density function of $Y_t\equiv\int_0^t X_s\,ds$ for $t>0$, given the initial distribution of $X_0$ ($p_0\equiv P(X_0=0)$ and $p_1\equiv P(X_0=1)$).

Things I know how to solve:

  • I know how to compute the probability that $X_t=0$. It decays exponentially (in $t$) to $1/2$ at rate $2\lambda$.
  • I know that the waiting times between switches are distributed as $Exp(\lambda)$.
  • I know the number of switches until $t$ is Poisson distributed with parameter $\lambda t$.

If the waiting times were independent of the number of switches, I could easily solve the problem, but they are not: knowing how many transitions happened before $t$ changes the distribution of the waiting times between transitions.
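For what it's worth, here is how I simulate the process (a plain-Python sketch; the function names are mine). It at least reproduces the atom $P(Y_t=0\mid X_0=0)=e^{-\lambda t}$, i.e. the probability that no switch occurs before $t$:

```python
import math
import random

def sample_Y(t, lam, x0, rng):
    """One draw of Y_t (time spent in state 1 up to t) for the symmetric chain."""
    s, x, y = 0.0, x0, 0.0
    while True:
        d = rng.expovariate(lam)      # Exp(lam) waiting time until the next switch
        if s + d >= t:                # no further switch before t
            return y + (t - s) * x
        y += d * x                    # accumulate time spent in state 1
        s += d
        x = 1 - x                     # switch state

rng = random.Random(0)
lam, t = 1.0, 2.0
samples = [sample_Y(t, lam, 0, rng) for _ in range(50_000)]
p_atom = sum(1 for y in samples if y == 0.0) / len(samples)
print(p_atom, math.exp(-lam * t))    # these two should be close
```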

How do I proceed?

Best Answer

The problem gets simpler if one replaces the fixed time $t$ by a random time $T_\theta$ which is independent of the Markov process and Exponential with parameter $\theta$.

Going back to the distribution of $Y_t$ requires inverting a Laplace transform, which is sometimes difficult but is possible in the present situation. Observe that for every bounded Borel function $h : \mathbb{R} \to \mathbb{R}$, $$E[h(Y_{T_\theta})] = \int_0^\infty E[h(Y_t)] \theta e^{-\theta t} \mathrm{d}t.$$

I call $S_1 < S_2 < \ldots$ the jump times, and I set $S_0=0$, $D_1=S_1-S_0$, $D_2=S_2-S_1$,... and $N_t = \sup\{n \ge 0 : S_n \le t\}$. The distribution of $N_t$ is Poisson($\lambda t$), and conditionally on $N_t=n$ and $X_0=x$, the density of $(S_1,\ldots,S_n)$ is $$(s_1,\ldots,s_n) \mapsto \frac{n!}{t^n}1_{0<s_1<\ldots<s_n<t}.$$
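This order-statistics property is easy to check numerically; here is a small sketch (plain Python, illustrative names), using the fact that conditionally on $N_t=n$ the expectation of $S_1$ must be $t/(n+1)$:

```python
import random

def jump_times(t, lam, rng):
    """Jump times of a rate-lam Poisson process restricted to [0, t]."""
    times, s = [], 0.0
    while True:
        s += rng.expovariate(lam)
        if s > t:
            return times
        times.append(s)

rng = random.Random(1)
lam, t, n = 1.0, 1.0, 2               # condition on exactly n = 2 jumps
first = [jt[0] for _ in range(200_000)
         if len(jt := jump_times(t, lam, rng)) == n]
print(sum(first) / len(first), t / (n + 1))   # conditional mean of S_1
```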

I omit the index $\theta$ in what follows.

Let $h : \mathbb{R}^{n+1} \to \mathbb{R}$ be any bounded Borel function and $Z = h(D_1,\ldots,D_n,T-S_n)$. \begin{eqnarray*} E\big[Z1_{N_T = n}\big] &=& \int_0^\infty E\big[h(D_1,\ldots,D_n,t-S_n)1_{N_t = n}\big] \theta e^{-\theta t}\, \mathrm{d}t \\ &=& \int_0^\infty E\big[h(S_1,S_2-S_1,\ldots,S_n-S_{n-1},t-S_n)1_{N_t=n}\big] \theta e^{-\theta t}\, \mathrm{d}t \\ &=& \int_0^\infty \frac{(\lambda t)^n}{n!}e^{-\lambda t} \Big( \int_{\mathbb{R}^n} h(s_1,s_2-s_1,\ldots,s_n-s_{n-1},t-s_n) \frac{n!}{t^n} 1_{0<s_1<\cdots<s_n<t}\, \mathrm{d}s_1 \cdots \mathrm{d}s_n \Big) \theta e^{-\theta t}\,\mathrm{d}t \\ &=& \int_{\mathbb{R}^{n+1}} \theta \lambda^n e^{-(\lambda+\theta)t} h(s_1,s_2-s_1,\ldots,s_n-s_{n-1},t-s_n) 1_{0<s_1<\cdots<s_n<t}\, \mathrm{d}s_1 \cdots \mathrm{d}s_n\, \mathrm{d}t \\ &=& \int_{\mathbb{R}^{n+1}} \theta \lambda^n e^{-(\lambda+\theta)(t_1+\cdots+t_n+r)} h(t_1,\ldots,t_n,r) 1_{t_1>0,\ldots,t_n>0,r>0}\, \mathrm{d}t_1 \cdots \mathrm{d}t_n\, \mathrm{d}r, \end{eqnarray*} where the last step uses the change of variables $t_1=s_1$, $t_i=s_i-s_{i-1}$ for $2 \le i \le n$, $r=t-s_n$. Hence $$P[N_T=n] = \frac{\theta \lambda^n}{(\lambda+\theta)^{n+1}},$$ and conditionally on $[N_T=n;X_0=x]$, the random variables $D_1,\ldots,D_n,T-S_n$ are independent with distribution Exponential($\lambda + \theta$).
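As a numerical sanity check of $P[N_T=n]=\theta\lambda^n/(\lambda+\theta)^{n+1}$, one can simulate the jump count up to an independent Exponential($\theta$) horizon (a sketch with arbitrary parameter values):

```python
import random

rng = random.Random(2)
lam, theta = 1.0, 2.0                 # arbitrary illustrative rates
N = 100_000
counts = {}
for _ in range(N):
    T = rng.expovariate(theta)        # independent exponential horizon
    n, s = 0, rng.expovariate(lam)
    while s <= T:                     # count jumps of the rate-lam process before T
        n += 1
        s += rng.expovariate(lam)
    counts[n] = counts.get(n, 0) + 1
for n in range(3):
    pred = theta * lam**n / (lam + theta)**(n + 1)
    print(n, counts.get(n, 0) / N, pred)
```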

This enables us to compute the distribution of $Y_T$.

Conditionally on $[N_T=n;X_0=x]$, $Y_T$ is the sum of $n_x := \lfloor (n+1+x)/2 \rfloor$ independent random variables with distribution Exponential($\lambda + \theta$); explicitly, $n_0 = \lfloor (n+1)/2 \rfloor$ and $n_1 = \lfloor (n+2)/2 \rfloor$. Note that $n_0$ and $n_1$ depend on $n$ and that $n_0+n_1=n+1$.
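This conditional structure can also be checked by simulation (a sketch with arbitrary rates, starting from $X_0=0$): given $N_T=n$, the conditional mean of $Y_T$ should be $n_0/(\lambda+\theta)$ with $n_0=\lfloor (n+1)/2\rfloor$.

```python
import random

rng = random.Random(3)
lam, theta = 1.0, 2.0                 # arbitrary illustrative rates
sums, cnts = {}, {}
for _ in range(200_000):
    T = rng.expovariate(theta)        # independent exponential horizon
    s, x, y, n = 0.0, 0, 0.0, 0       # start in state 0
    while True:
        d = rng.expovariate(lam)
        if s + d >= T:
            y += (T - s) * x          # time in state 1 during the last interval
            break
        y += d * x
        s += d
        x = 1 - x
        n += 1
    sums[n] = sums.get(n, 0.0) + y
    cnts[n] = cnts.get(n, 0) + 1
for n in (1, 2, 3):
    print(n, sums[n] / cnts[n], ((n + 1) // 2) / (lam + theta))
```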

The distribution of $Y_T$ given $X_0=x$ is $$\sum_{n=0}^{+\infty}\frac{\theta\lambda^n}{(\lambda+\theta)^{n+1}}\, \Gamma(n_x,\lambda+\theta),$$ with the convention that $\Gamma(0,\lambda+\theta)=\delta_0$. In particular, the distribution of $Y_T$ given $X_0=0$ is $$\frac{\theta}{\lambda+\theta}\,\mathrm{d}\delta_0(y) + \sum_{n=1}^{+\infty}\frac{\theta\lambda^n}{(n_0-1)!\,(\lambda+\theta)^{n_1}}\, 1_{y>0}\, y^{n_0-1}e^{-(\lambda+\theta) y}\, \mathrm{d}y.$$

Observe that for all $y>0$ \begin{eqnarray*} \frac{\theta\lambda^{n_1}}{(\lambda+\theta)^{n_1}} e^{-\theta y} &=& \theta e^{-\theta y} \int_0^\infty \frac{\lambda^{n_1}}{(n_1-1)!}s^{n_1-1} e^{-(\lambda+\theta) s}\, \mathrm{d}s \\ &=& \int_y^\infty \frac{\lambda^{n_1}}{(n_1-1)!} (t-y)^{n_1-1} e^{-\lambda (t-y)} \theta e^{-\theta t}\,\mathrm{d}t \\ &=& \int_0^\infty 1_{t>y} \frac{\lambda^{n_1}}{(n_1-1)!} (t-y)^{n_1-1} e^{-\lambda (t-y)} \theta e^{-\theta t}\,\mathrm{d}t, \end{eqnarray*} where the second equality comes from the change of variable $s=t-y$. Hence, dividing both sides by $\theta$ and inverting the Laplace transform shows that the distribution of $Y_t$ given $X_0=0$ is $$e^{-\lambda t}\, \mathrm{d}\delta_0(y) + \sum_{n=1}^{+\infty} \lambda^{n}e^{-\lambda t} \frac{1}{(n_0-1)!\,(n_1-1)!}\, 1_{0<y<t}\, y^{n_0-1}(t-y)^{n_1-1}\,\mathrm{d}y,$$ namely $$e^{-\lambda t}\, \mathrm{d}\delta_0(y) + \sum_{n=1}^{+\infty} \frac{(\lambda t)^{n}e^{-\lambda t}}{n!}\, \frac{n!}{(n_0-1)!\,(n_1-1)!}\, 1_{0<y<t}\,\frac{1}{t^n}\, y^{n_0-1}(t-y)^{n_1-1}\,\mathrm{d}y.$$ In other words, conditionally on $[N_t=n;X_0=0]$, $Y_t/t$ follows a Beta distribution with parameters $n_0$ and $n_1$.
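As a final sanity check (plain-Python sketch, names mine): the continuous part of this distribution should have total mass $1-e^{-\lambda t}$, the complement of the atom at $0$:

```python
import math

def density_Yt(y, t, lam, nmax=80):
    """Continuous part of the law of Y_t given X_0 = 0, from the series above."""
    total = 0.0
    for n in range(1, nmax):
        n0 = (n + 1) // 2             # number of state-1 intervals when X_0 = 0
        n1 = (n + 2) // 2             # number of state-0 intervals; n0 + n1 = n + 1
        total += (lam**n * math.exp(-lam * t) * y**(n0 - 1) * (t - y)**(n1 - 1)
                  / (math.factorial(n0 - 1) * math.factorial(n1 - 1)))
    return total

lam, t = 1.0, 2.0
m = 4000                              # midpoint rule on (0, t)
mass = sum(density_Yt((k + 0.5) * t / m, t, lam) for k in range(m)) * (t / m)
print(mass, 1 - math.exp(-lam * t))  # should agree
```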