Density of BM conditioned on start and end (with drift and diffusion)

brownian-motion, conditional-probability, normal-distribution, probability, stochastic-calculus

I am trying to calculate the conditional density of $X_t$ given $X_{t_1}$ and $X_{t_2}$, for $0<t_1<t<t_2$, where $X_t$ is a BM with drift and scaling given by $\mathrm{d}X_t=a\,\mathrm{d}t+b\,\mathrm{d}W_t$ (so a Brownian bridge with fixed ends).

My approach is to follow the usual derivation for a standard BM. For $0<u<v$ I know that $f_{W_u|W_v}(x|y)=\frac{f_{W_u,W_v}(x,y)}{f_{W_v}(y)}=\frac{f_{W_u,W_v-W_u}(x,y-x)}{f_{W_v}(y)}=\frac{f_{W_u}(x)f_{W_{v-u}}(y-x)}{f_{W_v}(y)}$, where the second equality follows from $W_v=W_u+(W_v-W_u)$ and the third from the independence and stationarity of the increments. I therefore tried to apply this to $X_t$ above as follows.

\begin{align*}
f_{X_t|X_{t_1},X_{t_2}}&=\frac{f_{X_t,X_{t_2}|X_{t_1}}(x,x_{t_2}|x_{t_1})}{f_{X_{t_2}|X_{t_1}}(x_{t_2}|x_{t_1})}\\
&=\frac
{\frac1{\sqrt{2\pi b^2(t-t_1)}}\frac1{\sqrt{2\pi b^2(t_2-t)}}\mathrm{exp}\left(-\frac{[(x-x_{t_1})-a(t-t_1)]^2}{2b^2(t-t_1)}\right)\mathrm{exp}\left(-\frac{[(x_{t_2}-x-x_{t_1})-a(t_2-t-t_1)]^2}{2b^2(t_2-t)}\right)}{\frac1{\sqrt{2\pi b^2(t_2-t_1)}}\mathrm{exp}\left(-\frac{[(x_{t_2}-x_{t_1})-a(t_2-t_1)]^2}{2b^2(t_2-t_1)}\right)}.
\end{align*}

where I have used $f_{X_t|X_{t_1}}(x|x_{t_1})=\frac1{\sqrt{2\pi b^2(t-t_1)}}\exp\left(-\frac{[(x-x_{t_1})-a(t-t_1)]^2}{2b^2(t-t_1)}\right)$. However, this doesn't seem right. I have seen the link here, but the density I have derived doesn't seem to match. Furthermore, substituting $t_1=0$ and $t_2=T$ does not recover the usual Brownian-bridge answer of $\hat{\mu}=(tX_T+(T-t)X_0)/T$ and $\hat{\sigma}^2=b^2t(T-t)/T$. I also tried to use this conditional density to find the probability of crossing a barrier (which I have asked in another question), but got nowhere near the correct answer. Where have I gone wrong in my derivation? Thanks!
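As a sanity check on the standard-BM identity above, here is a sketch of my own (arbitrary test values $u$, $v$, $y$; it uses scipy): the product form should agree with the known bridge law $W_u \mid W_v = y \sim N(uy/v,\, u(v-u)/v)$.

```python
# Sanity check (sketch, arbitrary test values u, v, y): the product form
#   f_{W_u}(x) * f_{W_{v-u}}(y - x) / f_{W_v}(y)
# should agree with the bridge law  W_u | W_v = y  ~  N(u*y/v, u*(v-u)/v).
from scipy.stats import norm

u, v, y = 1.0, 2.0, 0.5
for x in (-1.0, 0.0, 0.3, 1.2):
    product = (norm.pdf(x, 0.0, u**0.5)
               * norm.pdf(y - x, 0.0, (v - u)**0.5)
               / norm.pdf(y, 0.0, v**0.5))
    bridge = norm.pdf(x, u * y / v, (u * (v - u) / v)**0.5)
    print(f"x = {x:+.1f}:  product = {product:.6f}  bridge = {bridge:.6f}")
```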

Edit: in attempting to convert this to a standard BM problem as described in the current answer, I have $f(\cdot)=\frac1b g(\cdot)$, where $g$ is the density of the standard BM $W_t$, although I haven't progressed much (I still get complicated fractions).
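The target answer can also be checked by simulation. A minimal Monte Carlo sketch (all parameter values are arbitrary test inputs of mine; conditioning on $X_T=x_T$ is approximated by keeping samples that land in a small band):

```python
# Monte Carlo sanity check (sketch; a, b, T, t, x_T are arbitrary test values)
# of the target answer: condition on X_T landing in a small band around x_T,
# then compare the sample mean/variance of X_t with
#   mu_hat     = (t*x_T + (T - t)*x_0) / T
#   sigma2_hat = b**2 * t * (T - t) / T
import numpy as np

rng = np.random.default_rng(0)
a, b, x0 = 0.7, 1.3, 0.0
T, t, xT = 2.0, 0.8, 1.0
n, tol = 2_000_000, 0.01

# Only the joint law of (X_t, X_T) matters, so sample the two increments exactly.
Xt = x0 + a * t + b * np.sqrt(t) * rng.standard_normal(n)
XT = Xt + a * (T - t) + b * np.sqrt(T - t) * rng.standard_normal(n)
keep = np.abs(XT - xT) < tol  # crude stand-in for conditioning on X_T = x_T

print("sample mean:", Xt[keep].mean(), " theory:", (t * xT + (T - t) * x0) / T)
print("sample var :", Xt[keep].var(), " theory:", b**2 * t * (T - t) / T)
```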

Best Answer

First let's establish the interpolation property of Brownian Motion.

Theorem. Let $W$ be a standard Brownian Motion, and $0<t_1 < t < t_2$. Then the conditional distribution of $W_t$ given $W_{t_1}=x_1$ and $W_{t_2}=x_2$ is $N(\mu, \sigma^2)$ with $\mu = x_1 + \frac{t-t_1}{t_2-t_1}(x_2-x_1)$ and $\sigma^2 = \frac{(t_2-t)(t-t_1)}{t_2-t_1}$.

Note for intuition: $\mu$ is the linear interpolation of the values $(t_1, x_1), (t_2, x_2)$ at time $t$.

Proof. Let $p(t; x,y) := \frac{1}{\sqrt{2\pi t}}e^{-(x-y)^2/2t}$ be the Gaussian kernel. By the independence of increments, $$ P(W_{t_1} \in dx, W_{t} \in dy, W_{t_2} \in dz) = p(t_1; 0, x)\,p(t-t_1; x, y)\,p(t_2-t; y, z)\,dx\,dy\,dz. $$ Dividing the density above by the density of $$ P(W_{t_1} \in dx, W_{t_2} \in dz) = p(t_1; 0, x)\, p(t_2-t_1; x, z)\,dx\,dz, $$ we obtain the conditional density $$ P(W_{t} \in dy \mid W_{t_1} = x, W_{t_2}=z) = \frac{p(t-t_1; x, y)\,p(t_2-t; y, z)}{ p(t_2-t_1; x, z)}\,dy, $$ so all that remains is to simplify this expression. I will try to go through the calculation without too much magic so you can see where the $\mu$ and $\sigma^2$ formulas come from.

First let's handle the normalization constant: $$ \frac{\frac{1}{\sqrt{2\pi(t-t_1)}} \frac{1}{\sqrt{2\pi(t_2-t)}}}{\frac{1}{\sqrt{2\pi(t_2-t_1)}}} = \frac{1}{\sqrt{2\pi \frac{(t-t_1)(t_2-t)}{t_2-t_1}}} = \frac{1}{\sqrt{2\pi\sigma^2}}. $$

Then let's look at the exponentials. This would be a huge mess, except we will introduce a new variable to simplify things considerably. Since $t_1 < t < t_2$, we can write $t$ as a convex combination of $t_1, t_2$, say $t = \alpha t_1 + (1-\alpha)t_2$ with $\alpha \in (0,1)$. Then $t_2-t = (t_2-t_1)\alpha$ and $t - t_1 = (t_2-t_1)(1-\alpha)$, so the exponentials combine as $$\begin{align} e^{-\frac12\left(\frac{(x-y)^2}{t-t_1} + \frac{(y-z)^2}{t_2-t} - \frac{(x-z)^2}{t_2-t_1}\right)} &= e^{-\frac{1}{2(t_2-t_1)\alpha(1-\alpha)}\left((x-y)^2\alpha + (y-z)^2(1-\alpha) - \alpha(1-\alpha)(x-z)^2\right)} \\ &= e^{-\frac{1}{2(t_2-t_1)\alpha(1-\alpha)}\left(y^2 - 2y(\alpha x + (1-\alpha)z) + (\alpha x + (1-\alpha)z)^2\right)} \\ &= e^{-\frac{1}{2(t_2-t_1)\alpha(1-\alpha)}\left(y-(\alpha x + (1-\alpha)z)\right)^2} \\ &= e^{-\frac{1}{2\sigma^2}(y-\mu)^2}, \end{align} $$ completing the proof: here $\alpha = \frac{t_2-t}{t_2-t_1}$, so $\alpha x + (1-\alpha)z = \mu$ and $(t_2-t_1)\alpha(1-\alpha) = \sigma^2$.
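For what it's worth, this theorem is exactly the update used in Brownian-bridge sampling: given the values at $t_1$ and $t_2$, an interior value can be drawn directly from $N(\mu, \sigma^2)$. A minimal sketch (the function name and test values are my own):

```python
# Illustration (sketch; names and test values are mine) of the theorem in use:
# given W at t1 and t2, an interior value W_t is drawn directly from
# N(mu, sigma^2) -- the usual Brownian-bridge sampling step.
import numpy as np

def sample_interior(rng, t1, w1, t2, w2, t):
    """Draw W_t | W_{t1} = w1, W_{t2} = w2 for a standard Brownian motion."""
    alpha = (t2 - t) / (t2 - t1)          # so that t = alpha*t1 + (1-alpha)*t2
    mu = alpha * w1 + (1 - alpha) * w2    # linear interpolation of the endpoints
    var = (t2 - t) * (t - t1) / (t2 - t1)
    return mu + np.sqrt(var) * rng.standard_normal()

rng = np.random.default_rng(1)
print(sample_interior(rng, t1=0.5, w1=0.2, t2=2.0, w2=-0.4, t=1.0))
```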

Now we move on to your question.

Note that because $a$ and $b\neq 0$ are constants, knowing $X$ is the same as knowing $W$: we have $X_t = at + bW_t$ (taking $X_0=0$; a nonzero start only shifts the conditioning values) and $W_t = \frac{1}{b}(X_t-at)$. Thus conditioning on $X_{t_1}=x_1, X_{t_2}=x_2$ is the same as conditioning on $W_{t_1} = \frac{1}{b}(x_1-at_1)$, $W_{t_2} = \frac{1}{b}(x_2-at_2)$, and the conditional distribution of $X_t$ is then the conditional distribution of $W_t$ from the theorem, pushed through the change of variables. Concretely, for fixed $t$ the map $g_t(w) := at + bw$ with inverse $g_t^{-1}(x) = \frac{1}{b}(x-at)$ satisfies $g_t(W_t) = X_t$ and $g_t^{-1}(X_t) = W_t$, so

$$ \begin{align} &P(X_t \in dy \mid X_{t_1}=x_1, X_{t_2}=x_2) \\ &= P(g_t(W_t) \in dy \mid W_{t_1} = \tfrac{1}{b}(x_1-at_1), W_{t_2} = \tfrac{1}{b}(x_2-at_2)) \\ &=|(g_t^{-1})'(y)|\,P(W_t \in d(g_t^{-1}(y)) \mid W_{t_1} = \tfrac{1}{b}(x_1-at_1), W_{t_2} = \tfrac{1}{b}(x_2-at_2)) \\ &=\frac{1}{b}\,P(W_t \in d(g_t^{-1}(y)) \mid W_{t_1} = \tfrac{1}{b}(x_1-at_1), W_{t_2} = \tfrac{1}{b}(x_2-at_2)) \\ &= \frac{1}{b}\frac{1}{\sqrt{2\pi \sigma^2}}\, e^{-(g_t^{-1}(y)-\mu)^2/2\sigma^2} \end{align} $$ where $\mu = \frac{1}{b}(x_1-at_1) + \frac{t-t_1}{t_2-t_1}\left( \frac{1}{b}(x_2-at_2) - \frac{1}{b}(x_1-at_1)\right)$ and $\sigma^2 = \frac{(t_2-t)(t-t_1)}{t_2-t_1}$. You can simplify this further if you like; I'm not sure how much complexity it removes.
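Carrying the simplification through (my own working, using only the formulas above), the conditional law of $X_t = at + bW_t$ is $N(at+b\mu,\, b^2\sigma^2)$, and the drift cancels from the mean:

$$ at + b\mu = at + (x_1-at_1) + \frac{t-t_1}{t_2-t_1}\big[(x_2-x_1) - a(t_2-t_1)\big] = x_1 + \frac{t-t_1}{t_2-t_1}(x_2-x_1), $$

so

$$ X_t \mid X_{t_1}=x_1,\, X_{t_2}=x_2 \;\sim\; N\!\left(x_1 + \frac{t-t_1}{t_2-t_1}(x_2-x_1),\;\; b^2\,\frac{(t-t_1)(t_2-t)}{t_2-t_1}\right). $$

Setting $t_1=0$ and $t_2=T$ recovers the $\hat\mu=(tX_T+(T-t)X_0)/T$ and $\hat\sigma^2=b^2t(T-t)/T$ quoted in the question.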
