Intuition for conditioning on a future observation in Brownian motion

brownian-motion, probability-theory, stochastic-processes

I've been learning about Brownian motion and have verified a result that for a Brownian motion process $\{B_t : t \geq 0\}$ with $B_0 = 0$ and $0 \leq s < t$,
$$
B_s | B_t \sim \mathcal{N}\left(\frac{s}{t}B_t, s(1-\frac{s}{t}) \right)
$$

Is there intuition behind why the mean is a proportion of the value of the Brownian motion at the future time $t$, and why specifically this expression? My rough intuition, just looking at the expression, is that as $s \to t$ the conditional mean must approach $B_t$, but why in this linear fashion? Wouldn't $(2 - \frac{s}{t})B_t$ work just as well?

Best Answer

I don't know which proof you followed, but maybe it is instructive to proceed as follows. Rewrite:

$$B_s = \frac{s}{t}B_t + (B_s - \frac{s}{t}B_t),$$

where the term in brackets should be understood as a Brownian motion "conditioned to return to $0$ at time $t$" (a Brownian bridge of duration $t$, if you have seen it). Now, it seems intuitive that this term is independent of $B_t$, since it comes back to $0$ at time $t$ anyway, and indeed you can check that

$$\mathbb{E}[(B_s - \frac{s}{t}B_t)B_t]=0$$

(recall that for jointly Gaussian random variables, zero covariance is enough for independence). Furthermore, $B_s - \frac{s}{t}B_t$ follows the law $\mathcal{N}(0,s(1-\frac{s}{t}))$. This allows you to conclude that the conditional law of $B_s$ given $B_t$ is the one you wrote: in the decomposition above, $\frac{s}{t}B_t$ provides the mean of the Gaussian and $B_s - \frac{s}{t}B_t$ its variance.
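Both claims (zero covariance with $B_t$, and the law $\mathcal{N}(0, s(1-\frac{s}{t}))$ of the bridge-like term) are easy to check by simulation. A minimal sketch, where the values $s = 0.3$, $t = 1.0$ and the sample size are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
s, t, n = 0.3, 1.0, 200_000

# Sample (B_s, B_t) jointly from independent Gaussian increments:
# B_s ~ N(0, s), and B_t - B_s ~ N(0, t - s) independent of B_s.
B_s = rng.normal(0.0, np.sqrt(s), n)
B_t = B_s + rng.normal(0.0, np.sqrt(t - s), n)

# The bridge-like term of the decomposition.
X = B_s - (s / t) * B_t

print(np.mean(X * B_t))  # ≈ 0: uncorrelated, hence independent (jointly Gaussian)
print(np.var(X))         # ≈ s * (1 - s/t) = 0.21
```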

The linear term appearing in the mean and the variance simply reflects that to bring the Brownian motion back to $0$ at time $t$ (thereby making it independent of $B_t$), it suffices to apply a linear transformation.
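One way to see the linearity concretely: for jointly Gaussian variables, the conditional mean $\mathbb{E}[B_s \mid B_t]$ coincides with the best *linear* predictor of $B_s$ from $B_t$, so a least-squares regression of simulated $B_s$ on $B_t$ should recover the slope $s/t$, and the residual should carry the conditional variance. A sketch along those lines (again with arbitrary $s = 0.3$, $t = 1.0$):

```python
import numpy as np

rng = np.random.default_rng(1)
s, t, n = 0.3, 1.0, 200_000

# Sample (B_s, B_t) jointly from independent Gaussian increments.
B_s = rng.normal(0.0, np.sqrt(s), n)
B_t = B_s + rng.normal(0.0, np.sqrt(t - s), n)

# For jointly Gaussian variables, E[B_s | B_t] is linear in B_t, with
# slope Cov(B_s, B_t) / Var(B_t); empirically this should be s/t.
slope = np.cov(B_s, B_t)[0, 1] / np.var(B_t)
print(slope)  # ≈ s/t = 0.3

# The residual's variance estimates Var(B_s | B_t) = s * (1 - s/t).
resid = B_s - slope * B_t
print(np.var(resid))  # ≈ 0.21
```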