The conditional distribution of $W_{t+h}$ given $\mathcal F_t$ is a random distribution, that is, a map $M:\Omega\to\mathcal M_1^+(\mathbb R,\mathcal B(\mathbb R))$, measurable with respect to $\mathcal F_t$ and such that, for every bounded measurable function $u$,
$$
\mathrm E(u(W_{t+h})\mid\mathcal F_t)=\int_{\mathbb R} u\mathrm dM\quad\text{almost surely}.
$$
The OP explains why
$$
\mathrm E(u(W_{t+h})\mid\mathcal F_t)=\mathrm E(u(W_{t}+Z_h)\mid W_t)\quad\text{almost surely},
$$
where $Z_h$ is centered normal with variance $h$ and independent of $W_t$. Thus, for every bounded measurable function $u$,
$$
\int_{\mathbb R} u\mathrm dM=\int_{\mathbb R} u(W_t+z)\mathrm d\gamma_h(z)\quad\text{almost surely},
$$
where $\gamma_h$ is the centered normal distribution with variance $h$. This proves that, for $\mathrm P$-almost every $\omega$, the distribution $M(\omega)$ is normal with mean $W_t(\omega)$ and variance $h$.
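Not part of the argument, but the conclusion is easy to test numerically: sample many pairs $(W_t, W_{t+h})$, keep only the samples where $W_t$ lands in a thin slab around a fixed value $w_0$, and compare the conditional sample mean and variance of $W_{t+h}$ with $w_0$ and $h$. A minimal Python sketch (the particular values of $t$, $h$, $w_0$ are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)
t, h, N = 1.0, 0.25, 2_000_000   # illustrative values

# W_t and the increment Z_h = W_{t+h} - W_t are independent Gaussians.
W_t = np.sqrt(t) * rng.standard_normal(N)
W_th = W_t + np.sqrt(h) * rng.standard_normal(N)

# Condition on W_t landing in a thin slab around w0: the conditional
# law of W_{t+h} should then be approximately N(w0, h).
w0 = 0.7
cond = W_th[np.abs(W_t - w0) < 0.01]
print(cond.mean(), cond.var())   # ≈ w0 = 0.7 and ≈ h = 0.25
```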
Now, my suggestion would be to set what is written above aside and instead to find, read, and meditate on the excellent (and most congenial) presentation of this material (and much more) given in the little blue book *Probability with Martingales* by David Williams.
Recall the following characterization of (one-dimensional) Brownian motion:
A stochastic process $(W_t)_{t \geq 0}$ is a Brownian motion if and only if
- $(W_t)_t$ has continuous sample paths.
- $(W_t)_t$ is a Gaussian process with mean $0$ and covariance $\mathbb{E}(W_s W_t) = \min\{s,t\}$ for all $s,t \geq 0$.
Here $W_t := B_t - \int_0^t \frac{B_1-B_s}{1-s} \, ds$ for $t \in [0,1)$, where $(B_t)_{t \geq 0}$ is a given Brownian motion. As $(W_t)_t$ obviously has continuous sample paths, we only have to check the second property.
Since $(B_t)_{t \geq 0}$ is a Brownian motion, it is in particular a Gaussian process, and so
$$B_t - \sum_{j=0}^{n-1} (B_1-B(t_j)) \frac{1}{1-t_j} (t_{j+1}-t_j)$$
is Gaussian for each $n \in \mathbb{N}$ where $t_j := \frac{t}{n} j$. If we let $n \to \infty$, then we get
$$W_t = \lim_{n \to \infty} \left( B_t - \sum_{j=0}^{n-1} (B_1-B(t_j)) \frac{1}{1-t_j} (t_{j+1}-t_j) \right)$$
is Gaussian as a limit of Gaussian random variables. Since this argument applies in exactly the same way to the joint distributions $(W_{s_1},\ldots,W_{s_m})$ for $s_j \geq 0$, we get that $(W_t)_{t \geq 0}$ is a Gaussian process. It remains to check the mean and covariance.
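As an aside, the Riemann sums in the display above can be watched converging along a single simulated path. The following Python sketch (illustrative only; the grid size $M$, the time $t$, and the function name are choices of mine) discretizes one Brownian path on a fine grid over $[0,1]$ and evaluates the sum for increasing $n$:

```python
import numpy as np

rng = np.random.default_rng(1)

# One Brownian path B on a uniform grid over [0, 1] with M steps.
M = 2**20
B = np.concatenate(([0.0], np.cumsum(np.sqrt(1.0 / M) * rng.standard_normal(M))))
B1 = B[-1]
t = 0.5  # illustrative choice; any t < 1 works

def riemann_sum(n):
    # B_t - sum_j (B_1 - B(t_j)) / (1 - t_j) * (t_{j+1} - t_j),  t_j = j*t/n
    tj = t * np.arange(n) / n
    idx = np.rint(tj * M).astype(int)          # nearest grid point to t_j
    return B[int(t * M)] - np.sum((B1 - B[idx]) / (1.0 - tj) * (t / n))

for n in (10, 100, 1000, 10000):
    print(n, riemann_sum(n))    # the values stabilize as n grows
```

Since the integrand is continuous on $[0,t]$ for $t < 1$, the Riemann sums converge pathwise, which is what the printed values illustrate.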
By Fubini's theorem, we have
$$\begin{align*} \mathbb{E}(W_t) &= \underbrace{\mathbb{E}(B_t)}_{0} - \mathbb{E} \left( \int_0^t\frac{B_1-B_s}{1-s} \, ds \right) = - \int_0^t \underbrace{\mathbb{E}(B_1-B_s)}_{0} \frac{1}{1-s} \, ds = 0. \end{align*}$$
Now fix $r \leq t$.
$$\begin{align*} \mathbb{E}(W_r W_t) &= \mathbb{E}(B_t B_r)- \mathbb{E} \left( B_t \int_0^r \frac{B_1-B_s}{1-s} \, ds \right) - \mathbb{E} \left( B_r \int_0^t \frac{B_1-B_s}{1-s} \, ds \right) \\ &\quad + \mathbb{E} \left( \int_0^t \int_0^r \frac{B_1-B_u}{1-u} \frac{B_1-B_v}{1-v} \, du \, dv \right) \\ &=: \mathbb{E}(B_r B_t) +I_2+I_3+I_4 \end{align*}$$
If we can show that $$I_2+I_3+I_4 = 0$$ we are done. Using $\mathbb{E}(B_u B_v) = \min\{u,v\}$ for any $u,v \in [0,1]$ and Fubini's theorem, we find
$$ \begin{align*} I_2 &= -\int_0^r \frac{\mathbb{E}(B_1 B_t-B_tB_s)}{1-s} \, ds = -\int_0^r \frac{t-s}{1-s} \, ds \\ &= t \log (1-r) - r - \log(1-r) \end{align*}$$
as $r \leq t$. Similarly,
$$\begin{align*} I_3 &= -\int_0^t \frac{r- \min\{r,s\}}{1-s} \, ds = -\int_0^r \frac{r-s}{1-s} \, ds - \int_r^t \underbrace{\frac{r-r}{1-s}}_{0} \, ds = -\int_0^r \frac{r-s}{1-s} \, ds \\ &= -(1-\log(1-r)) r - \log(1-r) \end{align*}$$
and, finally,
$$\begin{align*} I_4 &= \int_0^t \int_0^r \frac{1-v-u+ \min\{u,v\}}{(1-u)(1-v)} \, du \, dv \\ &= \int_r^t \int_0^r \frac{1-v-u+ u}{(1-u)(1-v)} \, du \, dv + \int_0^r \int_0^r \frac{1-v-u+ \min\{u,v\}}{(1-u)(1-v)} \, du \, dv \\ &= (t-r) \int_0^r \frac{1}{1-u} \, du + 2 \int_0^r \int_v^r \frac{1}{1-v} \, du \, dv\\ &= -(t-r) \log(1-r) + 2 ((1-\log(1-r))r + \log(1-r)) \end{align*}$$
where we have used in the penultimate equation that
$$\begin{align*} \int_0^r \int_0^r \frac{1-v-u+ \min\{u,v\}}{(1-u)(1-v)} \, du \, dv &= \int_0^r \int_0^v \frac{1}{1-u} \, du \, dv + \int_0^r \int_v^r \frac{1}{1-v} \, du \, dv \\ &= \int_0^r \int_u^r \frac{1}{1-u} \, dv \, du + \int_0^r \int_v^r \frac{1}{1-v} \, du \, dv \\ &= 2 \int_0^r \int_v^r \frac{1}{1-v} \, du \, dv, \end{align*}$$
where the first term was rewritten using Fubini's theorem and then renaming $u$ to $v$.
Adding everything up, we get $I_2+I_3+I_4 = 0$, which finishes the proof.
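For the skeptical reader, the final bookkeeping can be delegated to a computer algebra system. A short sympy check that the three closed forms above (with the minus signs coming from the definition $\mathbb{E}(W_r W_t) = \mathbb{E}(B_r B_t) + I_2 + I_3 + I_4$) indeed sum to zero:

```python
import sympy as sp

r, t = sp.symbols('r t', positive=True)
L = sp.log(1 - r)

# Closed forms computed above, valid for 0 < r <= t < 1.
I2 = t*L - r - L                      # = -int_0^r (t-s)/(1-s) ds
I3 = -((1 - L)*r + L)                 # = -int_0^r (r-s)/(1-s) ds
I4 = -(t - r)*L + 2*((1 - L)*r + L)

print(sp.expand(I2 + I3 + I4))        # → 0
```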
This is called a Brownian bridge.
To analyze the distribution of the Brownian bridge at an intermediate time $s \in (t_1, t_2)$, consider the prior probability density to go from $A$ at time $t_1$ to $x$ at time $s$ and then to $B$ at time $t_2$, which is $f_{s-t_1}(x-A) f_{t_2-s}(B-x)$, where $f_t$ is the $N(0,t)$ PDF, i.e. $f_t(z)=\frac{1}{\sqrt{2\pi t}} e^{-z^2/2t}$. This uses the independence of the increments.
Then consider the prior probability density to go from $A$ to $B$ overall, which is $f_{t_2-t_1}(B-A)$.
So the conditional PDF is the ratio of these:
$$\frac{\frac{1}{\sqrt{2\pi(s-t_1)}} e^{-\frac{(x-A)^2}{2(s-t_1)}} \frac{1}{\sqrt{2\pi(t_2-s)}} e^{-\frac{(x-B)^2}{2(t_2-s)}}}{\frac{1}{\sqrt{2\pi(t_2-t_1)}} e^{-\frac{(B-A)^2}{2(t_2-t_1)}}}.$$
It's an ugly simplification job to see that this gives a normal distribution with the mean and variance you expect.
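The ugly simplification can also be delegated to a CAS. Writing $a := s - t_1$ and $b := t_2 - s$, the claim is that the ratio above is the $N\!\left(A + \frac{a}{a+b}(B-A),\ \frac{ab}{a+b}\right)$ density. A sympy sketch (the abbreviations $a$, $b$, `pre` are mine) checking the exponent and the normalizing constant separately:

```python
import sympy as sp

x, A, B = sp.symbols('x A B', real=True)
a, b = sp.symbols('a b', positive=True)    # a = s - t1, b = t2 - s

mu = A + a/(a + b)*(B - A)                 # conjectured conditional mean
var = a*b/(a + b)                          # conjectured conditional variance

# Exponents of the ratio vs. exponent of the N(mu, var) density:
lhs = -(x - A)**2/(2*a) - (B - x)**2/(2*b) + (B - A)**2/(2*(a + b))
rhs = -(x - mu)**2/(2*var)
print(sp.simplify(lhs - rhs))              # → 0

# Prefactors: sqrt(2*pi*(a+b)) / (sqrt(2*pi*a)*sqrt(2*pi*b)) should be
# 1/sqrt(2*pi*var); check the squares, which are rational expressions.
pre = sp.sqrt(2*sp.pi*(a + b)) / (sp.sqrt(2*sp.pi*a)*sp.sqrt(2*sp.pi*b))
print(sp.simplify(pre**2 * (2*sp.pi*var)))  # → 1
```

So the conditional PDF is indeed normal with mean $A + \frac{s-t_1}{t_2-t_1}(B-A)$ and variance $\frac{(s-t_1)(t_2-s)}{t_2-t_1}$.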