Recall the following characterization of (one-dimensional) Brownian motion: a stochastic process $(W_t)_{t \geq 0}$ is a Brownian motion if and only if
- $(W_t)_t$ has continuous sample paths.
- $(W_t)_t$ is a Gaussian process with mean $0$ and covariance $\mathbb{E}(W_s W_t) = \min\{s,t\}$ for all $s,t \geq 0$.
Since $(W_t)_t$ obviously has continuous sample paths, we only have to check the second property.
Since $(B_t)_{t \geq 0}$ is a Brownian motion, it is in particular a Gaussian process and so
$$B_t - \sum_{j=0}^{n-1} \frac{B_1-B_{t_j}}{1-t_j} (t_{j+1}-t_j)$$
is Gaussian for each $n \in \mathbb{N}$, where $t_j := \frac{t}{n} j$; note that the sum is a Riemann sum for $\int_0^t \frac{B_1-B_s}{1-s} \, ds$. Letting $n \to \infty$, we get that
$$W_t = \lim_{n \to \infty} \left( B_t - \sum_{j=0}^{n-1} \frac{B_1-B_{t_j}}{1-t_j} (t_{j+1}-t_j) \right)$$
is Gaussian as an almost sure, hence distributional, limit of Gaussian random variables (the Riemann sums converge pathwise by continuity of $s \mapsto B_s$). Since this argument applies in exactly the same way to the joint distributions $(W_{s_1},\ldots,W_{s_m})$, $s_j \geq 0$, we conclude that $(W_t)_{t \geq 0}$ is a Gaussian process. It remains to check the mean and the covariance.
By Fubini's theorem, we have
$$\begin{align*} \mathbb{E}(W_t) &= \underbrace{\mathbb{E}(B_t)}_{0} - \mathbb{E} \left( \int_0^t\frac{B_1-B_s}{1-s} \, ds \right) = - \int_0^t \underbrace{\mathbb{E}(B_1-B_s)}_{0} \frac{1}{1-s} \, ds = 0. \end{align*}$$
Now fix $r \leq t$.
$$\begin{align*} \mathbb{E}(W_r W_t) &= \mathbb{E}(B_t B_r)- \mathbb{E} \left( B_t \int_0^r \frac{B_1-B_s}{1-s} \, ds \right) - \mathbb{E} \left( B_r \int_0^t \frac{B_1-B_s}{1-s} \, ds \right) \\ &\quad + \mathbb{E} \left( \int_0^t \int_0^r \frac{B_1-B_u}{1-u} \frac{B_1-B_v}{1-v} \, du \, dv \right) \\ &=: \mathbb{E}(B_r B_t) +I_2+I_3+I_4 \end{align*}$$
If we can show that $I_2+I_3+I_4 = 0$, we are done. Using $\mathbb{E}(B_u B_v) = \min\{u,v\}$ for any $u,v \in [0,1]$ and Fubini's theorem, we find
$$ \begin{align*} I_2 &= -\int_0^r \frac{\mathbb{E}(B_1 B_t-B_tB_s)}{1-s} \, ds = -\int_0^r \frac{t-s}{1-s} \, ds \\ &= \log (1-r)\, t - r - \log(1-r) \end{align*}$$
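As a quick check of this computation (a sketch assuming sympy is installed; not part of the proof), one can verify the closed form of $\int_0^r \frac{t-s}{1-s}\,ds$ via the fundamental theorem of calculus: the candidate antiderivative $F(r) = -\log(1-r)\,t + r + \log(1-r)$ should satisfy $F'(r) = \frac{t-r}{1-r}$ and $F(0) = 0$.

```python
# Symbolic sanity check (assumes sympy): verify the closed form of
# int_0^r (t - s)/(1 - s) ds by differentiating the candidate antiderivative.
import sympy as sp

r, t = sp.symbols("r t", positive=True)

F = -sp.log(1 - r) * t + r + sp.log(1 - r)

# F'(r) must equal the integrand evaluated at s = r, and F(0) must vanish.
assert sp.simplify(sp.diff(F, r) - (t - r) / (1 - r)) == 0
assert F.subs(r, 0) == 0
print("closed form verified")
```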
as $r \leq t$. Similarly,
$$\begin{align*} I_3 &= -\int_0^t \frac{r- \min\{r,s\}}{1-s} \, ds = -\int_0^r \frac{r-s}{1-s} \, ds - \int_r^t \underbrace{\frac{r-r}{1-s}}_{0} \, ds = -\int_0^r \frac{r-s}{1-s} \, ds \\ &= -(1-\log(1-r)) r - \log(1-r) \end{align*}$$
and, finally,
$$\begin{align*} I_4 &= \int_0^t \int_0^r \frac{1-v-u+ \min\{u,v\}}{(1-u)(1-v)} \, du \, dv \\ &= \int_r^t \int_0^r \frac{1-v-u+ u}{(1-u)(1-v)} \, du \, dv + \int_0^r \int_0^r \frac{1-v-u+ \min\{u,v\}}{(1-u)(1-v)} \, du \, dv \\ &= (t-r) \int_0^r \frac{1}{1-u} \, du + 2 \int_0^r \int_v^r \frac{1}{1-v} \, du \, dv\\ &= -(t-r) \log(1-r) + 2 ((1-\log(1-r))r + \log(1-r)) \end{align*}$$
where we have used in the penultimate equation that
$$\begin{align*} \int_0^r \int_0^r \frac{1-v-u+ \min\{u,v\}}{(1-u)(1-v)} \, du \, dv &= \int_0^r \int_0^v \frac{1}{1-u} \, du \, dv + \int_0^r \int_v^r \frac{1}{1-v} \, du \, dv \\ &= \int_0^r \int_u^r \frac{1}{1-u} \, dv \, du + \int_0^r \int_v^r \frac{1}{1-v} \, du \, dv \\ &= 2 \int_0^r \int_v^r \frac{1}{1-v} \, du \, dv, \end{align*}$$
where the second step applies Fubini's theorem to the first integral and the third step renames $u$ to $v$ there.
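This symmetrization identity is easy to spot-check numerically (a sketch assuming scipy; the value $r = 0.7$ is an arbitrary sample choice):

```python
# Numerical spot check (assumes scipy) of the symmetrization identity for the
# double integral over [0, r]^2, at the arbitrary sample value r = 0.7.
from scipy.integrate import dblquad, quad

r = 0.7

# Left-hand side: full double integral with min(u, v) in the numerator.
# dblquad integrates the first argument (u) innermost.
lhs, _ = dblquad(
    lambda u, v: (1 - v - u + min(u, v)) / ((1 - u) * (1 - v)),
    0, r,  # outer variable v
    0, r,  # inner variable u
)

# Right-hand side: 2 * int_0^r int_v^r du/(1-v) dv = 2 * int_0^r (r-v)/(1-v) dv.
rhs, _ = quad(lambda v: 2 * (r - v) / (1 - v), 0, r)

assert abs(lhs - rhs) < 1e-6
print(lhs, rhs)
```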
Adding everything up, we indeed get $I_2+I_3+I_4 = 0$, which finishes the proof.
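The whole computation can also be cross-checked by simulation. The following is a minimal Monte Carlo sketch (assuming numpy; the grid resolution, sample count, and times $r = 0.3$, $t = 0.6$ are arbitrary choices): it simulates Brownian paths on $[0,1]$, forms $W_t = B_t - \int_0^t \frac{B_1 - B_s}{1-s}\,ds$ via a Riemann sum, and compares the empirical value of $\mathbb{E}(W_r W_t)$ with $\min\{r,t\}$.

```python
# Monte Carlo check (assumes numpy) that E(W_r W_t) = min(r, t) for
# W_t = B_t - int_0^t (B_1 - B_s)/(1 - s) ds.
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps = 20_000, 500
dt = 1.0 / n_steps

# Simulate Brownian paths on [0, 1]: column k approximates B_{k * dt}.
increments = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
B = np.concatenate([np.zeros((n_paths, 1)), np.cumsum(increments, axis=1)], axis=1)
s_grid = np.arange(n_steps + 1) * dt

def W(t):
    k = int(round(t / dt))
    # Left Riemann sum for int_0^t (B_1 - B_s)/(1 - s) ds.
    integrand = (B[:, -1:] - B[:, :k]) / (1.0 - s_grid[:k])
    return B[:, k] - integrand.sum(axis=1) * dt

r, t = 0.3, 0.6
cov = np.mean(W(r) * W(t))  # E(W_r) = E(W_t) = 0, so this estimates the covariance
assert abs(cov - min(r, t)) < 0.05
print(cov)
```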
Fix $s < t$. By symmetry of Brownian motion, we have $$\mathbb{P}(B_s B_t \geq 0) = \mathbb{P}(B_s \geq 0, B_t \geq 0) + \mathbb{P}(-B_s \geq 0, -B_t \geq 0) = 2 \mathbb{P}(B_s \geq 0, B_t \geq 0).$$ Consequently, it suffices to compute the probability on the right-hand side. Conditioning on $\mathcal{F}_s := \sigma(B_r; r \leq s)$, we find from the Markov property that
$$\mathbb{P}(B_s \geq 0, B_t \geq 0)= \mathbb{E}\left( 1_{\{B_s \geq 0\}} \mathbb{P}(B_{t-s}+x \geq 0) \bigg|_{x=B_s}\right). \tag{1}$$
Denote by $\Phi$ the cdf of the standard Gaussian distribution. Since $B_{t-s}$ is Gaussian with mean $0$ and variance $t-s$, we have
$$\mathbb{P}(B_{t-s} +x \geq 0) = \mathbb{P}(B_{t-s} \leq x) = \Phi(x/\sqrt{t-s}),$$
where the first equality uses the symmetry of $B_{t-s}$.
Hence, by $(1)$,
$$\mathbb{P}(B_s \geq 0,B_t \geq 0) = \mathbb{E}(1_{\{B_s \geq 0\}} \Phi(B_s/\sqrt{t-s})).$$
If we denote by $\varphi$ the density of the standard Gaussian distribution and write $B_s = \sqrt{s} Z$ for a standard Gaussian $Z$, then we can write this equivalently as
$$\mathbb{P}(B_s \geq 0,B_t \geq 0) = \int_0^{\infty} \Phi(\sqrt{s} x/\sqrt{t-s}) \varphi(x) \, dx.$$
The latter integral can be calculated explicitly, see e.g. this answer, and we get
$$\mathbb{P}(B_s \geq 0,B_t \geq 0) = \frac{1}{4} + \frac{1}{2\pi} \arctan \sqrt{\frac{s}{t-s}}.$$
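This value of the integral can be confirmed numerically (a spot check assuming scipy; the sample values $s = 0.4$, $t = 1$ are arbitrary):

```python
# Numerical confirmation (assumes scipy) of
#   int_0^inf Phi(sqrt(s) x / sqrt(t - s)) phi(x) dx
#     = 1/4 + arctan(sqrt(s / (t - s))) / (2 pi)
# at the arbitrary sample values s = 0.4, t = 1.0.
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm

s, t = 0.4, 1.0

integral, _ = quad(
    lambda x: norm.cdf(np.sqrt(s) * x / np.sqrt(t - s)) * norm.pdf(x),
    0, np.inf,
)
closed_form = 0.25 + np.arctan(np.sqrt(s / (t - s))) / (2 * np.pi)

assert abs(integral - closed_form) < 1e-6
print(integral, closed_form)
```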
Hence,
$$\mathbb{P}(B_s B_t \geq 0)= \frac{1}{2} + \frac{1}{\pi} \arctan \sqrt{\frac{s}{t-s}}.$$
A short sanity check: letting $s \uparrow t$, the right-hand side tends to $\frac{1}{2} + \frac{1}{\pi} \cdot \frac{\pi}{2} = 1$, matching $\mathbb{P}(B_t^2 \geq 0) = 1$ on the left, as it should.
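The closed formula can also be checked by simulation; a minimal Monte Carlo sketch (assuming numpy; the sample values $s = 0.25$, $t = 1$ and the sample size are arbitrary):

```python
# Monte Carlo check (assumes numpy) of
#   P(B_s B_t >= 0) = 1/2 + arctan(sqrt(s / (t - s))) / pi
# at the arbitrary sample values s = 0.25, t = 1.0.
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000
s, t = 0.25, 1.0

Bs = rng.normal(0.0, np.sqrt(s), size=n)
Bt = Bs + rng.normal(0.0, np.sqrt(t - s), size=n)  # independent increment

empirical = np.mean(Bs * Bt >= 0)
formula = 0.5 + np.arctan(np.sqrt(s / (t - s))) / np.pi

assert abs(empirical - formula) < 0.005
print(empirical, formula)
```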
Recall that $X \in \sigma(Y_1, \ldots, Y_k)$ iff there is a measurable function $g : \mathbb{R}^k \to \mathbb{R}$ such that $X = g(Y_1, \ldots, Y_k)$. (This is the Doob-Dynkin lemma.)
First, note that $\sigma(B_{t_0} - B_{t_1}, B_{t_0} - B_{t_2}, B_{t_1} - B_{t_2}) = \sigma(B_{t_0} - B_{t_1}, B_{t_1} - B_{t_2}) =: F$, since $B_{t_0} - B_{t_2} = (B_{t_0} - B_{t_1}) + (B_{t_1} - B_{t_2})$.
Next, note that $B_{t_0} \not\in F$: $B_{t_0}$ is independent of $F$ by the independence of increments, and as long as $B_{t_0}$ is not deterministic, it cannot be independent of itself. (The exception is the case $t_0 = 0$, where $B_{t_0} = 0$.)
Thus, if $X = B_{t_0} - a B_{t_1} - b B_{t_2}$ were in $F$ for all $a,b$, then, letting $a, b \to 0$ and using that pointwise limits of measurable functions are measurable, $B_{t_0}$ would be in $F$ as well, a contradiction.
On the other hand, if you add $B_{t_0}$ to your sigma-algebra, then it becomes $\sigma(B_{t_0}, B_{t_1}, B_{t_2})$, with respect to which you can express $X$.
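The independence of increments used above can be illustrated numerically (a sketch assuming numpy; the times $t_0 < t_1 < t_2$ below are arbitrary sample values). Since all the variables are jointly Gaussian, vanishing correlation between $B_{t_0}$ and the generators of $F$ is equivalent to the independence claimed.

```python
# Numerical illustration (assumes numpy) that B_{t_0} is independent of the
# generators of F = sigma(B_{t_0} - B_{t_1}, B_{t_1} - B_{t_2}).  For jointly
# Gaussian vectors, zero correlation is equivalent to independence.
import numpy as np

rng = np.random.default_rng(2)
n = 500_000
t0, t1, t2 = 0.2, 0.5, 0.9  # arbitrary sample times with t0 < t1 < t2

B0 = rng.normal(0.0, np.sqrt(t0), size=n)
B1 = B0 + rng.normal(0.0, np.sqrt(t1 - t0), size=n)
B2 = B1 + rng.normal(0.0, np.sqrt(t2 - t1), size=n)

# Empirical correlations of B_{t_0} with the two generators of F.
corr1 = np.corrcoef(B0, B0 - B1)[0, 1]
corr2 = np.corrcoef(B0, B1 - B2)[0, 1]

assert abs(corr1) < 0.01 and abs(corr2) < 0.01
print(corr1, corr2)
```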