Basically, the construction is divided into three steps:
Step 1: Constructing a consistent set of finite dimensional distributions.
Consider any starting point $x\in\mathbb{R}$ and times $0<t_1<t_2<\cdots<t_n<T$. Define a measure on the finite-dimensional space $\mathbb{R}^n$ by
$$
\nu_{t_1,\cdots,t_n}(F_1,\cdots,F_n)
\triangleq\int_{F_1}dx_1\cdots\int_{F_n}dx_n
\prod_{i=1}^np_{t_i-t_{i-1}}(x_{i-1},x_i)\tag{1}
$$
where each $F_i$ is a measurable set in $\mathbb{R}$, $x_0=x$ and the transition probability $p$ is Gaussian, i.e.
$$
p_t(x,y)\triangleq (2\pi t)^{-1/2}e^{-(y-x)^2/2t}
$$
Obviously, $p$ is a valid transition probability, and it is not hard to verify that, for any starting point $x$, this construction satisfies the two consistency conditions of the Kolmogorov extension theorem. Thus, we have constructed a consistent family of finite-dimensional distributions.
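The key consistency condition behind (1) is the Chapman–Kolmogorov identity $\int_{\mathbb{R}} p_s(x,z)\,p_t(z,y)\,dz = p_{s+t}(x,y)$, which follows from the convolution of Gaussians. Here is a minimal numerical sketch in Python (function names and quadrature parameters are my own choices) confirming it:

```python
import math

def p(t, x, y):
    """Gaussian transition density p_t(x, y) = (2*pi*t)^(-1/2) exp(-(y - x)^2 / (2t))."""
    return math.exp(-(y - x) ** 2 / (2 * t)) / math.sqrt(2 * math.pi * t)

def compose(s, t, x, y, lo=-20.0, hi=20.0, n=4000):
    """Trapezoid-rule approximation of int p_s(x, z) p_t(z, y) dz."""
    h = (hi - lo) / n
    total = 0.5 * (p(s, x, lo) * p(t, lo, y) + p(s, x, hi) * p(t, hi, y))
    for i in range(1, n):
        z = lo + i * h
        total += p(s, x, z) * p(t, z, y)
    return h * total

# Chapman-Kolmogorov: composing the kernel over s and then t equals one step over s + t.
print(compose(0.3, 0.7, 0.5, 1.2), p(1.0, 0.5, 1.2))  # the two numbers agree
```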
Step 2: Applying the Kolmogorov extension theorem to construct a probability measure on the space of functions defined on the rationals.
Consider the space
$$\Omega_q=\{\textrm{functions}~~\omega:~\mathbb{Q}\rightarrow\mathbb{R}\},$$
where $\mathbb{Q}$ is the set of rationals. Let $\mathcal{F}_q$ be the $\sigma$-algebra generated by all the finite-dimensional measurable sets. Then the Kolmogorov extension theorem tells us that there exists a probability measure $\nu_x$ on $(\Omega_q,~\mathcal{F}_q)$ such that
$$
\nu_x\{\omega:~\omega(0)=x\}=1,
$$
and
$$
\nu_x\{\omega:~\omega(t_i)\in F_i,~i=1,2,\cdots,n\}=\nu_{t_1,\cdots,t_n}(F_1,\cdots,F_n).
$$
Furthermore, we have the following theorem due to Kolmogorov again:
Theorem: The probability measure $\nu_x$ assigns probability 1 to sample paths $\omega:~\mathbb{Q}\rightarrow\mathbb{R}$ that are uniformly continuous on $\mathbb{Q}\cap [0,T)$.
The proof of the above theorem is rather tedious, so I skip it here for brevity; see chapter 8.1 of Durrett's book Probability: Theory and Examples for the details. By the theorem, we get continuous sample paths with probability one on the space $(\Omega_q,~\mathcal{F}_q)$, and the remaining three properties are easily verified.
Step 3: Translating probability measure to space of continuous sample paths.
Let $C$ be the space of continuous mappings from $[0,T)$ to $\mathbb{R}$ and $\mathcal{C}$ the $\sigma$-algebra generated by the coordinate maps $\omega\mapsto\omega(t)$. Here we apply the following fact: since $\mathbb{Q}\cap [0,T)$ is a dense subset of $[0,T)$, every uniformly continuous mapping $\omega$ from $\mathbb{Q}\cap [0,T)$ to $\mathbb{R}$ has a unique uniformly continuous extension to $[0,T)$. Denote by $\phi$ the map sending $\omega$ to this extension. Furthermore, the mapping $\phi$ is measurable. (I know there are a few jumps here, but you get the big picture:)). Finally, let
$$P_x\triangleq \nu_x\circ\phi^{-1}$$
be the probability measure on $(C,\mathcal{C})$, and the construction is complete. All four properties check out.
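To see the finite-dimensional distributions in action, here is a small Python sketch (function names and sample sizes are my own choices) that samples from $\nu_{t_1,\cdots,t_n}$ via independent Gaussian increments, as prescribed by (1), and empirically checks that $\omega(t)-x$ has mean $0$ and variance $t$:

```python
import math, random

random.seed(42)

def sample_path(x, times):
    """Draw (omega(t_1), ..., omega(t_n)) from nu_{t_1,...,t_n}:
    by (1), successive increments are independent N(0, t_i - t_{i-1})."""
    path, prev_t, value = [], 0.0, x
    for t in times:
        value += random.gauss(0.0, math.sqrt(t - prev_t))
        prev_t = t
        path.append(value)
    return path

# Under nu_x, omega(t) - x should have mean 0 and variance t; check at t = 1.
samples = [sample_path(0.0, [0.25, 0.5, 1.0])[-1] for _ in range(20000)]
mean = sum(samples) / len(samples)
var = sum(w * w for w in samples) / len(samples)
print(mean, var)  # roughly 0 and 1
```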
Remark: Intuitively, it is not clear why we have to go through Step 2, when one could seemingly construct a probability measure directly on the space
$$\{\textrm{functions}~~\omega:~[0,T)\rightarrow\mathbb{R}\}$$
from the finite-dimensional distributions by the Kolmogorov extension theorem. It turns out that if we do this, only properties 1, 3, and 4 of Brownian motion check out, and almost sure continuity can never be verified: the set of continuous functions is not measurable in the resulting product $\sigma$-algebra. See chapter 8.1 of Durrett's book Probability: Theory and Examples for a more detailed discussion:)
Recall the following characterization of (one-dimensional) Brownian motion:
A stochastic process $(W_t)_{t \geq 0}$ is a Brownian motion, if and only if,
- $(W_t)_t$ has continuous sample paths.
- $(W_t)_t$ is a Gaussian process with mean $0$ and covariance $\mathbb{E}(W_s W_t) = \min\{s,t\}$ for all $s,t \geq 0$.
Since $(W_t)_t$ obviously has continuous sample paths, we only have to check the second property.
Since $(B_t)_{t \geq 0}$ is a Brownian motion, it is in particular a Gaussian process and so
$$B_t - \sum_{j=0}^{n-1} (B_1-B(t_j)) \frac{1}{1-t_j} (t_{j+1}-t_j)$$
is Gaussian for each $n \in \mathbb{N}$ where $t_j := \frac{t}{n} j$. If we let $n \to \infty$, then we get
$$W_t = \lim_{n \to \infty} \left( B_t - \sum_{j=0}^{n-1} (B_1-B(t_j)) \frac{1}{1-t_j} (t_{j+1}-t_j) \right)$$
is Gaussian, being an almost sure (hence distributional) limit of Gaussian random variables. Since this argument applies in exactly the same way to the joint distributions $(W_{s_1},\ldots,W_{s_m})$, $s_j \geq 0$, we get that $(W_t)_{t \geq 0}$ is a Gaussian process. It remains to check the mean and covariance.
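Before doing the computation, here is a quick Monte Carlo sketch in Python (grid size, sample count, and the choice $t = 1/2$ are my own; it assumes $t < 1$ so the path can be extended to time $1$) approximating $W_t$ by the Riemann sum above and checking that its mean and variance look like $0$ and $t$:

```python
import math, random

random.seed(0)

def simulate_W(t, n=400):
    """One sample of W_t approximated by
    B_t - sum_j (B_1 - B(t_j)) / (1 - t_j) * (t_{j+1} - t_j),  t_j = j*t/n,
    simulating a Brownian path on the grid and extending it to time 1."""
    dt = t / n
    B = [0.0]
    for _ in range(n):
        B.append(B[-1] + random.gauss(0.0, math.sqrt(dt)))  # B at the grid points of [0, t]
    B1 = B[-1] + random.gauss(0.0, math.sqrt(1.0 - t))      # independent increment up to time 1
    riemann = sum((B1 - B[j]) / (1.0 - j * dt) * dt for j in range(n))
    return B[-1] - riemann

t = 0.5
samples = [simulate_W(t) for _ in range(5000)]
mean = sum(samples) / len(samples)
var = sum(w * w for w in samples) / len(samples)
print(mean, var)  # empirically close to 0 and t = 0.5
```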
By Fubini's theorem, we have
$$\begin{align*} \mathbb{E}(W_t) &= \underbrace{\mathbb{E}(B_t)}_{0} - \mathbb{E} \left( \int_0^t\frac{B_1-B_s}{1-s} \, ds \right) = - \int_0^t \underbrace{\mathbb{E}(B_1-B_s)}_{0} \frac{1}{1-s} \, ds = 0. \end{align*}$$
Now fix $r \leq t$.
$$\begin{align*} \mathbb{E}(W_r W_t) &= \mathbb{E}(B_t B_r)- \mathbb{E} \left( B_t \int_0^r \frac{B_1-B_s}{1-s} \, ds \right) - \mathbb{E} \left( B_r \int_0^t \frac{B_1-B_s}{1-s} \, ds \right) \\ &\quad + \mathbb{E} \left( \int_0^t \int_0^r \frac{B_1-B_u}{1-u} \frac{B_1-B_v}{1-v} \, du \, dv \right) \\ &=: \mathbb{E}(B_r B_t) +I_2+I_3+I_4 \end{align*}$$
If we can show that $$I_2+I_3+I_4 = 0$$ we are done. Using $\mathbb{E}(B_u B_v) = \min\{u,v\}$ for any $u,v \in [0,1]$ and Fubini's theorem, we find
$$ \begin{align*} I_2 &= -\int_0^r \frac{\mathbb{E}(B_1 B_t-B_tB_s)}{1-s} \, ds = -\int_0^r \frac{t-s}{1-s} \, ds \\ &= \log (1-r)\, t - r - \log(1-r) \end{align*}$$
as $r \leq t$. Similarly,
$$\begin{align*} I_3 &= -\int_0^t \frac{r- \min\{r,s\}}{1-s} \, ds = -\int_0^r \frac{r-s}{1-s} \, ds - \int_r^t \underbrace{\frac{r-r}{1-s}}_{0} \, ds = -\int_0^r \frac{r-s}{1-s} \, ds \\ &= -(1-\log(1-r))\, r - \log(1-r) \end{align*}$$
and, finally,
$$\begin{align*} I_4 &= \int_0^t \int_0^r \frac{1-v-u+ \min\{u,v\}}{(1-u)(1-v)} \, du \, dv \\ &= \int_r^t \int_0^r \frac{1-v-u+ u}{(1-u)(1-v)} \, du \, dv + \int_0^r \int_0^r \frac{1-v-u+ \min\{u,v\}}{(1-u)(1-v)} \, du \, dv \\ &= (t-r) \int_0^r \frac{1}{1-u} \, du + 2 \int_0^r \int_v^r \frac{1}{1-v} \, du \, dv\\ &= -(t-r) \log(1-r) + 2 ((1-\log(1-r))r + \log(1-r)) \end{align*}$$
where we have used in the penultimate equation that
$$\begin{align*} \int_0^r \int_0^r \frac{1-v-u+ \min\{u,v\}}{(1-u)(1-v)} \, du \, dv &= \int_0^r \int_0^v \frac{1}{1-u} \, du \, dv + \int_0^r \int_v^r \frac{1}{1-v} \, du \, dv \\ &= \int_0^r \int_u^r \frac{1}{1-u} \, dv \, du + \int_0^r \int_v^r \frac{1}{1-v} \, du \, dv \\ &= 2 \int_0^r \int_v^r \frac{1}{1-v} \, du \, dv, \end{align*}$$
interchanging the order of integration in the first term and then renaming $u \leftrightarrow v$.
Adding everything up, we get $I_2+I_3+I_4 = 0$, which finishes the proof.
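The bookkeeping above is easy to get wrong, so here is a short Python check (my own formulation; the minus signs of $I_2$ and $I_3$ come from their definitions in the expansion of $\mathbb{E}(W_r W_t)$) that the closed forms indeed sum to zero for several $r \leq t$:

```python
import math

def I2(r, t):
    # I2 = -int_0^r (t - s)/(1 - s) ds
    return math.log(1 - r) * t - r - math.log(1 - r)

def I3(r, t):
    # I3 = -int_0^r (r - s)/(1 - s) ds
    return -((1 - math.log(1 - r)) * r + math.log(1 - r))

def I4(r, t):
    # I4 = -(t - r) log(1 - r) + 2((1 - log(1 - r)) r + log(1 - r))
    return -(t - r) * math.log(1 - r) + 2 * ((1 - math.log(1 - r)) * r + math.log(1 - r))

for r, t in [(0.1, 0.4), (0.3, 0.7), (0.5, 0.9)]:
    print(I2(r, t) + I3(r, t) + I4(r, t))  # each is 0 up to rounding
```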
Best Answer
It turns out that the independence of $\frac{B_{t_1}}{t_1} - \frac{B_{s_1}}{s_1}$ and $\frac{B_{t_2}}{t_2} - \frac{B_{s_2}}{s_2}$ relies crucially on the fact that these two random variables are jointly Gaussian; independence of the increments alone does not imply that these random variables are independent.
For example, consider a rate-1 Poisson process $(N_t)_{t \geq 0}$. This process has independent increments, and for $s < t$, $N_t - N_s$ is distributed as a Poisson random variable with parameter $t-s$. It also has the same covariance function as Brownian motion, namely $\mathrm{Cov}(N_s, N_t) = \min\{s,t\}$, so that $$ \mathrm{Cov} \left( \frac{N_{t_1}}{t_1} - \frac{N_{s_1}}{s_1}, \frac{N_{t_2}}{t_2} - \frac{N_{s_2}}{s_2} \right) = 0 $$ just as above. However, I claim that $\frac{N_{t_1}}{t_1} - \frac{N_{s_1}}{s_1}$ and $\frac{N_{t_2}}{t_2} - \frac{N_{s_2}}{s_2}$ are not in general independent. Consider $N_1 - 2N_{1/2} = U$ and $\frac{1}{2} N_2 - N_1 = V$. Denote the increments $X = N_{1/2} - N_{0}$, $Y = N_{1} - N_{1/2}$, and $Z = N_2 - N_1$. All three are independent Poisson, the first two with parameter $1/2$ and the last with parameter $1$. Write $$ U = Y-X, \qquad V = \frac 1 2 (-X-Y+Z) ,$$ and use the characteristic function of Poisson random variables to find a value $(s,t)$ for which $$ E(e^{isU + itV}) \neq E(e^{isU}) \, E(e^{itV}) . $$ If my calculation is correct, for $(s,t) = (\pi, 2 \pi)$, the above reads $e^{-2} \neq e^{-6}$. I would appreciate if someone could double-check this!
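The characteristic functions here can be evaluated in closed form, using $E(e^{i\theta N}) = e^{\lambda(e^{i\theta}-1)}$ for $N\sim\mathrm{Poisson}(\lambda)$, so the claim is easy to double-check exactly. The following Python sketch (function names are mine; note $N_1 - 2N_{1/2} = (X+Y) - 2X = Y-X$) evaluates both sides at $(s,t)=(\pi,2\pi)$:

```python
import cmath, math

def poisson_cf(lam, theta):
    """Characteristic function E[exp(i*theta*N)] of N ~ Poisson(lam)."""
    return cmath.exp(lam * (cmath.exp(1j * theta) - 1))

s, t = math.pi, 2 * math.pi
# s*U + t*V = (-s - t/2)*X + (s - t/2)*Y + (t/2)*Z, with X, Y ~ Poisson(1/2)
# and Z ~ Poisson(1) independent, so the joint cf factors over X, Y, Z.
joint = poisson_cf(0.5, -s - t / 2) * poisson_cf(0.5, s - t / 2) * poisson_cf(1.0, t / 2)
cf_U = poisson_cf(0.5, -s) * poisson_cf(0.5, s)                                    # U = Y - X
cf_V = poisson_cf(0.5, -t / 2) * poisson_cf(0.5, -t / 2) * poisson_cf(1.0, t / 2)  # V = (Z - X - Y)/2
prod = cf_U * cf_V
print(abs(joint), abs(prod))  # e^{-2} and e^{-6}
```

The joint characteristic function comes out $e^{-2}$ while the product is $e^{-6}$, confirming that $U$ and $V$ are not independent.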