Using your definition of the Wiener process:
b) The independence of the increments implies that $\operatorname{cov}(W_t,W_s)=\sigma s$ if $s<t$ (writing $\sigma$ for the variance parameter, i.e. $\operatorname{Var}(W_t)=\sigma t$): indeed, $\operatorname{cov}(W_t,W_s)=\operatorname{cov}(W_t-W_s,W_s)+\operatorname{Var}(W_s)=0+\sigma s$.
However, $\operatorname{cov}(\sqrt{t}\,W_1,\sqrt{s}\,W_1)=\sqrt{ts}\,\sigma \neq \sigma s$ for $0<s<t$, so $(\sqrt{t}\,W_1)_{t\geq 0}$ does not have the covariance structure of a Wiener process.
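A quick Monte Carlo sanity check of these two covariances (a minimal sketch, assuming $\sigma=1$ and the arbitrarily chosen times $s=0.5<t=2$; `numpy` only):

```python
import numpy as np

rng = np.random.default_rng(0)
n, s, t = 200_000, 0.5, 2.0  # sample size and two fixed times s < t

# Wiener process with sigma = 1 at times s and t, built from independent increments
W_s = rng.normal(0.0, np.sqrt(s), n)
W_t = W_s + rng.normal(0.0, np.sqrt(t - s), n)
print(np.mean(W_s * W_t))  # ~ s = 0.5, i.e. cov(W_t, W_s) = min(s, t)

# The rescaled process sqrt(t) * W_1 has covariance sqrt(ts) instead
W_1 = rng.normal(0.0, 1.0, n)
print(np.mean((np.sqrt(t) * W_1) * (np.sqrt(s) * W_1)))  # ~ sqrt(ts) = 1.0
```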
c) The same idea works: let $X_t=W(2t)-W(t)$. If $(X_t)_{t\geq 0}$ were a Wiener process, its increments would be independent, so in particular $\operatorname{cov}(X_{2t}-X_t,X_t)=0$.
But $X_{2t}-X_t=W_{4t}-2W_{2t}+W_t$, and expanding with $\operatorname{cov}(W_a,W_b)=\sigma\min\{a,b\}$ gives $$\operatorname{cov}(X_{2t}-X_t,X_t)=\sigma\,(2t-t-4t+2t+t-t)=-t\sigma \neq 0.$$
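The sign and size of this covariance are easy to confirm by simulation (same assumptions as the sketch above, with $t=1$):

```python
import numpy as np

rng = np.random.default_rng(1)
n, t = 200_000, 1.0

# Wiener process with sigma = 1 at times t, 2t, 4t, built from independent increments
W_t  = rng.normal(0.0, np.sqrt(t), n)
W_2t = W_t  + rng.normal(0.0, np.sqrt(t), n)      # increment over [t, 2t]
W_4t = W_2t + rng.normal(0.0, np.sqrt(2 * t), n)  # increment over [2t, 4t]

X_t, X_2t = W_2t - W_t, W_4t - W_2t
print(np.mean((X_2t - X_t) * X_t))  # ~ -t = -1.0, not 0
```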
Fix $0=t_0 < t_1 < \ldots < t_n$.
Lemma: $(2)$ is equivalent to $$\mu((B_{t_1},\ldots,B_{t_n}) \in U) = \int_U p(x) \, dx$$ for $$p(x) := \frac{1}{(2\pi)^{n/2} \sqrt{\det C}} \exp \left(- \frac{1}{2} \langle x, C^{-1} x \rangle \right),$$ where $C \in \mathbb{R}^{n \times n}$ is defined by $c_{ij} := \min\{t_i,t_j\}$, $i,j=1,\ldots,n$ and $\langle x,y \rangle = \sum_{i=1}^n x_iy_i$ is the scalar product in $\mathbb{R}^n$.
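As a sanity check, the lemma's density $p$ can be compared against a library Gaussian density; a sketch assuming SciPy is available, with arbitrarily chosen times and evaluation point:

```python
import numpy as np
from scipy.stats import multivariate_normal

t = np.array([0.5, 1.0, 2.0])   # 0 < t_1 < t_2 < t_3
C = np.minimum.outer(t, t)      # C_ij = min(t_i, t_j)
x = np.array([0.3, -0.1, 0.7])  # arbitrary evaluation point

# Density p(x) from the lemma, written out explicitly
p = np.exp(-0.5 * x @ np.linalg.solve(C, x)) / np.sqrt((2 * np.pi) ** len(t) * np.linalg.det(C))
print(p, multivariate_normal(mean=np.zeros(3), cov=C).pdf(x))  # the two values agree
```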
Note that the result tells us that $(B_{t_1},\ldots,B_{t_n})$ is Gaussian with mean vector $m=(0,\ldots,0) \in \mathbb{R}^n$ and covariance matrix $C=(\min\{t_i,t_j\})_{i,j}$. This is not at all surprising: If $(B_t)_{t \geq 0}$ is indeed a Brownian motion, then this is exactly what the finite-dimensional distributions should look like.
Proof of the lemma: Denote by $M \in \mathbb{R}^{n \times n}$ the lower triangular matrix with entries $1$ on and below the diagonal. Denote by $D \in \mathbb{R}^{n \times n}$ the diagonal matrix with entries $d_i = t_i-t_{i-1}$ on the diagonal. Since $M^{-1}$ is a two-band matrix with $+1$ on the diagonal and $-1$ on the first sub-diagonal (below the diagonal), we can write
\begin{align*} \sum_{j=1}^n \frac{(x_j-x_{j-1})^2}{t_j-t_{j-1}}= \langle M^{-1} x, D^{-1} M^{-1} x \rangle &= \langle x, (M^{-1})^T \cdot (D^{-1} M^{-1} x) \rangle \\ &= \langle x, C^{-1} x \rangle \end{align*} for $C:=M D M^T$ and with the convention $x_0 := 0$. (Note that $(M^{-1})^T = (M^T)^{-1}$.) Performing the matrix multiplication of the above-defined matrices, we see that $C=(\min\{t_i,t_j\})_{i,j}$. As $\det(M)=1$, it also follows that $$\det(C) = \det(D) = \prod_{j=1}^n (t_j-t_{j-1}).$$ Plugging this into $(2)$ proves the lemma.
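Both identities ($MDM^T = (\min\{t_i,t_j\})_{i,j}$ and $\det C = \prod_j (t_j-t_{j-1})$) are easy to confirm numerically; a sketch with arbitrarily chosen times:

```python
import numpy as np

t = np.array([0.5, 1.0, 2.0, 3.5])             # 0 < t_1 < ... < t_4
M = np.tril(np.ones((len(t), len(t))))         # 1 on and below the diagonal
D = np.diag(np.diff(t, prepend=0.0))           # d_i = t_i - t_{i-1}, with t_0 = 0

C = M @ D @ M.T
print(np.allclose(C, np.minimum.outer(t, t)))  # True: C_ij = min(t_i, t_j)
print(np.isclose(np.linalg.det(C), np.prod(np.diff(t, prepend=0.0))))  # True
```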
Note that, by the definition of $M$, we can write $$\Gamma:=\begin{pmatrix} B_{t_1} \\ \vdots \\ B_{t_n} \end{pmatrix} = M \cdot \Delta$$ where $\Delta := (B_{t_1}-B_{t_0},\ldots,B_{t_n}-B_{t_{n-1}})$. Equivalently, $$\Delta = M^{-1} \Gamma.$$ Since we know from our lemma that $\Gamma=(B_{t_1},\ldots,B_{t_n})$ is Gaussian, it follows that $\Delta$ is Gaussian, being the image of a Gaussian random vector under a linear map; more precisely,
\begin{align*} \mathbb{E}\exp(i \langle \xi, \Delta \rangle) = \mathbb{E}\exp(i \langle \xi, M^{-1} \Gamma \rangle) &= \mathbb{E}\exp(i \langle (M^{-1})^T \xi, \Gamma \rangle) \\ &= \exp(- \frac{1}{2} \langle (M^{-1})^T \xi, C (M^{-1})^T \xi \rangle) \\ &= \exp (-\frac{1}{2} \langle \xi, M^{-1} C (M^{-1})^T \xi \rangle ) \\ &=\exp(- \frac{1}{2} \langle \xi, D \xi \rangle), \end{align*} where we used in the last step that $C = MDM^T$ (see the proof of the lemma). This shows that the random vector $\Delta=(B_{t_1}-B_{t_0},\ldots,B_{t_n}-B_{t_{n-1}})$ is Gaussian with mean vector $0$ and covariance matrix $D$. Since $D$ is a diagonal matrix, this means, in particular, that $B_{t_1}-B_{t_0},\ldots,B_{t_n}-B_{t_{n-1}}$ are independent.
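In covariance terms the last display says $\operatorname{Cov}(\Delta) = M^{-1} C (M^{-1})^T = D$; this, too, can be checked numerically (same arbitrarily chosen times as above):

```python
import numpy as np

t = np.array([0.5, 1.0, 2.0, 3.5])
M = np.tril(np.ones((len(t), len(t))))
M_inv = np.linalg.inv(M)         # +1 on the diagonal, -1 on the first subdiagonal
C = np.minimum.outer(t, t)       # covariance of Gamma = (B_{t_1}, ..., B_{t_n})

cov_Delta = M_inv @ C @ M_inv.T  # covariance of Delta = M^{-1} Gamma
print(np.allclose(cov_Delta, np.diag(np.diff(t, prepend=0.0))))  # True: equals D
```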
Best Answer
I think what's intended is that $\mathcal G$ be the collection of all sets that satisfy the second part (i.e. the collection of all sets $A$ such that there is a sequence of times $t_1,t_2,\ldots$ and a $B\in \mathcal R^{1,2,\ldots}$ so that $A = \{w:(w(t_1),w(t_2),\ldots)\in B\}$). In other words, the problem is to show that $F_o=\mathcal G$, and the "only if" part of the problem is to show $F_o\subseteq \mathcal G$. This can be accomplished by showing that $\mathcal G$ is a $\sigma$-field containing all the finite-dimensional sets, since $F_o$ is the $\sigma$-field generated by those sets. $\mathcal G$ contains all the finite-dimensional sets, since $$A_1\times \ldots \times A_n \times \mathbb R\times\mathbb R\times\ldots\in \mathcal R^{1,2,\ldots},$$ so what is left is to show that $\mathcal G$ is a $\sigma$-field.
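For that last step, a sketch of the two closure properties: if $A = \{w:(w(t_1),w(t_2),\ldots)\in B\}$ with $B \in \mathcal R^{1,2,\ldots}$, then $$A^c = \{w:(w(t_1),w(t_2),\ldots)\in B^c\}$$ and $B^c \in \mathcal R^{1,2,\ldots}$, so $\mathcal G$ is closed under complementation. For a countable union $\bigcup_k A_k$ with $A_k = \{w:(w(t^k_1),w(t^k_2),\ldots)\in B_k\}$, enumerate the countable set $\{t^k_i : i,k \geq 1\}$ as a single sequence $s_1,s_2,\ldots$; each $A_k$ can be rewritten as $\{w:(w(s_1),w(s_2),\ldots)\in \tilde B_k\}$, where $\tilde B_k \in \mathcal R^{1,2,\ldots}$ is the preimage of $B_k$ under the (measurable) coordinate projection picking out the positions of $t^k_1,t^k_2,\ldots$ within $(s_j)_j$, and then $$\bigcup_k A_k = \Big\{w:(w(s_1),w(s_2),\ldots)\in \bigcup_k \tilde B_k\Big\} \in \mathcal G.$$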