Hints
- Choose $t=s=0$ in $\mathbb{E}(W_s \cdot W_t) = \min\{s,t\}$. (Note that $N(0,0)$ is, by definition, the Dirac measure $\delta_0$.)
- Let $s<t$. By assumption, the random vector $X:=(W_s,W_t)$ is Gaussian with mean $0$ and covariance matrix $$C:=\begin{pmatrix} s & s \\ s & t \end{pmatrix}$$ Since $X$ is Gaussian, we know that $\ell^T \cdot X = \ell_1 \cdot W_s+\ell_2 \cdot W_t$ is Gaussian for every $\ell \in \mathbb{R}^2$. There are known formulas for the mean and variance of $\ell^T \cdot X$. Find a suitable $\ell$.
- Let $0:=t_0 < t_1 < \ldots < t_n$. Note that $$\Delta := \begin{pmatrix} W_{t_1}-W_{t_0} \\ \vdots \\ W_{t_n}-W_{t_{n-1}} \end{pmatrix} = \underbrace{\begin{pmatrix} -1 & 1 & 0 & 0 &\ldots & 0 \\ 0 &-1 & 1 & 0 & \ldots & 0 \\ \vdots & & \ddots & \ddots & & \\ 0 & 0 & 0 & \ldots & -1 & 1 \end{pmatrix}}_{=:M} \cdot \begin{pmatrix} W_{t_0} \\ \vdots \\ W_{t_n} \end{pmatrix}$$
Since $(W_{t_0},\ldots,W_{t_n})$ is Gaussian by assumption, we conclude that $\Delta$ is Gaussian. Since the components of a Gaussian vector are independent if and only if its covariance matrix is diagonal, it suffices to show that the covariance matrix of $\Delta$ is diagonal. Again, there is a known formula for the covariance matrix of a linear transformation of a Gaussian random vector.
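Both hints can be sanity-checked numerically. A minimal sketch (the times $s<t$ and the grid $t_0<\ldots<t_n$ below are illustrative choices, not part of the problem): with $\ell=(-1,1)$ the variance of $\ell^T X$ is $t-s$, and the covariance matrix $M\Sigma M^T$ of the increments is diagonal.

```python
import numpy as np

# With ell = (-1, 1), the Gaussian variable ell^T X = W_t - W_s
# has mean 0 and variance ell^T C ell = t - s.
s, t = 0.7, 2.3                                   # illustrative times, s < t
C = np.array([[s, s],
              [s, t]])                            # covariance of (W_s, W_t)
ell = np.array([-1.0, 1.0])
variance = ell @ C @ ell                          # Var(ell^T X) = ell^T C ell
assert np.isclose(variance, t - s)

# For the increments: W = (W_{t_0}, ..., W_{t_n}) has Sigma_{ij} = min(t_i, t_j),
# so Delta = M W has covariance M Sigma M^T, which must be diagonal.
grid = np.array([0.0, 0.5, 1.2, 2.0, 3.7])        # illustrative 0 = t_0 < ... < t_n
n = len(grid) - 1
Sigma = np.minimum.outer(grid, grid)              # Cov(W_{t_i}, W_{t_j}) = min(t_i, t_j)
M = np.zeros((n, n + 1))
for i in range(n):
    M[i, i], M[i, i + 1] = -1.0, 1.0              # row i extracts W_{t_{i+1}} - W_{t_i}
cov_delta = M @ Sigma @ M.T                       # covariance of a linear image of a Gaussian
assert np.allclose(cov_delta, np.diag(np.diff(grid)))  # diagonal => independent increments
print(np.diag(cov_delta))                         # increment variances t_{i+1} - t_i
```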
Remark: In general, one requires that the paths $t \mapsto W(t,w)$ are continuous for almost all $w$. The Kolmogorov–Chentsov theorem shows that, under the given assumptions, there always exists a modification $(\tilde{W}_t)_t$ of the process $(W_t)_t$ such that the sample paths $t \mapsto \tilde{W}(t,w)$ are almost surely continuous.
The process $X$ is not Gaussian and its increments are not independent.
Note first that $X$ is a Brownian martingale, hence, by the Dambis–Dubins–Schwarz theorem, a time-changed Brownian motion: $X_t=\beta_{\langle X\rangle_t}$ for some Brownian motion $\beta$. (Beware that $\beta$ is in general not independent of the clock $\langle X\rangle$, so one cannot compute the moments of $X_1$ by conditioning on $\alpha:=\langle X\rangle_1=\int_0^1B_t^4\,\mathrm dt$.) Instead, compute the moments directly: since $X$ is a square-integrable martingale, $E[X_1]=0$ and $E[X_1^2]=E[\langle X\rangle_1]=E[\alpha]=\int_0^13t^2\,\mathrm dt=1$.
Since $E[Z^4]=3E[Z^2]^2$ for every centered normal random variable $Z$, if $X_1$ were normal one would have $E[X_1^4]=3E[X_1^2]^2=3$. But applying Itô's formula successively to $X_tB_t$, $X_tB_t^3$, $X_t^2B_t^2$, $X_tB_t^5$ and $X_t^2B_t^4$ gives $E[X_tB_t]=\tfrac{t^2}2$, $E[X_tB_t^3]=\tfrac{7t^3}2$, $E[X_t^2B_t^2]=\tfrac{15t^4}2$, $E[X_tB_t^5]=\tfrac{55t^4}2$ and $E[X_t^2B_t^4]=74t^5$; applying it once more to $X_t^4$ yields $$E[X_1^4]=6\int_0^1E[X_t^2B_t^4]\,\mathrm dt=74\ne3,$$ hence $X_1$ is not normal. (Equivalently, $X_1=\tfrac13B_1^3-\int_0^1B_t\,\mathrm dt$ by Itô's formula, and computing $E[X_1^4]$ from this jointly Gaussian representation gives the same value.)
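The non-normality of $X_1$ is also visible numerically. A Monte Carlo sketch, assuming $X_t=\int_0^tB_u^2\,\mathrm dB_u$ (consistent with $\langle X\rangle_1=\int_0^1B_t^4\,\mathrm dt$ used in the text); a centered normal variable has kurtosis $E[Z^4]/E[Z^2]^2=3$, while the empirical kurtosis of $X_1$ comes out far larger.

```python
import numpy as np

# Monte Carlo for X_1 = int_0^1 B_u^2 dB_u via left-endpoint Ito sums.
rng = np.random.default_rng(0)
n_steps, n_paths, chunk = 500, 50_000, 10_000
dt = 1.0 / n_steps

X1 = np.empty(n_paths)
for start in range(0, n_paths, chunk):          # chunked to limit memory
    dB = rng.normal(0.0, np.sqrt(dt), size=(chunk, n_steps))
    B_left = np.cumsum(dB, axis=1) - dB         # B at the left endpoint of each step
    X1[start:start + chunk] = np.sum(B_left**2 * dB, axis=1)  # Ito sum

m2, m4 = np.mean(X1**2), np.mean(X1**4)
print(m2, m4 / m2**2)        # m2 close to E[alpha] = 1; kurtosis far above 3
assert abs(m2 - 1.0) < 0.25
assert m4 / m2**2 > 4.0      # rules out normality of X_1
```

The fourth-moment estimator is noisy (the tails of $X_1$ are heavy), so the assertion only checks that the kurtosis clearly exceeds the normal value $3$.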
To study the independence of the increments of $X$, fix some $s\geqslant0$ and consider the sigma-algebras $\mathcal F^X_s=\sigma(X_u;u\leqslant s)$ and $\mathcal F^B_s=\sigma(B_u;u\leqslant s)$, and the Brownian motion $C$ defined by $C_u=B_{s+u}-B_s$ for every $u\geqslant0$. Then $C$ is independent of $\mathcal F^B_s$. Furthermore, for every $t\geqslant0$,
$$
X_{t+s}=X_s+\int_0^t(B_s+C_u)^2\,\mathrm dC_u=X_s+B_s^2C_t+2B_s\int_0^tC_u\,\mathrm dC_u+\int_0^tC_u^2\,\mathrm dC_u.
$$
Rewrite this as
$$
X_{t+s}-X_s=B_s^2C_t+B_sD_t+G_t,
$$
where $D_t$ and $G_t$ are functionals of $C$ hence independent of $\mathcal F^B_s$. Thus,
$$
E[(X_{t+s}-X_s)^2\mid\mathcal F^B_s]=B_s^4E[C_t^2]+B_s^2E[D_t^2]+E[G_t^2]+2B_s^3E[C_tD_t]+2B_s^2E[C_tG_t]+2B_sE[D_tG_t].
$$
One can check, using $D_t=2\int_0^tC_u\,\mathrm dC_u=C_t^2-t$, $G_t=\int_0^tC_u^2\,\mathrm dC_u$ and the Itô isometry, that $E[C_tD_t]=E[D_tG_t]=0$, $E[C_t^2]=t$, $E[D_t^2]=2t^2$, $E[G_t^2]=t^3$ and $E[C_tG_t]=\frac12t^2$ hence
$$
E[(X_{t+s}-X_s)^2\mid\mathcal F^B_s]=tB_s^4+3t^2B_s^2+t^3.
$$
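These moment identities and the resulting conditional second moment can be verified by simulation. A sketch, assuming (as in the decomposition $X_{t+s}-X_s=B_s^2C_t+B_sD_t+G_t$) that $D_t=2\int_0^tC_u\,\mathrm dC_u$ and $G_t=\int_0^tC_u^2\,\mathrm dC_u$ for a Brownian motion $C$; the value $b$ of $B_s$ below is an illustrative choice.

```python
import numpy as np

# Monte Carlo check of E[C_t^2] = t, E[D_t^2] = 2t^2, E[G_t^2] = t^3,
# E[C_t G_t] = t^2/2, and of E[(b^2 C_t + b D_t + G_t)^2] = t b^4 + 3t^2 b^2 + t^3.
rng = np.random.default_rng(1)
t, n_steps, n_paths = 1.0, 200, 30_000
dt = t / n_steps

dC = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
C_left = np.cumsum(dC, axis=1) - dC          # C at the left endpoint of each step
C = np.sum(dC, axis=1)                       # C_t
D = 2.0 * np.sum(C_left * dC, axis=1)        # Ito sum for 2 * int C dC
G = np.sum(C_left**2 * dC, axis=1)           # Ito sum for int C^2 dC

assert abs(np.mean(C**2) - t) < 0.05         # E[C_t^2] = t
assert abs(np.mean(D**2) - 2 * t**2) < 0.2   # E[D_t^2] = 2t^2
assert abs(np.mean(G**2) - t**3) < 0.2       # E[G_t^2] = t^3
assert abs(np.mean(C * G) - t**2 / 2) < 0.1  # E[C_t G_t] = t^2/2

b = 1.5                                      # an illustrative fixed value of B_s
lhs = np.mean((b**2 * C + b * D + G) ** 2)
rhs = t * b**4 + 3 * t**2 * b**2 + t**3      # the conditional second moment above
assert abs(lhs - rhs) / rhs < 0.1
print(lhs, rhs)
```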
Note that $\mathrm d\langle X\rangle_s=B_s^4\mathrm ds$ and that $\langle X\rangle$ is $\mathcal F^X$-adapted hence $B_s^4$ and every function of $B_s^4$, for example $B_s^2$, are measurable with respect to $\mathcal F^X_s$. This yields
$$
E[(X_{t+s}-X_s)^2\mid\mathcal F^X_s]=tB_s^4+3t^2B_s^2+t^3.
$$
The RHS is not almost surely constant hence $(X_{t+s}-X_s)^2$ is not independent of $\mathcal F^X_s$, in particular the increments of $X$ are not independent.
Edit: One may feel that the computation of the conditional expectation of $(X_{t+s}-X_s)^2$ above is rather cumbersome (it is) and try to replace it by the (definitely simpler) computation of the conditional expectation of $X_{t+s}-X_s$. Unfortunately,
$$
E[X_{t+s}-X_s\mid\mathcal F^X_s]=0,
$$
hence this computation is not sufficient to decide whether the distribution of $X_{t+s}-X_s$ conditionally on $\mathcal F^X_s$ is constant or not (which is the reformulation of the independence of a random variable and a sigma-algebra that this solution relies on). Another way of looking at the situation is that, fortunately, already the conditional second moments are not constant.
Best Answer
Let $u_1<u_2<\cdots<u_n$. We have to show that the increments $c(W_{u_{i+1}c^{-2}}-W_{u_ic^{-2}})$, $1\leqslant i\leqslant n-1$, are independent. This is immediate since $u_1c^{-2}<u_2c^{-2}<\cdots<u_nc^{-2}$ and Brownian motion has independent increments.
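Since a centered Gaussian process is determined by its covariance, the scaling property behind this answer can also be checked at the level of covariances. A minimal numerical sketch (the value of $c$ and the times $u_i$ below are illustrative): $\operatorname{Cov}(cW_{u_ic^{-2}},cW_{u_jc^{-2}})=c^2\min(u_ic^{-2},u_jc^{-2})=\min(u_i,u_j)$.

```python
import numpy as np

# Covariance check for the scaled process c * W_{u/c^2}:
# c^2 * min(u_i/c^2, u_j/c^2) = min(u_i, u_j), the Brownian covariance.
c = 2.5
u = np.array([0.3, 1.1, 1.8, 4.0])            # illustrative 0 < u_1 < ... < u_n
cov_scaled = c**2 * np.minimum.outer(u / c**2, u / c**2)
assert np.allclose(cov_scaled, np.minimum.outer(u, u))
print(cov_scaled)
```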