Your first definition is that of a standard one-dimensional Brownian motion.
The second definition is of a non-standard $k$-dimensional Brownian motion.
In particular
$$
Z_t - Z_s \sim N(\mu (t-s), (t-s) \Sigma ).
$$
Therefore, if you set $\mu = 0$ and $\sigma = I_k$, then $\Sigma = I_k$ and
$$
W_t - W_s \sim N(0, (t-s)I_k).
$$
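As a quick sanity check of this distributional claim, here is a small simulation of the increment $Z_t - Z_s$; the drift $\mu$, covariance $\Sigma$, dimension, and times $s,t$ below are illustrative choices, not taken from the question.

```python
import numpy as np

# Illustrative parameters: a 2-dimensional drifted Brownian motion
# with drift mu and covariance matrix Sigma (assumed values).
rng = np.random.default_rng(0)
mu = np.array([0.5, -1.0])
Sigma = np.array([[1.0, 0.3],
                  [0.3, 2.0]])
s, t = 1.0, 1.5          # two fixed times, s < t
dt = t - s

# Sample the increment Z_t - Z_s ~ N(mu*dt, dt*Sigma) many times
# via a Cholesky factor of dt*Sigma.
L = np.linalg.cholesky(dt * Sigma)
samples = mu * dt + rng.standard_normal((200_000, 2)) @ L.T

print(samples.mean(axis=0))   # should be close to mu*dt
print(np.cov(samples.T))      # should be close to dt*Sigma
```

The empirical mean and covariance of the sampled increments match $\mu(t-s)$ and $(t-s)\Sigma$ up to Monte Carlo error.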
Note that in the second definition it is assumed that $W_t$ consists of $k$ independent (one-dimensional) standard Brownian motions. In particular,
$W_2(u)$ is independent of $W_1(t)$ for all $u$ and $t$.
Fix $0 \leq t_0 < t_1<t_2<...<t_n$, and consider the increments $W(t_i) - W(t_{i-1})$ for $i=1,2,...,n$. We need to show that these are independent.
Note that $W(t_i) - W(t_{i-1}) = p(W_1(t_i) - W_1(t_{i-1})) + \sqrt{1-p^2}(W_2(t_i) - W_2(t_{i-1}))$.
We use a theorem called the disjoint blocks theorem for independent random variables.
Let $X_1,...,X_n$ be independent random variables, and let $S_1 \cup S_2 \cup ... \cup S_m = \{1,2,...,n\}$ be a decomposition of $\{1,2,...,n\}$ into disjoint blocks. Let $f_i: \mathbb R^{|S_i|} \to \mathbb R$ be measurable functions. Then the random variables $f_i\big((X_j : j \in S_i)\big)$, $i=1,...,m$, are independent.
For example, if $X_1,X_2,X_3,X_4$ are independent, then the disjoint blocks theorem gives that $e^{X_4}+\cos X_2$ and $X_3X_1^2 + \sin X_3$ are independent (check what the $S_i$ and $f_i$ are in this case).
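A quick empirical illustration of this example (sample size chosen arbitrarily): zero correlation is only a necessary consequence of independence, not a proof of it, but the sample correlation of the two block functions should indeed be near zero.

```python
import numpy as np

# With X1,...,X4 i.i.d. standard normal, U = exp(X4) + cos(X2) uses
# block {2,4} and V = X3*X1**2 + sin(X3) uses block {1,3}, so U and V
# are independent and in particular uncorrelated.
rng = np.random.default_rng(2)
X = rng.standard_normal((4, 1_000_000))   # rows are X1, X2, X3, X4
U = np.exp(X[3]) + np.cos(X[1])
V = X[2] * X[0]**2 + np.sin(X[2])

corr = np.corrcoef(U, V)[0, 1]
print(corr)   # close to 0
```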
Now, we have the random variables $W_j(t_i) - W_j(t_{i-1})$ for $i=1,2,...,n$ and $j=1,2$. By independence of increments for $W_1$ and $W_2$, and by independence of $W_1$ from $W_2$, these random variables are independent.
Now form the blocks $S_i = \{W_1(t_i) - W_1(t_{i-1}), W_2(t_i) - W_{2}(t_{i-1})\}$ for $i=1,...,n$, which are a disjoint decomposition of those random variables. The functions $f_i$ are all the same, namely $f_i(x,y) = px + \sqrt{1-p^2}\,y$. Applying the disjoint blocks theorem then tells you immediately that the increments of $W(t)$ are independent.
Hence, we are done.
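The whole argument can be checked empirically: simulate the increments of $W$ over disjoint intervals and verify that they are pairwise uncorrelated (the value of $p$ and the time grid below are illustrative assumptions).

```python
import numpy as np

# Sketch: disjoint increments of W = p*W1 + sqrt(1-p^2)*W2 should be
# uncorrelated. p and the time grid are illustrative choices.
rng = np.random.default_rng(3)
p = 0.3
n_paths = 400_000
times = [0.0, 0.5, 1.2, 2.0]

# Simulate increments of W1 and W2 over each interval and combine.
incs = []
for t0, t1 in zip(times[:-1], times[1:]):
    d1 = rng.normal(0.0, np.sqrt(t1 - t0), n_paths)
    d2 = rng.normal(0.0, np.sqrt(t1 - t0), n_paths)
    incs.append(p * d1 + np.sqrt(1 - p**2) * d2)

C = np.corrcoef(np.array(incs))
print(np.round(C, 3))   # off-diagonal entries close to 0
```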
ADDED FOR FURTHER READING
Note that $p^2 + (\sqrt{1-p^2})^2 = 1$ and both $p,\sqrt{1-p^2}$ are non-negative. $W$ can be seen as the first component of an orthogonal transformation (a rotation) of the two-dimensional Brownian motion $(W_1(t),W_2(t))$.
That is, $(p,\sqrt{1-p^2})$ is the first row of an orthogonal matrix $O$ such that $W(t) = [O(W_1(t),W_2(t))]_1$, where $[\,\cdot\,]_1$ denotes the first component.
It is well known that stacking independent one-dimensional Brownian motions as components yields a higher-dimensional Brownian motion, that orthogonal transformations preserve the law of a standard multi-dimensional Brownian motion, and that the components of a standard multi-dimensional Brownian motion are again (lower-dimensional) Brownian motions. Combining these facts, $W$ is a Brownian motion.
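These facts can be sketched numerically: apply a rotation $O$ with first row $(p,\sqrt{1-p^2})$ to the increments of an independent pair $(W_1, W_2)$ and check that the rotated increments again have the covariance of a two-dimensional standard Brownian motion (the values of $p$ and $dt$ are illustrative).

```python
import numpy as np

# Sketch: a rotation O with first row (p, sqrt(1-p^2)) applied to the
# increments of an independent pair (W1, W2); the rotated increments
# again look like increments of a 2-d standard Brownian motion.
rng = np.random.default_rng(4)
p = 0.8
q = np.sqrt(1 - p**2)
O = np.array([[p,  q],
              [-q, p]])      # O @ O.T = I and det(O) = 1: a rotation
dt = 0.25

dW = rng.normal(0.0, np.sqrt(dt), size=(2, 300_000))  # increments of (W1, W2)
dV = O @ dW                                           # rotated increments

print(np.cov(dV))   # close to dt * I
```

The first row of `dV` is exactly the increment of $W = p W_1 + \sqrt{1-p^2}\,W_2$ from the argument above.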
Best Answer
Note that $$1-\Phi\left(\frac{\epsilon-\mu(t-s)}{\sigma\sqrt{t-s}}\right)+\Phi\left(\frac{-\epsilon-\mu(t-s)}{\sigma\sqrt{t-s}}\right)$$ can be rewritten as $$1-\Phi\left(\frac{\epsilon}{\sigma\sqrt{t-s}}-\frac\mu\sigma\sqrt{t-s}\right)+\Phi\left(\frac{-\epsilon}{\sigma\sqrt{t-s}}-\frac\mu\sigma\sqrt{t-s}\right),$$ so we have $$\lim_{s\to t^-}P(|X_s-X_t|>\epsilon)=1-\Phi(+\infty)+\Phi(-\infty)=1-1+0=0.$$ It does not matter that the limit is one-sided: WLOG we set $s<t$, and a similar calculation holds when $s>t$.
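For a concrete sketch of this limit (the values of $\mu$, $\sigma$, $\epsilon$, and $t$ below are illustrative assumptions), one can evaluate the tail probability for $s$ approaching $t$ from below:

```python
import numpy as np
from scipy.stats import norm

# Illustrative parameters for a Brownian motion with drift mu and
# volatility sigma; eps and t are also assumed values.
mu, sigma, eps, t = 0.7, 1.3, 0.1, 1.0

def tail_prob(s):
    """P(|X_t - X_s| > eps) for X_t - X_s ~ N(mu*(t-s), sigma^2*(t-s))."""
    d = t - s
    return (1 - norm.cdf((eps - mu * d) / (sigma * np.sqrt(d)))
            + norm.cdf((-eps - mu * d) / (sigma * np.sqrt(d))))

for s in [0.9, 0.99, 0.999, 0.9999]:
    print(s, tail_prob(s))   # decreases toward 0 as s -> t^-
```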