Once you have proved that the $3$ vectors are linearly independent, you automatically get that they form a basis of $\mathbb{R}^3$: they generate a $3$-dimensional subspace of a $3$-dimensional space, so they must generate the entire space! As for proving linear independence, the determinant approach proposed in the question is general and works well.
In this particular case, a simpler approach is to see that $v-u=(1,0,0)^T$, $w-u=(0,1,0)^T$, $u=(0,0,1)^T$ form what is called the canonical (standard) basis of $\mathbb{R}^3$; since $u$, $v-u$, $w-u$ span the same subspace as $u$, $v$, $w$, the vectors $u,v,w$ must also form a basis of $\mathbb{R}^3$.
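As a sanity check, the determinant criterion can be carried out numerically. The concrete vectors $u$, $v$, $w$ below are inferred from the differences above (the question's original vectors are not repeated here), so treat this as an illustrative sketch:

```python
import numpy as np

# Vectors inferred from v - u = (1,0,0)^T, w - u = (0,1,0)^T, u = (0,0,1)^T.
u = np.array([0.0, 0.0, 1.0])
v = np.array([1.0, 0.0, 1.0])
w = np.array([0.0, 1.0, 1.0])

# Stack as columns; a non-zero determinant means the vectors are
# linearly independent and hence a basis of R^3.
M = np.column_stack([u, v, w])
print(np.linalg.det(M))  # 1.0 (up to floating-point error)
```

Since the determinant is non-zero, the three vectors are linearly independent and, by the dimension argument above, a basis.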
This is not a proper answer, but it might be useful anyway. It's just what I've found so far reading books and trying to make sense of everything I learned about vectors in both calculus and algebra.
You can geometrically picture vectors in $\mathbb{R}^n$ as arrows placed at the origin. Every vector can be uniquely expressed as a linear combination of $n$ linearly independent vectors, so for every basis of this vector space $(\vec{e}_1,\dots,\vec{e}_n)$ we can write:
$$
v=v^1 \vec{e}_1 + \cdots + v^n \vec{e}_n
$$
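A small numerical sketch of this expansion, using a hypothetical (non-canonical) basis of $\mathbb{R}^3$: the components $v^1,\dots,v^n$ are the unique solution of a linear system whose coefficient matrix has the basis vectors as columns.

```python
import numpy as np

# Hypothetical basis e_1, e_2, e_3 of R^3, stored as the columns of E.
E = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
v = np.array([2.0, 3.0, 1.0])  # vector to expand in that basis

# Components (v^1, v^2, v^3) solve E c = v; uniqueness follows from
# linear independence of the columns.
c = np.linalg.solve(E, v)
print(c)  # [0. 2. 1.]
```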
Although using a vector basis allows us to uniquely identify every vector in $\mathbb{R}^n$, it isn't really useful when trying to identify every "point" in $\mathbb{R}^n$ because of its limitations (your axes will have to be straight lines and will have to include the "canonical" origin).
In order to identify every point of $\mathbb{R}^n$ with more freedom, our first approach could be affine geometry. In $\mathbb{R}^n$ viewed as an affine space, we can define a coordinate system $(O,\mathcal{B})$, with $O$ a point in $\mathbb{R}^n$ and $\mathcal{B}$ a basis of $\mathbb{R}^n$ as a vector space. Axes are still straight lines, but now we can move the origin to any other point $O$. An affine coordinate system is a genuine coordinate system, and it is definitely not the same thing as a basis (after all, $(O,\mathcal{B}) \neq \mathcal{B}$), but we can do better.
We can define a system of $n$ equations (not necessarily linear, a system of linear equations would bring us back to the affine case) that uniquely identify every point in $\mathbb{R}^n$:
$$
x^i = \Phi^i(q^1,\dots,q^n), \qquad i=1,\dots,n
$$
This is precisely what we do when we define cylindrical or spherical coordinates: express $x,y,z$ in terms of three new variables ($\rho,\varphi,z$ and $r,\theta,\varphi$ respectively).
This means that the values $(q^1,\dots,q^n)$ will be our new coordinates. Note that, if this system were linear, we would need to require the coefficient matrix to have a non-zero determinant in order for this system to have a (unique) solution. For a general system of equations, by virtue of the implicit function theorem, the analogous condition is:
$$
\frac{\partial(x^1,\dots,x^n)}{\partial(q^1,\dots,q^n)} \neq 0
$$
i.e. the Jacobian determinant of the transformation must be non-zero.
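This condition can be checked symbolically. For the spherical coordinates mentioned above, a short sympy computation gives the familiar Jacobian determinant:

```python
import sympy as sp

r, theta, phi = sp.symbols('r theta phi', positive=True)

# Spherical coordinates: x, y, z expressed in terms of (r, theta, phi).
x = r * sp.sin(theta) * sp.cos(phi)
y = r * sp.sin(theta) * sp.sin(phi)
z = r * sp.cos(theta)

# Jacobian matrix of the map (r, theta, phi) -> (x, y, z).
J = sp.Matrix([x, y, z]).jacobian([r, theta, phi])
print(sp.simplify(J.det()))  # r**2*sin(theta)
```

The determinant $r^2\sin\theta$ is non-zero away from $r=0$ and $\sin\theta=0$, which is exactly where spherical coordinates degenerate.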
These new coordinates $(q^1,\dots,q^n)$ aren't tied to any basis, but they induce one at every point $(q^1,\dots,q^n)$ of $\mathbb{R}^n$: the so-called coordinate basis of this coordinate system:
$$
\vec{v}_\mu = \frac{\partial\vec{\Phi}}{\partial q^\mu}
$$
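For a concrete picture, here is the coordinate basis of polar coordinates in $\mathbb{R}^2$, computed as the partial derivatives of the map $\vec{\Phi}(\rho,\varphi) = (\rho\cos\varphi,\ \rho\sin\varphi)$; these are just the columns of the Jacobian matrix:

```python
import sympy as sp

rho, phi = sp.symbols('rho phi', positive=True)

# Polar coordinates in R^2: Phi(rho, phi) = (rho cos phi, rho sin phi).
Phi = sp.Matrix([rho * sp.cos(phi), rho * sp.sin(phi)])

# Coordinate basis vectors v_mu = dPhi/dq^mu.
v_rho = Phi.diff(rho)  # (cos phi, sin phi): radial direction
v_phi = Phi.diff(phi)  # (-rho sin phi, rho cos phi): angular direction
print(v_rho.T, v_phi.T)
```

Note that these basis vectors change from point to point, unlike a fixed basis of the vector space $\mathbb{R}^2$.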
In this sense, we can see that the components of the position vector at a point $p\in\mathbb{R}^n$ will in general differ from the coordinates of the point $p$ itself. The components depend on the coordinate basis (or any other basis defined in terms of it), while the coordinates of a point depend on the coordinate system itself.
In fact, we are no longer talking about $\mathbb{R}^n$ as a vector space, but this space does have a vector space attached at every point, a vector space we call the tangent space at $p$: $T_p\mathbb{R}^n$. The coordinate basis at each point is the vector basis that our coordinate system induces on the tangent space at that point.
Note that the components of a vector (or a tensor, for that matter) may also be called coordinates. I have only seen this usage when reading about pure vector spaces, without any notion of geometry, metrics or anything of the sort. For example, a matrix $A\in\mathcal{M}_{n\times n}$ which satisfies $\vec{v}_{\mathcal{B}'} = A\vec{v}_{\mathcal{B}}$ might be called a change-of-coordinates matrix (from the coordinates in $\mathcal{B}$ to the coordinates in $\mathcal{B}'$) or a change-of-basis matrix (from the basis $\mathcal{B}'$ to the basis $\mathcal{B}$, because it satisfies $\mathcal{B}'A=\mathcal{B}$, if we allow ourselves this abuse of notation).
Nonetheless, I always say components when referring to a vector (or tensor), and coordinates when referring to a point.
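The two relations above can be verified numerically. Below, two hypothetical bases of $\mathbb{R}^2$ are stored as the columns of matrices $B$ and $B'$, and the change-of-coordinates matrix is recovered as $A = (B')^{-1}B$:

```python
import numpy as np

# Two hypothetical bases of R^2, stored as columns.
B  = np.array([[1.0, 1.0],
               [0.0, 1.0]])
Bp = np.array([[2.0, 0.0],
               [0.0, 1.0]])

# One vector x, expressed in each basis: x = B vB = Bp vBp.
vB = np.array([3.0, 1.0])
x  = B @ vB

# Change-of-coordinates matrix: vBp = A vB, equivalently Bp A = B.
A   = np.linalg.solve(Bp, B)
vBp = A @ vB

assert np.allclose(Bp @ vBp, x)  # same vector, new components
assert np.allclose(Bp @ A, B)    # the "abuse of notation" identity
```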
Best Answer
With your approach, what you can prove is that they are linearly independent, and indeed they are. But in order to be a basis, they must also span $\mathbb{R}^4$, and they don't. The space $\mathbb{R}^4$ has dimension $4$, and therefore no set with fewer than $4$ elements can span it.
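This can be seen with a rank computation. The three vectors below are illustrative stand-ins (the question's vectors are not repeated here): even when linearly independent, three vectors span at most a $3$-dimensional subspace of $\mathbb{R}^4$.

```python
import numpy as np

# Three hypothetical vectors in R^4, stored as columns of V.
V = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, 1.0, 1.0]])

# Rank 3 means linear independence, but rank < 4 means they
# cannot span R^4, so they are not a basis.
print(np.linalg.matrix_rank(V))  # 3
```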