Assume $\textbf{v} \in U \cap W$. Then $\textbf{v} = a(1,1,0,-1)+b(0,1,3,1)$ and $\textbf{v} = x(0,-1,-2,1)+y(1,2,2,-2)$.
Subtracting the two expressions for $\textbf{v}$ gives $a(1,1,0,-1)+b(0,1,3,1)-x(0,-1,-2,1)-y(1,2,2,-2)=0$. Solving this homogeneous system for $a, b, x$, and $y$ (the solution is determined only up to a common scalar), one nontrivial choice is $x=1$, $y=1$, $a=1$, $b=0$,
so $\textbf{v}=(1,1,0,-1)$.
You can verify the result by adding $(0,-1,-2,1)$ and $(1,2,2,-2)$: their sum is indeed $(1,1,0,-1)$.
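If you want to double-check the computation by machine, the homogeneous system above can be handed to SymPy's nullspace routine (a sketch; it assumes SymPy is installed, and the matrix below is just the coefficient matrix of the equation above with unknowns ordered $(a, b, x, y)$):

```python
from sympy import Matrix

# Each column holds the coefficients of one unknown, in the order (a, b, x, y),
# in the equation a(1,1,0,-1) + b(0,1,3,1) - x(0,-1,-2,1) - y(1,2,2,-2) = 0.
M = Matrix([
    [ 1, 0,  0, -1],
    [ 1, 1,  1, -2],
    [ 0, 3,  2, -2],
    [-1, 1, -1,  2],
])

ns = M.nullspace()          # basis for the solution set of M * (a,b,x,y)^T = 0
print(ns)                   # a single vector, proportional to (1, 0, 1, 1)

a, b = ns[0][0], ns[0][1]
v = a * Matrix([1, 1, 0, -1]) + b * Matrix([0, 1, 3, 1])
print(v.T)                  # proportional to (1, 1, 0, -1)
```

The one-dimensional nullspace confirms that $U \cap W$ is the line spanned by $(1,1,0,-1)$.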
I think the following article:
Gregory H. Moore. The axiomatization of linear algebra: 1875-1940. Historia Mathematica, Volume 22, Issue 3, 1995, Pages 262–303
(Available here from Elsevier)
may shed some light on your question, although you may not have enough mathematical experience to understand the entire article. Here is my understanding after browsing the article, but I must stress that I am not a mathematical historian, so please don't quote me!
The idea of an abstract space in which an addition is defined between elements and a field acts on them (rather than a particular realization such as $\mathbb{R}^n$ or $C([0,1])$) seems to be due to Peano in 1888, who called such spaces linear systems. The definition of an abstract vector space didn't catch on until the 1920s, in the work of Banach, Hahn, and Wiener, each working separately. Hahn defined linear spaces in order to unify the theory of singular integrals with Schur's linear transformations of series (both involving infinite-dimensional spaces). Wiener introduced vector systems, which seem to be roughly equivalent to Banach's definition; Banach was motivated by finding a common framework for integral operators, which were defined on champs (domains). (Banach's 1922 paper "Sur les opérations dans les ensembles abstraits et leur application aux équations intégrales" is available online and is quite readable.)
I understand the modern name vector space is popular because of a widely circulated 1941 textbook by Birkhoff and MacLane, A Survey of Modern Algebra, where the term is used.
As Asaf and Hans have indicated in their comments, the motivation for calling such spaces vector spaces is that, intuitively, they generalize our understanding of "vectors" (differences between points) in finite-dimensional Euclidean space. The motivation for calling them linear spaces is that the ability to add different elements together is the crucial feature which lets us apply the general theory to specific problems that are not obviously (to the 1920s eye) about vectors (in particular, in PDE and mathematical physics).
In your course, it is unlikely you will cover material that requires this abstraction, but it is a good habit for later mathematics to work in generality while you maintain your intuition in concrete examples.
Yes, it has to be a vector space in order to have a basis.
If your linear system of equations is homogeneous, meaning that the right-hand side is the $0$-vector, then the solution set will always be a subspace of whatever space it resides in.
Why is this true? Any linear system of equations can be written in terms of a matrix:
$$Ax = 0.$$
The solution set consists of all vectors $x$ that solve this equation, so all you need to check is that the subspace axioms hold.
Certainly $A0 = 0$. If $Ax_1=0$ and $Ax_2=0$, then since $A$ acts linearly, we get $A(x_1+x_2) = Ax_1+Ax_2 = 0+0=0$. Finally, if $c$ is a scalar, then again by linearity, $A(cx) = cAx = c0 = 0$.
In conclusion, the solution set of $Ax=0$ is indeed a subspace. If $A$ is $m\times n$, then it will be a subspace of $\mathbb{R}^n$.
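As a quick numerical illustration of the closure argument (using a toy $2\times 3$ system I made up, not one from the question), the three checks can be run directly with NumPy:

```python
import numpy as np

# A hypothetical homogeneous system Ax = 0, with A of size 2x3.
A = np.array([[1., 2., -1.],
              [0., 1.,  1.]])

# Two solutions found by hand: row 2 gives y = -z, row 1 then gives x = 3z.
x1 = np.array([3., -1., 1.])
x2 = np.array([-6., 2., -2.])   # another point on the same line of solutions

# The subspace axioms from the argument above, checked numerically:
assert np.allclose(A @ np.zeros(3), 0)   # the zero vector is a solution
assert np.allclose(A @ (x1 + x2), 0)     # closed under addition
assert np.allclose(A @ (7.0 * x1), 0)    # closed under scalar multiplication
```

Here the solution set is the line spanned by $(3,-1,1)$, a one-dimensional subspace of $\mathbb{R}^3$, matching the count $n - \operatorname{rank}(A) = 3 - 2 = 1$.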