Consider the set of vectors $\{\mathbf v_1, \mathbf v_2, \dots, \mathbf v_k\}$ in the vector space $V$ over the field $F$ (the vector space might be $\Bbb R^3$ for instance and the field (of scalars) might be $\Bbb R$). Here are some definitions:
- A linear combination of these $k$ vectors is another vector
$\mathbf w=a_1\mathbf v_1 + a_2\mathbf v_2 + \cdots + a_k\mathbf v_k$, where $a_1, a_2, \dots, a_k$ are scalars.
- The span of these vectors is the set of all linear combinations
of these vectors, i.e. $\operatorname{span}\{\mathbf v_1, \mathbf v_2, \dots, \mathbf v_k\} = \{a_1\mathbf v_1 + a_2\mathbf v_2 + \cdots + a_k\mathbf v_k \mid a_1, a_2, \dots, a_k \in F\}$.
- This set of vectors is a linearly independent set if the only
linear combination of these vectors that produces the zero vector is
the trivial one. I.e. $a_1\mathbf v_1 + a_2\mathbf v_2 + \cdots + a_k\mathbf v_k = \mathbf 0 \implies a_1=a_2=\cdots=a_k=0$.
- A set of vectors is a linearly dependent set if it is not a linearly independent set.
Now let's show that a linearly dependent set has at least one vector which is a linear combination of the others. Let $\{\mathbf a,\mathbf b,\mathbf c\} \subset \Bbb R^3$ be a linearly dependent set. Then by definition, the equation $$x\mathbf a+y\mathbf b+z\mathbf c=\mathbf 0$$ for scalars $x,y,z$ has more than one solution ($x=y=z=0$ is definitely a solution, but it's not the only one). Thus at least one of the scalars $x,y,z$ is nonzero. WLOG let's say it's $x$. Then we can rearrange this equation by subtracting every term except $x\mathbf a$ from both sides: $$x\mathbf a=-y\mathbf b-z\mathbf c$$ Now because $x \ne 0$, we can divide both sides by it to get $$\mathbf a=-\frac yx\mathbf b -\frac zx\mathbf c$$ Thus $\mathbf a$ is a linear combination of $\mathbf b$ and $\mathbf c$. You can see why this would fail in the linearly independent case -- you wouldn't be able to divide out any of the coefficients because they are all zero.
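As a concrete sanity check of the rearrangement above, here is a small sketch using plain Python lists as vectors in $\Bbb R^3$. The specific vectors and coefficients are my own illustrative choices, not from the original argument.

```python
def scale(s, v):
    """Multiply a vector by a scalar, componentwise."""
    return [s * vi for vi in v]

def add(u, v):
    """Add two vectors componentwise."""
    return [ui + vi for ui, vi in zip(u, v)]

b = [4.0, 5.0, 6.0]
c = [7.0, 8.0, 10.0]
a = add(b, c)               # a = b + c, so {a, b, c} is linearly dependent:
                            # 1*a + (-1)*b + (-1)*c = 0, with x = 1 nonzero.

x, y, z = 1.0, -1.0, -1.0   # a nontrivial solution of x*a + y*b + z*c = 0

# Rearranged as in the proof: a = (-y/x)*b + (-z/x)*c
a_reconstructed = add(scale(-y / x, b), scale(-z / x, c))
print(a_reconstructed)      # equals a
```

Running this prints the same vector as `a`, matching the algebraic rearrangement.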
Now let's consider the set of vectors $\{\mathbf 0\}$. That is the set only containing the zero vector. Is this set linearly independent or linearly dependent? It is linearly dependent because $x\mathbf 0=\mathbf 0$ holds for every scalar $x$, so there are nontrivial solutions -- e.g. $1\cdot\mathbf 0 = \mathbf 0$. Likewise, any set which contains the zero vector will be a linearly dependent set (confirm this for yourself).
I now claim that the zero vector in a vector space $V$ is a linear combination of any non-empty set of vectors in $V$. Can you see why that must be true?
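Both of the last two claims can be checked with a short sketch, again using plain Python lists as vectors; the example vectors here are arbitrary choices for illustration.

```python
def scale(s, v):
    """Multiply a vector by a scalar, componentwise."""
    return [s * vi for vi in v]

def add(u, v):
    """Add two vectors componentwise."""
    return [ui + vi for ui, vi in zip(u, v)]

zero = [0.0, 0.0]
vs = [[1.0, 2.0], [3.0, -1.0], [0.5, 4.0]]

# (1) The zero vector is the trivial linear combination of any
#     non-empty set: take every coefficient equal to 0.
combo = zero
for v in vs:
    combo = add(combo, scale(0.0, v))
print(combo == zero)        # True

# (2) Any set containing the zero vector is dependent: the combination
#     1*zero + 0*v1 + 0*v2 + ... has a nonzero coefficient (the 1 on
#     zero) yet still produces the zero vector.
nontrivial = add(scale(1.0, zero), combo)
print(nontrivial == zero)   # True
```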
Does this answer your questions, or is there something else I need to hit on?
Your definition is correct: if $\{v_1,\ldots, v_n\}$ is a linearly independent set, then no $v_i$ can be written as a linear combination of the other vectors in the set. Call this "definition 1."
Note that this is equivalent to saying that the zero vector can be uniquely written as a linear combination of the $v_i$, namely $\vec{0} = 0\cdot v_1+0\cdot v_2+\cdots+0\cdot v_n$. Call this "definition 2."
[To show definitions 1 and 2 are equivalent, just do some rearranging. For example, if $v_1=v_2+2v_3$, this violates linear independence in definition 1. Rearranging gives $\vec{0}=-v_1+v_2+2v_3$, which also violates linear independence in definition 2.]
Now, consider the statement in the book. If $x$ is in the span of $U=\{v_1,\ldots,v_n\}$, then by definition it can be written as a linear combination of elements in $U$. In particular, $x$ need not be in $U$, as pointed out by lulu, so this does not violate definition 1, which I think is the source of your confusion. Now, to show that this linear combination is unique, you need to use linear independence.
Suppose for the sake of contradiction that $x$ can be expressed as a linear combination in more than one way (i.e. not uniquely):
\begin{align}
x&=a_1v_1+\cdots+a_n v_n\\
x&=b_1v_1+\cdots +b_n v_n
\end{align}
Then subtracting the two equations gives
$$\vec{0} = (a_1-b_1) v_1 + \cdots + (a_n-b_n) v_n.$$
Why does this violate linear independence? (See definition 2.)
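Here is a numerical sketch of the uniqueness claim in $\Bbb R^2$, with an independent pair $v_1, v_2$ and coefficients chosen for illustration: solving the $2\times 2$ system $x = b_1 v_1 + b_2 v_2$ (by Cramer's rule) must recover exactly the coefficients we started with.

```python
v1 = (1.0, 1.0)
v2 = (1.0, -1.0)    # {v1, v2} is independent: the determinant below is nonzero

# Build x as a known linear combination a1*v1 + a2*v2.
a1, a2 = 3.0, 2.0
x = (a1 * v1[0] + a2 * v2[0], a1 * v1[1] + a2 * v2[1])

# Solve x = b1*v1 + b2*v2 by Cramer's rule. The determinant is nonzero
# precisely because {v1, v2} is linearly independent.
det = v1[0] * v2[1] - v2[0] * v1[1]
b1 = (x[0] * v2[1] - v2[0] * x[1]) / det
b2 = (v1[0] * x[1] - x[0] * v1[1]) / det

print(b1, b2)   # recovers (3.0, 2.0): the representation is unique
```

If the set were dependent, `det` would be zero and the system would not pin down the coefficients, mirroring how the proof above needs linear independence.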
Best Answer
Well, the RHS is quite trivially a subset of the LHS. Conversely, you can recover both $x$ and $y$ very easily from the vectors $x+y$ and $x-y$. Namely, $\frac12(x+y)+\frac12(x-y)=x$. Then once you have $x$, you can subtract it from $x+y$ to get $y$. Thus we have the reverse inclusion.
Hence they are equal.
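The averaging trick can be checked numerically; this sketch uses plain Python lists as vectors in $\Bbb R^3$, with example values of my own choosing.

```python
def scale(s, v):
    """Multiply a vector by a scalar, componentwise."""
    return [s * vi for vi in v]

def add(u, v):
    """Add two vectors componentwise."""
    return [ui + vi for ui, vi in zip(u, v)]

x = [1.0, -2.0, 3.0]
y = [4.0, 0.5, -1.0]

s = add(x, y)                # x + y
d = add(x, scale(-1.0, y))   # x - y

x_back = add(scale(0.5, s), scale(0.5, d))   # (1/2)(x+y) + (1/2)(x-y) = x
y_back = add(s, scale(-1.0, x_back))         # (x+y) - x = y

print(x_back, y_back)        # recovers the original x and y
```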