The proof in the book is correct and yours isn't.
Indeed, one cannot prove that a subspace of a finitely generated vector space $V$ is finitely generated on the basis of the vector space axioms alone, without using a very specific property of the ring of scalars, namely that it is a field, and hence a noetherian ring.
It is true that any $u\in U$ is a linear combination of a spanning set for $V$, but what you need is a finite set of vectors in $U$ (not just in $V$) that spans $U$.
The key points in the proof are:
no linearly independent set of vectors can have more elements than the dimension of the space $V$;
if $(v_1,\dots,v_{j-1})$ is linearly independent and $v_j\notin\operatorname{span}(v_1,\dots,v_{j-1})$, then also $(v_1,\dots,v_{j-1},v_j)$ is linearly independent.
If $U$ were not finitely generated, then the process outlined in the proof, which uses the second key point, would not stop, contradicting the first key point.
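The process outlined in the proof can be sketched computationally (a minimal numerical illustration using NumPy, not part of the book's proof; the function name and the rank-based independence test are my own choices):

```python
import numpy as np

def extend_to_basis(candidates, tol=1e-10):
    """Greedily collect a linearly independent list from `candidates`
    (rows of a matrix), mimicking the proof's process: a vector is added
    exactly when it lies outside the span of those chosen so far."""
    chosen = []
    for v in candidates:
        trial = np.array(chosen + [v])
        # The rank increases iff v is not in span(chosen) -- the second key point
        if np.linalg.matrix_rank(trial, tol=tol) == len(trial):
            chosen.append(v)
    return np.array(chosen)

# Vectors spanning a 2-dimensional subspace U of R^3
U_spanning = np.array([[1.0, 0, 0],
                       [2.0, 0, 0],    # dependent on the first
                       [0.0, 1, 0],
                       [1.0, 1, 0]])   # dependent on the first and third
basis = extend_to_basis(U_spanning)
# The loop can add at most dim(R^3) = 3 vectors -- the first key point
```

Here the process stops after picking two vectors, since every remaining candidate already lies in their span.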
Note that the first key point relies on the fact that nonzero scalars have an inverse. Without this property, one cannot prove it.
The authors state that the empty set spans the zero subspace $\{ 0 \}$ by convention.
However, this really depends on your definition of subspace spanned by a set. The definition I use is the following:
the subspace spanned by a set $S \subset V$ is defined to be the intersection of all subspaces of $V$ that contain $S$. That is, if $\langle S \rangle$ denotes the subspace spanned by $S$, then
$$
\langle S \rangle := \bigcap_{S \subset W \leq V} W,
$$
where $W \leq V$ indicates that $W$ is a subspace of $V$. So, if $S$ is the empty set, then every subspace of $V$ contains the empty set, so the intersection runs over all subspaces of $V$; since the zero subspace $\{ 0 \}$ is itself a subspace and is contained in every other subspace, $\langle \emptyset \rangle = \{ 0 \}$.
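As a concrete illustration of this definition (an example of mine, not from the original text), take $V = \mathbb R^2$ and $S = \{(1,0)\}$. Every subspace containing $(1,0)$ must contain all its scalar multiples, and the $x$-axis is itself such a subspace, so the intersection is exactly
$$
\langle S \rangle = \bigcap_{(1,0) \in W \leq \mathbb R^2} W = \{ (x, 0) : x \in \mathbb R \},
$$
which agrees with the usual description of $\langle S \rangle$ as the set of linear combinations of elements of $S$.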
For the second question, there appears to be a typo. The sentence should read:
By theorem $(5.1)$ any set of $n+1$ vectors in $V$ is linearly dependent, and since a set consisting of a single nonzero vector is linearly independent, it follows that, for some integer $m \geq 1$, $V$ contains linearly independent vectors $b_1,\dots,b_m$ such that any set of $m+1$ vectors in $V$ is linearly dependent.
Perhaps that clears up the confusion. To elaborate on why this corrected statement is true, proceed by contradiction:
suppose it is false that
for some integer $m \geq 1$, $V$ contains linearly independent vectors $b_1,\dots,b_m$ such that any set of $m+1$ vectors in $V$ is linearly dependent.
What would this mean? This means that for each $m \geq 1$, if $b_1,\dots,b_m$ is any set of $m$ linearly independent vectors, then there is a vector $b_{m+1}$ such that $b_1,\dots,b_{m+1}$ is also linearly independent. However, $(5.1)$ says that this is not possible for $m = n$, where $n$ is the size of the given generating set of $V$.
Edit: based on the comments requesting clarification.
I am not sure that the statement under consideration is of the form "(not P) or Q". I always prefer to reason out the negation in a step-by-step fashion rather than work with formal statements and the rules for their negation. It leads to less confusion, at least in my mind.
Now, the negation of
There exists $m \geq 0$ such that ~blah~.
is
For every $m \geq 0$ we have ~not blah~.
Here ~blah~ is
There exists a set of linearly independent vectors $b_1,\dots,b_m$ such that ~foo~.
So, ~not blah~ is
For any set of linearly independent vectors $b_1,\dots,b_m$, we have ~not foo~.
Here, ~foo~ is
Any set of $m+1$ vectors in $V$ is linearly dependent.
So, ~not foo~ is
Some set of $m+1$ vectors in $V$ is linearly independent.
So, the negation of the statement in consideration is:
For every $m \geq 0$ and for any set of linearly independent vectors $b_1,\dots,b_m$, some set of $m+1$ vectors in $V$ is linearly independent.
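For readers who do prefer formal notation, the same negation can be written with quantifiers (this is only a compact restatement of the steps above):
$$
\neg\Big(\exists\, m \geq 0\ \ \exists\, \text{lin. indep. } b_1,\dots,b_m\ \ \forall\, v_1,\dots,v_{m+1}:\ v_1,\dots,v_{m+1}\ \text{dependent}\Big)
$$
becomes, pushing the negation through one quantifier at a time,
$$
\forall\, m \geq 0\ \ \forall\, \text{lin. indep. } b_1,\dots,b_m\ \ \exists\, v_1,\dots,v_{m+1}:\ v_1,\dots,v_{m+1}\ \text{independent}.
$$
Each $\exists$ flips to $\forall$ (and vice versa), and the innermost predicate is negated, exactly as in the ~blah~/~foo~ breakdown.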
There is no loss of generality in taking the set of $m+1$ linearly independent vectors in $V$ to be of the form $b_1,\dots,b_{m+1}$ because any subset of a linearly independent set is linearly independent. So, if we start with the set $b_1,\dots,b_m$ of $m$ linearly independent vectors and get $v_1,\dots,v_{m+1}$ a set of $m+1$ linearly independent vectors as per the claim, then in particular $v_1,\dots,v_m$ is a set of $m$ linearly independent vectors such that there exists $v_{m+1}$ such that $v_1,\dots,v_{m+1}$ is linearly independent. So, we might as well relabel the $v_i$'s as $b_i$'s and proceed inductively, since this does not change the proof.
Hope this helps. Feel free to reply in the comments for any clarifications.
Best Answer
You may use the assumption that $S$ is a subspace of $\mathbb R^A,$ where $A$ is a finite set (and $\mathbb R^A$ is the set of all functions $x:A\rightarrow\mathbb R$).
This is not the place to say much, but nevertheless, the construction below has several important implications which go beyond the scope of the given question.
=====================================
The case $A=\emptyset$ is trivial, so let $A\ne\emptyset$.
Define (construct by induction) a decreasing sequence of linear subspaces
$$ S=S_1 \supseteq S_2 \supseteq S_3 \supseteq \ldots $$
and two sequences of elements $b_k\in S_k$ and $a_k\in A$ $(k=1,2,\ldots)$
-- both sequences being a priori finite or infinite -- subject to the following four conditions:
The above sequences are finite since $A$ was assumed to be finite -- at some point we will have $S_{n+1}=\{0\}.$ Then
$$ \{b_1,\ \ldots,\ b_n\} $$
is a basis of $S$. Clearly, the last index $n$ with $S_n\ne\{0\}$ satisfies $n\le|A|,$ i.e.
$$ \dim S\ \le\ \dim(\mathbb R^A)$$
=====================================
REMARK 1. The theorem about bases holds for arbitrary linear spaces (over arbitrary fields), including infinite-dimensional spaces -- you don't even need to talk about subspaces (every linear subspace is a linear space, which is all you need to know).
However, the above proof for the finite-dimensional case (which holds over arbitrary fields, of course) is especially useful in linear algebra.
REMARK 2. In general, once infinite-dimensional spaces are included, the theorem about bases requires transfinite induction or some other principle equivalent to the axiom of choice. The axiom of choice is genuinely necessary: if one ignores it entirely, the theorem about bases cannot be proved.