Your first attempt is completely wrong, I'm afraid. The second attempt contains a good idea that can be fixed to provide a proof.
Suppose $\{v_1,v_2,\dots,v_n\}$ is a linearly independent set and that (without loss of generality) $\{v_1,\dots,v_i\}$ is a linearly dependent subset. Then $i\ge1$, because the empty set is linearly independent. Thus, again without loss of generality, we can write
$$
v_1=b_2v_2+\dots+b_iv_i
$$
which can also be written as
$$
v_1=b_2v_2+\dots+b_iv_i+0v_{i+1}+\dots+0v_{n}
$$
which contradicts the assumed linear independence of $\{v_1,\dots,v_n\}$.
However, it's better to use a different definition/characterization of linearly dependent sets: saying that $\{v_1,\dots,v_i\}$ is linearly dependent means that there exist scalars $a_1,\dots,a_i$ not all zero such that
$$
a_1v_1+\dots+a_iv_i=0
$$
Now we can rewrite this as
$$
a_1v_1+\dots+a_iv_i+a_{i+1}v_{i+1}+\dots+a_nv_n=0
$$
where $a_{i+1}=\dots=a_n=0$. The scalars $a_1,\dots,a_i,a_{i+1},\dots,a_n$ are not all zero, so the set $\{v_1,\dots,v_n\}$ is linearly dependent.
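This coefficient-padding argument is easy to check numerically. A minimal sketch with numpy (the specific vectors are invented for illustration): a finite set is linearly dependent exactly when the matrix whose rows are those vectors has rank smaller than the number of vectors, and appending extra vectors to a dependent set keeps it dependent.

```python
import numpy as np

# A dependent pair in R^3 (made-up example): v1 = 2*v2.
v1 = np.array([2.0, 4.0, 6.0])
v2 = np.array([1.0, 2.0, 3.0])
dependent = [v1, v2]

# Enlarge the set, mirroring the padding with a_{i+1} = ... = a_n = 0 above.
v3 = np.array([0.0, 1.0, 0.0])
extended = dependent + [v3]

def is_dependent(vectors):
    """A set is linearly dependent iff the rank of the matrix whose
    rows are the vectors is smaller than the number of vectors."""
    m = np.vstack(vectors)
    return np.linalg.matrix_rank(m) < len(vectors)

print(is_dependent(dependent))  # True
print(is_dependent(extended))   # True: the superset is still dependent
```

Adding rows raises the rank by at most the number of rows added, so a rank deficit never disappears; this is the numeric shadow of the zero-coefficient padding in the proof.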
The authors state that the empty set spans the zero subspace $\{ 0 \}$ by convention.
However, this really depends on your definition of subspace spanned by a set. The definition I use is the following:
the subspace spanned by a set $S \subset V$ is defined to be the intersection of all subspaces of $V$ that contain $S$. That is, if $\langle S \rangle$ denotes the subspace spanned by $S$, then
$$
\langle S \rangle := \bigcap_{S \subset W \leq V} W,
$$
where $W \leq V$ indicates that $W$ is a subspace of $V$. If $S$ is the empty set, then every subspace of $V$ contains $S$, so the intersection runs over all subspaces of $V$; the zero subspace $\{ 0 \}$ is one of them, and it is contained in every subspace, so $\langle \emptyset \rangle = \{ 0 \}$.
For the second question, there appears to be a typo. The sentence should read:
By theorem $(5.1)$ any set of $n+1$ vectors in $V$ is linearly dependent, and since a set consisting of a single nonzero vector is linearly independent, it follows that, for some integer $m \geq 1$, $V$ contains linearly independent vectors $b_1,\dots,b_m$ such that any set of $m+1$ vectors in $V$ is linearly dependent.
Perhaps that should clear the confusion. To elaborate on why this corrected statement is true, proceed by contradiction:
suppose it is false that
for some integer $m \geq 1$, $V$ contains linearly independent vectors $b_1,\dots,b_m$ such that any set of $m+1$ vectors in $V$ is linearly dependent.
What would this mean? This means that for each $m \geq 1$, if $b_1,\dots,b_m$ is any set of $m$ linearly independent vectors, then there is a vector $b_{m+1}$ such that $b_1,\dots,b_{m+1}$ is also linearly independent. However, $(5.1)$ says that this is not possible for $m = n$, where $n$ is the size of the given generating set of $V$.
Edit: based on the comments requesting clarification.
I am not sure that the statement under consideration is of the form "(not P) or Q". I always prefer to reason out the negation in a step-by-step fashion rather than work with formal statements and the rules for their negation. It leads to less confusion, at least in my mind.
Now, the negation of
There exists $m \geq 0$ such that ~blah~.
is
For every $m \geq 0$ we have ~not blah~.
Here ~blah~ is
There exists a set of linearly independent vectors $b_1,\dots,b_m$ such that ~foo~.
So, ~not blah~ is
For any set of linearly independent vectors $b_1,\dots,b_m$, we have ~not foo~.
Here, ~foo~ is
Any set of $m+1$ vectors in $V$ is linearly dependent.
So, ~not foo~ is
Some set of $m+1$ vectors in $V$ is linearly independent.
So, the negation of the statement in consideration is:
For every $m \geq 0$: for any set of linearly independent vectors $b_1,\dots,b_m$, some set of $m+1$ vectors in $V$ is linearly independent.
There is no loss of generality in taking the set of $m+1$ linearly independent vectors in $V$ to be of the form $b_1,\dots,b_{m+1}$, because any subset of a linearly independent set is linearly independent. So, if we start with the set $b_1,\dots,b_m$ of $m$ linearly independent vectors and obtain a set $v_1,\dots,v_{m+1}$ of $m+1$ linearly independent vectors as per the claim, then in particular $v_1,\dots,v_m$ is a set of $m$ linearly independent vectors for which there exists $v_{m+1}$ such that $v_1,\dots,v_{m+1}$ is linearly independent. So we might as well relabel the $v_i$'s as $b_i$'s and proceed inductively, since this does not change the proof.
Hope this helps. Feel free to reply in the comments for any clarifications.
For part $(a)$
Think of the linear map
$T:\mathbb{R}^n\to \mathbb{R}$ given by
$T(a_1,a_2,\dots,a_n)=a_1+a_2+\dots+a_n$
Notice that $T$ is a nonzero linear transformation and the codomain is of dimension $1$.
What is $\operatorname{Ker}(T)$? What happens if we apply the Rank-Nullity Theorem?
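A quick numeric sanity check (a sketch with numpy; $n=4$ is an arbitrary illustrative choice, not from the original): $T$ is represented by the $1\times n$ matrix of all ones, which is nonzero, so its rank is $1$, and Rank-Nullity forces $\dim\operatorname{Ker}(T)=n-1$.

```python
import numpy as np

n = 4  # illustrative dimension (assumption for this sketch)
T = np.ones((1, n))  # matrix of T(a_1, ..., a_n) = a_1 + ... + a_n

rank = np.linalg.matrix_rank(T)  # dim of the image; 1 because T is nonzero
nullity = n - rank               # Rank-Nullity: dim Ker(T) = n - rank

print(rank, nullity)  # 1 3
```

So the kernel, the hyperplane $a_1+\dots+a_n=0$, has dimension $n-1$.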
Similarly, for $(b)$, take the mapping
$U:M_n(\mathbb{R})\to \mathbb{R}$ defined by
$U(A) =\operatorname{tr}(A)$ .
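The same check works for the trace (a numpy sketch; $n=3$ is an illustrative choice): identifying $M_n(\mathbb{R})$ with $\mathbb{R}^{n^2}$, the trace is the nonzero linear functional represented by the flattened identity matrix, so Rank-Nullity gives the traceless matrices dimension $n^2-1$.

```python
import numpy as np

n = 3  # illustrative matrix size (assumption for this sketch)
# On M_n(R) ~ R^(n^2), the trace is the row vector given by flattening I_n.
U = np.eye(n).reshape(1, n * n)

rank = np.linalg.matrix_rank(U)  # 1: U is a nonzero functional
nullity = n * n - rank           # Rank-Nullity: dim Ker(tr) = n^2 - 1

# Spot check: U applied to a flattened matrix equals its trace.
A = np.arange(9.0).reshape(3, 3)
assert np.isclose(U @ A.reshape(-1), np.trace(A))

print(nullity)  # 8
```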