Those are not clear statements of dependence and independence of the set $\{v_1,\dots,v_n\}$.
The set would be dependent if:
There exist $\alpha_i$, not all of which are zero, such that $\sum \alpha_iv_i=0$.
The set would be independent if it satisfies the negation, i.e. there is no such set of coefficients, not all zero:
If $\sum \alpha_iv_i=0$, then all the $\alpha_i=0$.
If all the $\alpha_i=0$, then $\sum \alpha_iv_i=0$ always holds, so it is not interesting! It is a special case that always works. A linearly independent set is special precisely because you can't get a combination to add up to zero unless you use all zeros (which always works).
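As a small worked example (with vectors chosen here just for illustration): in ${\mathbb R}^2$ the set $\{(1,2),(2,4)\}$ is dependent, because
$$2\,(1,2)-1\,(2,4)=(0,0)$$
is a combination equal to zero with coefficients that are not all zero. By contrast, for $\{(1,0),(0,1)\}$ the equation $\alpha_1(1,0)+\alpha_2(0,1)=(\alpha_1,\alpha_2)=(0,0)$ forces $\alpha_1=\alpha_2=0$, so that set is independent.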
In any number of dimensions, linear independence expresses the idea that no vector of the set is in the span of the other vectors.
For example, if $\sum \alpha_iv_i=0$ where at least one of the alphas is nonzero (say, for convenience, $\alpha_1$), then $v_1=-\sum_{i=2}^n \alpha_1^{-1}\alpha_iv_i$, and so $v_1$ can be generated by $v_2,\dots,v_n$. Then we could just throw $v_1$ out, since we know the other $v_i$ can already generate it.
So when a set is linearly independent, it means that each member really does contribute to the vector space they generate. Each element adds something new that can't be produced by the other vectors.
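To see this pruning in action (again with illustrative vectors), take $v_1=(1,1)$, $v_2=(1,0)$, $v_3=(0,1)$. Here $1\cdot v_1-1\cdot v_2-1\cdot v_3=0$ with $\alpha_1=1\ne 0$, so
$$v_1=v_2+v_3,$$
and $v_1$ can be thrown out without shrinking the span: $v_2,v_3$ already generate everything that $v_1,v_2,v_3$ do.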
It is true that two vectors are dependent if they "point in the same (or opposite) direction", i.e. if they are aligned.
But that criterion alone does not carry over to three vectors in $3$D or more.
It is true that when the three vectors are aligned, i.e. parallel, i.e. scalar multiples of each other, they are certainly dependent.
But the definition of linear dependence for three vectors is wider than being parallel: it also includes the case in which they are co-planar, although not parallel.
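For a concrete case (vectors made up just to illustrate), take $a=(1,0,0)$, $b=(0,1,0)$, $c=(1,1,0)$ in ${\mathbb R}^3$: no two of them are parallel, yet
$$a+b-c=(0,0,0),$$
so the three are dependent, precisely because they all lie in the $x,y$ plane.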
If you want to see that geometrically, take the three vectors as position vectors from the origin: if they define a full $3$D parallelepiped they are independent; if instead the parallelepiped collapses into a flat figure or a segment, the vectors are dependent.
Algebraically, this translates into whether the matrix formed by the three vectors has full rank ($3$) or less.
Similarly for $n$ vectors of $m$ dimensions.
Then from the theory of linear systems you know that a homogeneous system whose matrix has full rank admits only the solution $(0,0,\cdots,0)$, which corresponds to all the combination coefficients being null.
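With the illustrative vectors $a,b,c$ from above, the matrix having them as rows is
$$\begin{pmatrix}1&0&0\\0&1&0\\1&1&0\end{pmatrix},$$
whose determinant is $0$ (the third row is the sum of the first two), so the rank is $2<3$. Correspondingly, the homogeneous system $\alpha_1 a+\alpha_2 b+\alpha_3 c=0$ has the non-null solution $(\alpha_1,\alpha_2,\alpha_3)=(1,1,-1)$.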
In reply to your comment: in ${\mathbb R}^2$, if you have two non-aligned (hence independent) vectors, then a third one will lie in their same plane (the $x,y$ plane).
In the geometric interpretation, the parallelepiped (the hull) will be flat, i.e. of dimension $2$, which is less than $3$, the number of vectors.
In the algebraic interpretation, a $3 \times 2$ matrix cannot have rank greater than two: so $3$ (or more) $2$D vectors are necessarily dependent.
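As a quick check (again with made-up vectors), take $(1,0)$, $(0,1)$, $(2,3)$ in ${\mathbb R}^2$. Stacking them gives a $3\times 2$ matrix of rank $2$, and indeed
$$2\,(1,0)+3\,(0,1)-1\,(2,3)=(0,0)$$
is a non-trivial combination, so the three vectors are dependent.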
A final note (to clarify what might be the source of your confusion):
The (in)dependence of $n$ vectors in ${\mathbb R}^m$ is defined for the whole set of $n$ vectors: the set might be dependent even though some of the vectors ($q<n, \; q\le m$ of them) are independent. Yet if even one vector is dependent on another (or on two others, etc.), then the whole set is dependent.
And in fact it is a common task, given $n$ vectors, to find which among them form an independent subset: one looks for the largest minor of the matrix with non-null determinant, whose size gives the rank.
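For example (an illustrative set), given $v_1=(1,0,2)$, $v_2=(2,0,4)$, $v_3=(0,1,1)$, the $3\times 3$ matrix with these rows has null determinant, since $v_2=2v_1$; but the $2\times 2$ minor taken from rows $1,3$ and columns $1,2$,
$$\begin{vmatrix}1&0\\0&1\end{vmatrix}=1\ne 0,$$
shows the rank is $2$, so $\{v_1,v_3\}$ is an independent subset.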
Best Answer
The second question is easier. The answer exploits the fact that every set of vectors containing the zero vector is linearly dependent. Alternatively, you can solve it simply by choosing two scalars $\alpha, \beta \in \Bbb R$ so that $z=\alpha u+ \beta v$.
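Spelled out (assuming, as the text suggests, that $z$ is the zero vector of the question): with $\alpha=\beta=0$ we get $z=0\cdot u+0\cdot v$, and rearranging,
$$1\cdot z-0\cdot u-0\cdot v=0$$
is a combination equal to zero whose coefficients are not all zero (the coefficient of $z$ is $1$), which is exactly the definition of dependence.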
As to the first one, the formal way would be to use the definition of linear independence and work through a linear system of four equations. Nonetheless, with a bit of logic you can easily solve it without any computation. For example, if you take $w=(3,3,0,0)$, the set will still be linearly independent. Why? Think about the linear combinations of the first two components of the vectors $v,u$. Can you find two scalars such that their linear combination with the first two components of $v,u$ gives you back the first two components of $w$? In this way you can construct as many examples as you want.
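To make that reasoning concrete with stand-in vectors (the actual $u,v$ of the question are not reproduced here, so the following choice is purely hypothetical): suppose $u=(1,0,1,0)$ and $v=(0,1,0,1)$. Then
$$\alpha u+\beta v=(\alpha,\beta,\alpha,\beta),$$
and matching the first two components of $w=(3,3,0,0)$ forces $\alpha=\beta=3$, which makes the last two components equal to $3\ne 0$. So no combination of $u,v$ produces $w$, and $\{u,v,w\}$ stays independent.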