Statement A: $v_0,v_1,...,v_k$ are affinely independent.
Statement B: $v_1-v_0,v_2-v_0,...,v_k-v_0$ are linearly independent.
First let us prove that $A \implies B$.
Consider some $\lambda_1,...,\lambda_k$, such that:
$$\sum_{i=1}^{k} \lambda_i (v_i - v_0) = 0 \tag{1}$$
We have to show that, if affine independence holds, all the coefficients $\lambda_i$ must be zero.
Now, define $\lambda_0 := -\sum_{i=1}^{k} \lambda_i$, so that: $$\sum_{i=0}^{k} \lambda_i = 0\tag{2}$$
Also, we have: $$\sum_{i=0}^{k} \lambda_i v_i = \sum_{i=1}^{k} \lambda_i (v_i - v_0) + (\sum_{i=0}^{k} \lambda_i)v_0 \tag{3}$$
Using equations 1 and 2, we observe that both terms on the RHS of (3) are zero. This means that: $$\sum_{i=0}^{k} \lambda_i v_i = 0 \tag{4}$$
From equations 2 and 4, affine independence lets us conclude that $\lambda_i = 0$, $\forall i$. In particular, all the coefficients in (1) are zero, so B is true.
Now, let us prove the converse, that $B \implies A$.
Consider some $\lambda_0,\lambda_1,...,\lambda_k$, such that: $\sum_{i=0}^{k} \lambda_i v_i = 0$ and $\sum_{i=0}^{k} \lambda_i = 0$.
We have to show that all these coefficients must be zero under the condition of linear independence.
Using equation 3, and the above two conditions, we can conclude that $\sum_{i=1}^{k} \lambda_i (v_i - v_0) = 0$.
Therefore, due to linear independence of $(v_i - v_0)$, we conclude that: $$\lambda_1 = \lambda_2 = \cdots = \lambda_k = 0$$
Also, $\sum_{i=0}^{k} \lambda_i = 0 \implies \lambda_0 = 0$.
Thus all the coefficients are zero, which proves that $v_0, v_1, ..., v_k$ are affinely independent (A is true).
We have shown $A \implies B$ and $B \implies A$.
$\therefore A \iff B$.
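The equivalence just proved also gives a practical test: to check whether points are affinely independent, check the rank of the matrix of differences $v_i - v_0$. A minimal sketch with NumPy (the helper name `affinely_independent` is mine, not from the original post):

```python
import numpy as np

def affinely_independent(points):
    """Test affine independence of v_0, ..., v_k by checking linear
    independence of the differences v_i - v_0: by the equivalence
    proved above, the points are affinely independent iff the matrix
    whose rows are the k differences has rank k."""
    pts = np.asarray(points, dtype=float)
    diffs = pts[1:] - pts[0]                  # rows: v_i - v_0, i = 1..k
    return np.linalg.matrix_rank(diffs) == len(diffs)

# Three non-collinear points in the plane: affinely independent.
print(affinely_independent([[0, 0], [1, 0], [0, 1]]))   # True
# Three collinear points: affinely dependent.
print(affinely_independent([[0, 0], [1, 1], [2, 2]]))   # False
```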
It is true that two vectors are dependent if they "point in the same (or opposite) direction", i.e. if they are aligned.
But that criterion alone is not enough for three or more vectors in $3$D or higher.
Certainly, when the three vectors are aligned, i.e. parallel, i.e. scalar multiples of each other, they are dependent.
But the definition of linear dependency of three vectors is wider than being parallel: it includes also the case in which they are co-planar, although not parallel.
If you want to see that geometrically, taking the three vectors as position vectors from the origin, if they define a full $3$D parallelepiped then they are independent, if instead the parallelepiped collapses into a flat figure or segment then the vectors are dependent.
Algebraically, this translates into whether the matrix formed by the three vectors has full rank ($3$) or not.
Similarly for $n$ vectors in $m$ dimensions.
Then, from the theory of linear systems, you know that a homogeneous system whose matrix has full rank admits only the trivial solution $(0,0, \cdots, 0)$, which corresponds to all the combination coefficients being null.
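As a quick numerical illustration of the rank criterion (not part of the original answer), here are three coplanar but pairwise non-parallel vectors in ${\mathbb R}^3$, checked with NumPy:

```python
import numpy as np

# Three vectors in R^3, one per row.  No two are parallel, yet the
# third is the sum of the first two, so all three lie in the z = 0
# plane and are linearly dependent.
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [1.0, 1.0, 0.0]])
print(np.linalg.matrix_rank(A))          # 2, less than full rank 3
# Equivalently, the parallelepiped they span is flat: zero volume.
print(abs(np.linalg.det(A)) < 1e-12)     # True
```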
In reply to your comment: in ${\mathbb R}^2$, if you have two non-aligned (hence independent) vectors, then any third vector will lie in their same plane (the $x,y$ plane).
In the geometric interpretation, the parallelepiped (the hull) will be flat, i.e. dimension 2, which is less than 3, the number of vectors.
In the algebraic interpretation, a matrix $3 \times 2$ cannot have a rank greater than two: so 3 (or more) 2D vectors are necessarily dependent.
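For instance (a sketch with NumPy; the particular vectors are my own choice), three vectors in ${\mathbb R}^2$ stacked as rows of a $3 \times 2$ matrix:

```python
import numpy as np

# Three vectors in R^2, stacked as rows of a 3x2 matrix: the rank is
# at most 2 < 3, so the three vectors are necessarily dependent.
M = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [2.0, 3.0]])
print(np.linalg.matrix_rank(M))          # 2
# An explicit dependency: 2*(1,0) + 3*(0,1) - (2,3) = (0,0)
print(2 * M[0] + 3 * M[1] - M[2])        # [0. 0.]
```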
Final note (to clarify what might be the source of your confusion):
The (in)dependence of $n$ vectors in ${\mathbb R}^m$ is defined for the whole set of $n$ vectors: the set may be dependent even though some of its subsets ($q < n, \; q \le m$ vectors) are independent. Conversely, if even one vector depends on another (or on two others, etc.), then the whole set is dependent.
And in fact it is a common task, given $n$ vectors, to find which among them form an independent subset: the largest minor of the matrix with non-null determinant gives the rank.
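One way to extract such an independent subset (an illustrative sketch with NumPy, not necessarily the method the answer has in mind) is to greedily keep each vector that raises the rank of the kept set:

```python
import numpy as np

# Greedily keep each vector that increases the rank; the vectors kept
# at the end form a maximal independent subset of the input.
vectors = [np.array(v, dtype=float) for v in
           [[1, 0, 0], [2, 0, 0], [0, 1, 0], [1, 1, 0]]]
chosen = []
for v in vectors:
    candidate = np.vstack(chosen + [v])
    if np.linalg.matrix_rank(candidate) == len(chosen) + 1:
        chosen.append(v)
print(len(chosen))   # 2: these four vectors span only the x,y plane
```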
Your characterization of linear (in)dependence is not quite correct. Every set of vectors is contained in some kind of hyperplane through the origin, namely its span.
Instead, I would say that a finite set of vectors is linearly dependent if they lie in a hyperplane through the origin whose dimension is less than the number of vectors in the set.
And in a similar vein, a finite set of points in $\mathbb R^n$ is affinely dependent if it lies in a hyperplane whose dimension is less than the number of points in the set minus 1. Thus, 3 different points on a line are affinely dependent, but 2 different points on a line are affinely independent.
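Both claims about points on a line can be checked numerically (a sketch with NumPy, testing the rank of the differences $v_i - v_0$; the particular points are my own choice):

```python
import numpy as np

# Three distinct collinear points: the differences from the first
# point span only a line, so rank 1 < 2 and the points are affinely
# dependent.
p = np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 4.0]])
print(np.linalg.matrix_rank(p[1:] - p[0]))   # 1

# Two distinct points: a single nonzero difference, rank 1 == 1,
# so they are affinely independent.
q = np.array([[0.0, 0.0], [1.0, 2.0]])
print(np.linalg.matrix_rank(q[1:] - q[0]))   # 1
```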
There is another nice geometric picture of affine independence: