Statement A: $v_0,v_1,...,v_k$ are affinely independent.
Statement B: $v_1-v_0,v_2-v_0,...,v_k-v_0$ are linearly independent.
First let us prove that $A \implies B$.
Consider some $\lambda_1,...,\lambda_k$, such that:
$$\sum_{i=1}^{k} \lambda_i (v_i - v_0) = 0 \tag{1}$$
We have to show that, assuming affine independence (A), all the coefficients $\lambda_1,\dots,\lambda_k$ must be zero.
Now, define $\lambda_0 = -\sum_{i=1}^{k} \lambda_i$, so that: $$\sum_{i=0}^{k} \lambda_i = 0\tag{2}$$
Also, we have: $$\sum_{i=0}^{k} \lambda_i v_i = \sum_{i=1}^{k} \lambda_i (v_i - v_0) + (\sum_{i=0}^{k} \lambda_i)v_0 \tag{3}$$
Using equations 1 and 2, we observe that both terms on the RHS of (3) are zero. This means that: $$\sum_{i=0}^{k} \lambda_i v_i = 0 \tag{4}$$
Equations (2) and (4) are exactly the conditions in the definition of affine independence, so $\lambda_i = 0$ for all $i$. In particular $\lambda_1 = \dots = \lambda_k = 0$, which proves B.
Now, let us prove the converse, that $B \implies A$.
Consider some $\lambda_0,\lambda_1,...,\lambda_k$, such that: $\sum_{i=0}^{k} \lambda_i v_i = 0$ and $\sum_{i=0}^{k} \lambda_i = 0$.
We have to show that all these coefficients must be zero under the condition of linear independence.
Using equation 3, and the above two conditions, we can conclude that $\sum_{i=1}^{k} \lambda_i (v_i - v_0) = 0$.
Therefore, due to linear independence of $(v_i - v_0)$, we conclude that: $$\lambda_1 = \lambda_2 = \dots = \lambda_k = 0$$
Also, since $\lambda_1 = \dots = \lambda_k = 0$, the condition $\sum_{i=0}^{k} \lambda_i = 0$ forces $\lambda_0 = 0$.
Since all the coefficients are zero, the points are affinely independent (A is true).
We have shown $A \implies B$ and $B \implies A$.
$\therefore A \iff B$.
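The equivalence just proved can also be checked numerically: the points are affinely independent exactly when the difference vectors $v_i - v_0$ have full rank. A minimal sketch using NumPy (the sample points and the helper name are illustrative, not from the proof):

```python
import numpy as np

def affinely_independent(points):
    """Check Statement B: v_1 - v_0, ..., v_k - v_0 are linearly independent."""
    v0, rest = points[0], points[1:]
    diffs = np.array([v - v0 for v in rest])  # k rows of difference vectors
    return np.linalg.matrix_rank(diffs) == len(rest)

# Three non-collinear points in the plane: affinely independent.
tri = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
# Three collinear points: affinely dependent.
line = [np.array([0.0, 0.0]), np.array([1.0, 1.0]), np.array([2.0, 2.0])]

print(affinely_independent(tri))   # True
print(affinely_independent(line))  # False
```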
The points $x_k$ are affinely independent when
$$
\sum \lambda_k x_k = 0 \text{ with }\sum \lambda_k =0
$$
implies all $\lambda_k = 0$.
The vectors $\hat x_i = (1, x_i)$ are linearly independent if
$$
\sum \lambda_k \hat x_k = 0
$$
implies $\lambda_k=0$ for all $k$. But since the first component of $\hat x_k$ is always $1$, the first component of the sum is $\sum \lambda_k$, which must therefore be zero. So the same coefficients can be used to prove or disprove affine independence.
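This lift can be checked directly: stacking the vectors $\hat x_k = (1, x_k)$ as rows, affine independence of the $x_k$ corresponds to full row rank of the lifted matrix. A small sketch (the sample points are chosen for illustration):

```python
import numpy as np

def lifted_rank_full(points):
    """Affine independence of the x_k <=> linear independence of (1, x_k)."""
    hat = np.array([np.concatenate(([1.0], x)) for x in points])  # rows (1, x_k)
    return np.linalg.matrix_rank(hat) == len(points)

tri = [np.array([0.0, 0.0]), np.array([1.0, 0.0]), np.array([0.0, 1.0])]
line = [np.array([0.0, 0.0]), np.array([1.0, 1.0]), np.array([2.0, 2.0])]

print(lifted_rank_full(tri))   # True
print(lifted_rank_full(line))  # False
```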
Edit:
To understand the definition of affine independence: suppose that there exist $\lambda_k$ such that $\sum_k \lambda_k = 0$ and
$$
\sum_k \lambda_k v_k = 0.
$$
If the coefficients $\lambda_k$ are not all equal to zero, there exists one which is different from $0$. Suppose $\lambda_1 \neq 0$. By dividing all the coefficients by $-\lambda_1$, you can also suppose that $\lambda_1 = -1$. This means that $\lambda_2+\dots+\lambda_n=1$ and that
$$
v_1 = \lambda_2 v_2 + \dots + \lambda_n v_n
$$
i.e. $v_1$ is in the affine hull of $v_2,\dots, v_n$. So the affine hull of $v_1,\dots,v_n$ cannot be $(n-1)$-dimensional.
Roughly speaking, affine independence is like linear independence but without the restriction that the lower-dimensional flat containing the points passes through the origin. So three points in space are affinely independent if the smallest flat thing containing them is a plane. They're affinely dependent if they lie on a line (or are the same point).
A set of points is affinely dependent if and only if when you subtract one of them from the others the resulting set (excluding the $0$ vector that results from subtracting the one you chose from itself) is linearly dependent.
The language of affine independence is useful if you don't really care where the origin is in your representation of $n$-space. That might be the case if the points are vectors of $n$ numerical attributes, one vector for each participant in a survey. The page you link to suggests "free vectors" in physics as another motivation for affine geometry.