Imagine you have a collection of arrows pointing in various directions. If they're linearly dependent, then you can stretch, shrink, and reverse (but not rotate) them in such a way that if you lay them head-to-tail then they form a closed loop. For example, if you have three arrows that happen to all lie in the same plane (linearly dependent), then you can form a triangle out of them, but you can't if one of them sticks out of the plane formed by the other two (linearly independent).
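The head-to-tail picture can be checked numerically. The following sketch (my own illustration, not part of the original answer; NumPy and the specific vectors are assumptions) scales three coplanar arrows so they close into a loop, and uses matrix rank as the independence test:

```python
import numpy as np

# Three arrows lying in the same plane (z = 0): linearly dependent,
# since the third is the sum of the first two.
coplanar = np.array([[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0],
                     [1.0, 1.0, 0.0]])

# Reversing the third arrow (scaling by -1) and laying the arrows
# head-to-tail closes the loop: the scaled sum is the zero vector.
loop = 1.0 * coplanar[0] + 1.0 * coplanar[1] + (-1.0) * coplanar[2]
print(np.allclose(loop, 0))               # the arrows close up

# One arrow sticking out of the plane: linearly independent.
independent = np.array([[1.0, 0.0, 0.0],
                        [0.0, 1.0, 0.0],
                        [0.0, 0.0, 1.0]])

# Rank equals the number of vectors exactly when they are independent.
print(np.linalg.matrix_rank(coplanar))    # 2: dependent
print(np.linalg.matrix_rank(independent)) # 3: independent
```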
First off, be careful about assuming a precise difference between $\Leftrightarrow$ and $\leftrightarrow$. While some authors will maintain a particular technical distinction between them, many will not -- and those who do distinguish may do so in a different way than you expect.
Second, the definition you quote is badly written. In particular, the "Let $\alpha_1,\ldots,\alpha_n$ be scalars" in the beginning should not be there -- saying so at that point gives the impression that you need to choose particular $\alpha_1,\ldots,\alpha_n$ before you can say whether the vectors are independent (and that the answer to that could depend on which scalars you choose).
Of course what is meant is really:
Let $V$ be a vector space, and let $\textbf{v}_1,\dots,\textbf{v}_n \in V$. Let $\textbf{0}$ be the zero element of $V$.
$\textbf{v}_1,\dots,\textbf{v}_n$ are said to be linearly independent if $$\forall \alpha_1,\ldots,\alpha_n: \bigl[ \alpha_1 \textbf{v}_1 + \dots + \alpha_n \textbf{v}_n = \textbf{0} \Leftrightarrow \alpha_1 = \dots = \alpha_n = 0 \bigr]$$
The $\alpha_i$s are quantified inside the defining condition for "independent". This may be what you say you're understanding implicitly by the use of $\Leftrightarrow$ rather than $\leftrightarrow$, but making the quantification explicit is important when negating the definition.
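To make the quantified definition concrete, here is a small numerical check (my own sketch, assuming NumPy; `is_independent` is a hypothetical helper, not from the answer). The tuples $(\alpha_1,\dots,\alpha_n)$ satisfying $\alpha_1 \textbf{v}_1 + \dots + \alpha_n \textbf{v}_n = \textbf{0}$ form the null space of the matrix whose columns are the $\textbf{v}_i$, and that null space is $\{\textbf{0}\}$ exactly when the rank equals $n$:

```python
import numpy as np

def is_independent(vectors, tol=1e-10):
    """True iff the only scalars with a1*v1 + ... + an*vn = 0 are all zero."""
    A = np.column_stack(vectors)  # columns are the v_i
    # A @ (a1, ..., an) = 0 has only the trivial solution
    # exactly when A has full column rank.
    return np.linalg.matrix_rank(A, tol=tol) == len(vectors)

print(is_independent([np.array([1., 0.]), np.array([0., 1.])]))  # True
print(is_independent([np.array([1., 2.]), np.array([2., 4.])]))  # False
```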
The negated property would be
$$ \neg\forall \alpha_1,\ldots,\alpha_n: \bigl[ \alpha_1 \textbf{v}_1 + \dots + \alpha_n \textbf{v}_n = \textbf{0} \Leftrightarrow \alpha_1 = \dots = \alpha_n = 0 \bigr]$$
which is the same as
$$ \exists \alpha_1,\ldots,\alpha_n: \bigl[ \alpha_1 \textbf{v}_1 + \dots + \alpha_n \textbf{v}_n = \textbf{0} \not\Leftrightarrow \alpha_1 = \dots = \alpha_n = 0 \bigr]$$
So we should be looking for a choice of scalars such that the two sides have different truth values. Since we're talking about a vector space, if the scalars are all zero, then the combination on the left is also zero, so the only way for the truth values to differ is if the vector sum is zero while the scalars are not all zero.
So in the presence of the vector space axioms, the negated condition is equivalent to
$$ \exists \alpha_1,\ldots,\alpha_n: \bigl[ \alpha_1 \textbf{v}_1 + \dots + \alpha_n \textbf{v}_n = \textbf{0} \land (\alpha_1\ne 0\lor \cdots\lor \alpha_n \ne 0) \bigr]$$
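This existential form asks for an explicit witness: scalars, not all zero, whose combination vanishes. As a sketch of how one might compute such a witness numerically (my own illustration, assuming NumPy; `dependence_witness` is a hypothetical helper), a unit vector from the null space of the column matrix serves:

```python
import numpy as np

def dependence_witness(vectors, tol=1e-10):
    """Return scalars (a1, ..., an), not all zero, with sum a_i * v_i = 0,
    or None if the vectors are linearly independent."""
    A = np.column_stack(vectors)
    _, s, vt = np.linalg.svd(A)
    if s.size == A.shape[1] and s[-1] > tol:
        return None                 # full column rank: independent
    return vt[-1]                   # unit-norm null-space vector

v1, v2 = np.array([1., 2., 3.]), np.array([2., 4., 6.])
a = dependence_witness([v1, v2])
print(np.allclose(a[0] * v1 + a[1] * v2, 0))   # the sum is the zero vector
print(np.linalg.norm(a) > 0)                   # but the scalars are not all zero
```

Because the witness has unit norm, it automatically satisfies the $\alpha_1\ne 0\lor \cdots\lor \alpha_n \ne 0$ clause of the negated condition.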
Best Answer
Your definition of linear dependence is valid, although we usually write only $\implies$, since the left-pointing arrow is trivial. Of course, to say we can deduce that all the $\alpha_i$ are zero is equivalent to saying there does not exist any other choice of the $\alpha_i$ that works. Therefore, the negation, as expected, says that one does.