Why do we pick the $0$ vector to check linear independence

linear algebra

I know that linear independence means that no vector is a linear combination of the others. But I don't know why, when we check whether a set of vectors is linearly independent, we only check that the equation $a_1v_1 + a_2v_2 + \dots + a_nv_n = 0$ has only the trivial solution $a_1 = a_2 = \dots = a_n = 0$.

Why not check against other vectors, like $(1,0,\dots,0)$? Is it enough to just check the $0$ vector and conclude that the set is linearly independent?
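For example, carrying out this check on a concrete pair of vectors (chosen purely for illustration): in $\mathbb{R}^2$ take $v_1 = (1,0)$ and $v_2 = (1,1)$. The equation
$$a_1(1,0) + a_2(1,1) = (a_1 + a_2,\; a_2) = (0,0)$$
forces $a_2 = 0$ and then $a_1 = 0$, so only the trivial solution exists and the set is linearly independent.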

My thought:

The definition of linear dependence is easier for me:

A system of vectors $v_1,v_2,\dots,v_n$ is called linearly dependent if $0$ can be represented as a nontrivial linear combination of them.

I am wondering whether the above line implies that "a system of vectors $v_1,v_2,\dots,v_n$ is called linearly independent if $0$ can only be represented as the trivial linear combination," which is the statement of linear independence.
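For a concrete instance of dependence (again, vectors chosen only for illustration): in $\mathbb{R}^2$ take $v_1 = (1,2)$ and $v_2 = (2,4)$. Then
$$2v_1 - v_2 = (2,4) - (2,4) = (0,0),$$
a nontrivial combination equal to $0$, so the system is linearly dependent.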

Best Answer

This is the definition of linear independence. It is equivalent to the statement that none of the vectors $v_1,\dots,v_n$ can be written as a linear combination of the remaining vectors.

Here's why $0$ is special: You can always write it as a linear combination of any vectors, by taking $0v_1+\dots+0v_n$. Linear independence is the criterion for that to be the only solution. If you picked $(1,0,\dots,0)$, you wouldn't know, first of all, that it is a linear combination of $v_1,\dots,v_n$ at all, and, even if it were, you wouldn't know what linear combination had to work.
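To see why an arbitrary target vector would not work (an illustration not in the original answer): in $\mathbb{R}^3$ take $v_1 = (0,1,0)$ and $v_2 = (0,0,1)$. They are linearly independent, yet $a_1v_1 + a_2v_2 = (0, a_1, a_2)$ can never equal $(1,0,0)$, so an equation with right-hand side $(1,0,0)$ has no solutions at all and says nothing about independence. The vector $0$ is always reachable, so the only question independence asks is whether it is reachable in more than one way.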

Here's an equivalent formulation of linear independence: Any vector $v$ that is in the span of $v_1,\dots,v_n$ (i.e., that can be written as a linear combination of them) must be a unique linear combination of them. There cannot be two different ways of writing it as a linear combination.
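Here is the short argument behind that equivalence, in the same notation: if
$$v = a_1v_1 + \dots + a_nv_n = b_1v_1 + \dots + b_nv_n,$$
then subtracting gives $(a_1 - b_1)v_1 + \dots + (a_n - b_n)v_n = 0$. If the only way to write $0$ is the trivial one, then $a_i = b_i$ for all $i$, so the representation of $v$ is unique; conversely, a nontrivial combination equal to $0$ would give two different representations of $0$ itself.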
