[Math] Meaning of linear independence with row vectors

linear-algebra, vector-spaces

So far I have understood that a set of vectors $S = \{v_1, v_2, \ldots, v_k\}$ in a vector space $V$ is linearly independent
when the vector equation
$$c_1v_1 + c_2v_2 + \ldots + c_kv_k = 0$$
has only the trivial solution $c_1 = 0, c_2 = 0, \ldots, c_k = 0$.

An example in matrix form is:

$\begin{bmatrix}1 & 1 & 2 & 4 \\
0 & -1 & -5 & 2 \\
0 & 0 & -4 & 1 \\
0 & 0 & 0 & 6 \\
\end{bmatrix}
\begin{bmatrix}
c_1\\
c_2 \\
c_3 \\
c_4 \\
\end{bmatrix} =
\begin{bmatrix}
0\\
0 \\
0 \\
0 \\
\end{bmatrix}
$
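Back-substituting from the last row upward shows why only the trivial solution is possible here:

$$6c_4 = 0 \Rightarrow c_4 = 0,\qquad -4c_3 + c_4 = 0 \Rightarrow c_3 = 0,\qquad -c_2 - 5c_3 + 2c_4 = 0 \Rightarrow c_2 = 0,\qquad c_1 + c_2 + 2c_3 + 4c_4 = 0 \Rightarrow c_1 = 0.$$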

But a matrix of this form

$\begin{bmatrix}1 & 1 & 2 & 4 \\
0 & -1 & -5 & 2 \\
0 & 0 & -4 & 1 \\
0 & 0 & 0 & 0 \\
\end{bmatrix}
\begin{bmatrix}
c_1\\
c_2 \\
c_3 \\
c_4 \\
\end{bmatrix} =
\begin{bmatrix}
0\\
0 \\
0 \\
0 \\
\end{bmatrix}
$

corresponds to linearly dependent vectors, because the system has solutions other than the trivial one.
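For instance, back-substituting with the free variable $c_4 = 4$ gives $c_3 = 1$, $c_2 = 3$ and $c_1 = -21$, i.e.

$$-21\begin{bmatrix}1\\0\\0\\0\end{bmatrix} + 3\begin{bmatrix}1\\-1\\0\\0\end{bmatrix} + 1\begin{bmatrix}2\\-5\\-4\\0\end{bmatrix} + 4\begin{bmatrix}4\\2\\1\\0\end{bmatrix} = \begin{bmatrix}0\\0\\0\\0\end{bmatrix}.$$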

However, I am confused about row vectors, specifically the idea that to get a basis for a subspace using row vectors we must put the matrix in reduced row echelon form to find the linearly independent vectors. For example, here the accepted answer gives an example of finding a basis with row vectors using this:

$\begin{bmatrix}1 & 1 & 2 & 4 \\
2 & -1 & -5 & 2 \\
1 & -1 & -4 & 0 \\
2 & 1 & 1 & 6 \\
\end{bmatrix} \Rightarrow \begin{bmatrix}1 & 1 & 2 & 4 \\
0 & -3 & -9 & -6 \\
0 & -2 & -6 & -4 \\
0 & -1 & -3 & -2 \\
\end{bmatrix} \Rightarrow \begin{bmatrix}1 & 1 & 2 & 4 \\
0 & -3 & -9 & -6 \\
0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 \\
\end{bmatrix}$

and then goes on to say that "Only two of the four original vectors were linearly independent." In what respect are these two vectors linearly independent? This looks exactly like the second example I gave, in which the vectors were dependent because the equation had solutions other than the trivial one. Does linear independence with regard to row vectors mean something else? Or does this also have only the trivial solution, and if so, how?

Best Answer

Linear independence is linear independence is linear independence. It's defined entirely independently of matrices. It is, instead, defined in terms of a vector equation, which, in finite dimensions, can be turned into a system of linear equations. As such, matrices are an excellent tool for determining linear independence.

The definition of linear independence is precisely what you wrote. We say $v_1, \ldots, v_n$ are linearly independent if the only solution of $$a_1 v_1 + \ldots + a_n v_n = 0 \tag{$\star$}$$ for scalars $a_1, \ldots, a_n$, is the trivial solution $a_1 = a_2 = \ldots = a_n = 0$. That is, no other possible choices of scalars will make the above linear combination into the $0$ vector.

It doesn't matter whether they are row vectors, column vectors, or more abstract vectors (such as matrices, functions, graphs on a fixed vertex set, algebraic numbers, etc.). All you need to define linear independence is an (abstract) vector space.

For example, the real functions $\sin^2(x)$, $\cos^2(x)$, and the constant function $1$ are not linearly independent because $$2 \cdot \sin^2(x) + 2 \cdot \cos^2(x) - 2 \cdot 1 \equiv 0,$$ i.e. the linear combination is exactly the $0$ function, even though the scalars aren't all $0$.

On the other hand, the functions $\sin^2$ and $\cos^2$ are linearly independent, because, if we assume $$a_1 \sin^2(x) + a_2 \cos^2(x) \equiv 0,$$ that is, is equal to $0$ for all $x$, then trying $x = 0$ yields $$0 = a_1 \sin^2(0) + a_2 \cos^2(0) = a_2$$ and trying $x = \pi/2$ yields $$0 = a_1 \sin^2(\pi/2) + a_2 \cos^2(\pi/2) = a_1.$$ Thus, we logically come to the conclusion that $a_1 = a_2 = 0$, i.e. the functions are linearly independent.
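As a side note (not part of the original argument), both of these claims can be checked symbolically, for example with SymPy; the use of SymPy here is my own assumption about available tooling, just a quick sanity check:

```python
import sympy as sp

x, a1, a2 = sp.symbols('x a1 a2')

# Dependence: 2*sin^2(x) + 2*cos^2(x) - 2*1 simplifies to the zero function.
print(sp.simplify(2*sp.sin(x)**2 + 2*sp.cos(x)**2 - 2))   # 0

# Independence of sin^2 and cos^2: evaluating the assumed identity
# a1*sin^2(x) + a2*cos^2(x) = 0 at x = 0 and x = pi/2 forces a1 = a2 = 0.
expr = a1*sp.sin(x)**2 + a2*sp.cos(x)**2
eqs = [sp.Eq(expr.subs(x, 0), 0), sp.Eq(expr.subs(x, sp.pi/2), 0)]
print(sp.solve(eqs, [a1, a2]))   # {a1: 0, a2: 0}
```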


So, where do matrices come in? If our vectors belong to $\Bbb{R}^m$ (or $\Bbb{C}^m$, or indeed $\Bbb{F}^m$ where $\Bbb{F}$ is a field), then equation $(\star)$ turns into a system of homogeneous linear equations. When you turn this system of linear equations into a matrix of coefficients, the columns will turn out to be precisely the vectors $v_1, \ldots, v_n$, expressed as column vectors. It doesn't matter whether $v_1, \ldots, v_n$ are expressed originally as column vectors or row vectors! Once you turn them into equations, then a matrix, they will become columns. (You should try this for yourself to convince yourself of this fact.)

So, if you take the rows of a given matrix, and try to figure out (by definition) whether they are linearly independent or not, you'll inevitably end up with these vectors being columns, i.e. you'll get the same matrix, just transposed.
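Concretely, take the four row vectors from your example. The dependence equation $a_1(1,1,2,4) + a_2(2,-1,-5,2) + a_3(1,-1,-4,0) + a_4(2,1,1,6) = (0,0,0,0)$, read component by component, is the homogeneous system

$$\begin{bmatrix}1 & 2 & 1 & 2 \\ 1 & -1 & -1 & 1 \\ 2 & -5 & -4 & 1 \\ 4 & 2 & 0 & 6 \end{bmatrix}\begin{bmatrix}a_1\\a_2\\a_3\\a_4\end{bmatrix} = \begin{bmatrix}0\\0\\0\\0\end{bmatrix},$$

whose coefficient matrix is exactly the transpose of the matrix you started with.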

Further, we also get a nice technique for proving linear (in)dependence of vectors in $\Bbb{R}^m$, and for pruning them down to a linearly independent set: stick them as columns in a matrix $A$ and row reduce to a row-echelon form $B$. If the $i$th column of $B$ does not have a pivot in it, then the $i$th column of $A$ depends linearly on the previous columns of $A$, and hence can be removed without damaging the span.
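If you want to check this mechanically, here is a minimal sketch using SymPy (my choice of tool, not part of the original answer), with the question's four vectors placed as columns:

```python
import sympy as sp

# The four vectors from the question, placed as the *columns* of A.
v1, v2, v3, v4 = (sp.Matrix([1, 1, 2, 4]), sp.Matrix([2, -1, -5, 2]),
                  sp.Matrix([1, -1, -4, 0]), sp.Matrix([2, 1, 1, 6]))
A = sp.Matrix.hstack(v1, v2, v3, v4)

# rref() returns the reduced row echelon form together with the pivot columns.
R, pivots = A.rref()
print(pivots)            # (0, 1): only the first two columns carry pivots

# The pivot columns of the *original* A are a linearly independent subset
# of the original vectors with the same span.
basis = [A.col(i) for i in pivots]
print(basis)
```

Here the pivots land in the first two columns, so $v_1$ and $v_2$ themselves form a linearly independent subset of the original vectors with the same span.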

If you instead stick the vectors in as rows in a matrix and reduce as above, then this will not tell you which vectors depend on each other, in the same way that the column approach does. However, row operations preserve the span of the row vectors, hence the non-zero rows of a row-echelon form of a matrix will be a basis for the span of your vectors. This basis may have no vectors in common with your original set of vectors, however!
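A matching sketch for the row approach (again SymPy, same caveat):

```python
import sympy as sp

# Same four vectors, this time as the rows of M (the matrix from the question).
M = sp.Matrix([[1, 1, 2, 4],
               [2, -1, -5, 2],
               [1, -1, -4, 0],
               [2, 1, 1, 6]])

# echelon_form() performs row reduction; its nonzero rows span the row space.
E = M.echelon_form()
nonzero_rows = [E.row(i) for i in range(E.rows) if any(E.row(i))]
print(nonzero_rows)      # two nonzero rows: a basis for the row space

# rowspace() gives such a basis directly; note that these basis vectors
# need not coincide with any of the original rows of M.
print(M.rowspace())
```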
