When do the zero rows of the reduced system determine the column space

linear algebra

Suppose we have a linear system $Ax = b$. Let us reduce this to the system $Rx = d$, where $R$ is in row-echelon form. Let us look at the zero rows at the bottom of this system. When do these zero rows determine the column space? In some sense, these zero rows are like a homogeneous system, except with the $0$’s on the left hand side and the unknowns on the right hand side.

Let me illustrate what I mean by an example.

Let $A =
\begin{bmatrix} 1 & 2 & 1 & -1 \\ 2 & 4 & 4 & 1 \\ -1 & -2 & 1 & 4 \end{bmatrix}$
.

The system $Ax = b$ reduces as follows:
$$\left[
\begin{array}{cccc|c} 1 & 2 & 1 & -1 & b_1 \\ 2 & 4 & 4 & 1 & b_2 \\ -1 & -2 & 1 & 4 & b_3 \end{array} \right] \xrightarrow[b_1 + b_3 \to b_3]{-2b_1 + b_2 \to b_2} \left[
\begin{array}{cccc|c} 1 & 2 & 1 & -1 & b_1 \\ 0 & 0 & 2 & 3 & -2b_1 + b_2 \\ 0 & 0 & 2 & 3 & b_1 + b_3 \end{array} \right] \xrightarrow{-b_2 + b_3 \to b_3} \left[
\begin{array}{cccc|c} 1 & 2 & 1 & -1 & b_1 \\ 0 & 0 & 2 & 3 & -2b_1 + b_2 \\ 0 & 0 & 0 & 0 & 3b_1 - b_2 + b_3 \end{array} \right].
$$
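This elimination can also be carried out symbolically. Here is a quick sketch using `sympy` (the matrix and the row operations are the ones above; the code itself and the variable names are mine):

```python
import sympy as sp

b1, b2, b3 = sp.symbols('b1 b2 b3')

# Augmented matrix [A | b] from the example above.
M = sp.Matrix([[ 1,  2, 1, -1, b1],
               [ 2,  4, 4,  1, b2],
               [-1, -2, 1,  4, b3]])

M[1, :] = M[1, :] - 2 * M[0, :]   # -2 r1 + r2 -> r2
M[2, :] = M[2, :] + M[0, :]       #  r1 + r3 -> r3
M[2, :] = M[2, :] - M[1, :]       # -r2 + r3 -> r3

print(M[2, :])   # last row: [0, 0, 0, 0, 3*b1 - b2 + b3]
```

The last row reproduces the condition $3b_1 - b_2 + b_3$ on the right-hand side with all zeros on the left.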

A vector $b^* = (b_1, b_2, b_3)$ is in the column space of $A$ if and only if $Ax = b^*$ has a solution. Now if $Ax = b^*$ has a solution $x^*$, then $x^*$ must also satisfy the equivalent reduced system, which in particular asserts that we must have $0 = 3b_1 - b_2 + b_3$ (which corresponds to the zero row of the reduced system). We have just shown that any vector $b^*$ in the column space of $A$ must lie in the plane $3b_1 - b_2 + b_3 = 0$. In other words, the column space is contained in the plane $3b_1 - b_2 + b_3 = 0$.

In fact, in this case, the containment in the other direction also holds (so that we have equality): if we look at the reduced system, we can see that the rank of $A$ is $2$, so the column space is $2$-dimensional. But the plane $3b_1 - b_2 + b_3 = 0$ is also a $2$-dimensional subspace of the codomain $\mathbb{R}^3$. So we have a $2$-dimensional subspace contained in another $2$-dimensional subspace, hence they are equal.

In other words, the column space is exactly the plane $3b_1 - b_2 + b_3 = 0$. So the column space was "determined" by the zero rows of the system (in this case, there was only one zero row).
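As a numerical sanity check (a sketch with NumPy, not part of the original argument), every column of $A$ should satisfy the plane equation, and the rank should match the plane's dimension:

```python
import numpy as np

# The example matrix from above.
A = np.array([[ 1,  2, 1, -1],
              [ 2,  4, 4,  1],
              [-1, -2, 1,  4]], dtype=float)

# Normal vector of the plane 3*b1 - b2 + b3 = 0, read off the zero row.
n = np.array([3.0, -1.0, 1.0])

# Every column of A (hence every vector in Col A) lies in the plane.
print(n @ A)                      # all entries 0

# rank 2 = dimension of the plane, giving equality of the subspaces.
print(np.linalg.matrix_rank(A))
```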

It seems interesting that we can just "ignore" the non-zero rows of the reduced system. They seem to have become redundant, and we only needed to consider the zero row(s). In general, if we take just the zero rows of the reduced system and think of them as a homogeneous system, does the set of solutions to that system equal the column space? Is this always true? Or only sometimes? When? And why?

Best Answer

I think what you have claimed is true, i.e., Ker $B = $ Col $A$, where $B$ is the matrix representation of the homogeneous system you mention.

Clearly, Col $A \subseteq$ Ker $B$: if $v \notin$ Ker $B$, then some "zero row" of the reduced system gives a contradiction (the LHS is zero while the RHS is non-zero), so $v \notin$ Col $A$.

Also, Ker $B \subseteq$ Col $A$, since if $v \in$ Ker $B$ the "zero rows" are now all satisfied, and the reduced system clearly has a solution (e.g., set the pivot variables equal to the RHS and set the free variables to $0$.) This solution gives $Ax = v$ so that $v \in$ Col $A$.

You may want to note that it's probably easier to use the reduced system to determine a basis for Col $A$ than it would be to try to find a basis for Ker $B$.
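Both inclusions can be illustrated numerically on the example matrix from the question (a sketch with NumPy; the test vectors `v_in` and `v_out` are my own choices):

```python
import numpy as np

A = np.array([[ 1,  2, 1, -1],
              [ 2,  4, 4,  1],
              [-1, -2, 1,  4]], dtype=float)
B = np.array([[3.0, -1.0, 1.0]])   # one row per zero row of the reduction

# Col A ⊆ Ker B: anything of the form A @ x is killed by B.
rng = np.random.default_rng(0)
b = A @ rng.standard_normal(4)
print(B @ b)                       # ~0

# Ker B ⊆ Col A: a vector satisfying B v = 0 is attainable as A x,
# while one violating it is not (checked via a least-squares solve).
v_in  = np.array([1.0, 2.0, -1.0])     # 3*1 - 2 + (-1) = 0, in Ker B
v_out = np.array([1.0, 0.0,  0.0])     # 3*1 - 0 + 0 = 3, not in Ker B
x_in,  *_ = np.linalg.lstsq(A, v_in,  rcond=None)
x_out, *_ = np.linalg.lstsq(A, v_out, rcond=None)
print(np.allclose(A @ x_in,  v_in))    # solvable: True
print(np.allclose(A @ x_out, v_out))   # not solvable: False
```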


A more intuitive explanation

Let $A$ be $m \times n$. We begin with

$$\left[ \begin{array}{c|c} A & I \end{array} \right]. \tag{1}$$

We then row reduce this, and suppose the reduction turns the last row of the LHS into $0$. In particular, we have

$$a_m + \sum_{i=1}^{m-1} c_i a_i = 0$$

where $a_i$ denotes the $i^{th}$ row of $A$. This means $$\begin{bmatrix} c_1 \\ \vdots \\ c_{m-1} \\ 1 \end{bmatrix} \in \text{Null}\, A^T.$$

On the RHS, the bottom row simply becomes $$\begin{bmatrix} c_1 & \cdots & c_{m-1} & 1 \end{bmatrix} $$

Continuing in this fashion, we find that row $k + j$ on the RHS (where $k$ is the rank of $A$ and $j = 1, \dots, m - k$) will be $$\begin{bmatrix} c'_1 & \cdots & c'_{k+j-1} & 1 & 0 & \cdots & 0 \end{bmatrix},$$

with the $1$ in position $k + j$. These $m - k$ vectors all lie in Null $A^T$; they are independent (compare the positions of the leading $1$'s), and since $\dim$ Null $A^T = m - k$, they form a basis for Null $A^T$.

Therefore, since having a dot product of $0$ with each of these basis vectors is equivalent to being in Col $A$ (because Col $A = (\text{Null}\, A^T)^\perp$), we see why Col $A = $ Null $B$ (where again $B$ is the submatrix consisting of the bottom $m - k$ rows of the RHS of $(1)$ after row reduction).

Intuitively, these row reductions extract information about Col $A$ by placing vectors associated with Null $A^T$ into the RHS of $(1)$.
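On the example matrix from the question, this looks as follows (a sketch with `sympy`; I apply the same elementary row operations by hand rather than a full `rref`, which would continue pivoting into the identity columns):

```python
import sympy as sp

A = sp.Matrix([[ 1,  2, 1, -1],
               [ 2,  4, 4,  1],
               [-1, -2, 1,  4]])
M = A.row_join(sp.eye(3))         # the augmented matrix [A | I] of (1)

M[1, :] = M[1, :] - 2 * M[0, :]   # -2 r1 + r2 -> r2
M[2, :] = M[2, :] + M[0, :]       #  r1 + r3 -> r3
M[2, :] = M[2, :] - M[1, :]       # -r2 + r3 -> r3

c = M[2, 4:]          # bottom row of the RHS: [3, -1, 1]
print(c)
print(A.T * c.T)      # the zero vector: c lies in Null A^T
```

Note the extracted row $(3, -1, 1)$ is exactly the coefficient vector of the zero-row condition $3b_1 - b_2 + b_3 = 0$.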

$\square$