Using only elementary row operations, how to show that two duplicate columns yield a zero column

linear-algebra, matrices

The analogous statement in terms of duplicate rows is easy: we can produce a zero row with a single row operation that subtracts one of the duplicates from the other. But given a matrix with two identical columns, how do I show that it is row-equivalent to a matrix with a zero column? I am adding some notes to Hoffman and Kunze, Chapter 1, on simple cases in which one can see by inspection that a matrix is not invertible.

Best Answer

As noted by Sassatelli Giullio, it is not true that if a (square) matrix has two identical columns then it can be row-reduced to have a zero column. (In fact, this can only happen when the matrix has a zero column to begin with, since row operations preserve the null space.) For instance, it is not hard to see that $\begin{bmatrix} 1&1\\2&2 \end{bmatrix}$ cannot be row-reduced to make either column zero.
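To make the parenthetical concrete for this example: if $A=\begin{bmatrix} 1&1\\2&2 \end{bmatrix}$ were row-equivalent to a matrix $B$ whose $j$-th column is zero, then $Be_j=0$, and since row operations preserve the null space we would also have $Ae_j=0$. But

$$Ae_1 = Ae_2 = \begin{bmatrix} 1\\2 \end{bmatrix} \neq 0,$$

so neither column of $A$ can be made zero.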

In the comments the follow-up question was posed of whether such a matrix can be row-reduced to have a zero row, which is what I address here. We take a purely matrix-theoretic approach that also avoids any kind of "invertible matrix theorems".


Suppose that we have an $n\times n$ matrix whose first two columns are identical. If they are zero columns there is nothing to do. Otherwise, permute the rows so that the first entry of these columns is nonzero. When Gaussian elimination clears the first column below its pivot, it also clears all entries of the second column below row $1$, since the two columns are identical. The only remaining entry of the second column sits in row $1$, whose first nonzero entry is in column $1$; hence the second column cannot contain a pivot (a first nonzero entry of its row).
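A small worked example (my own, but following the same steps): take

$$A=\begin{bmatrix} 2&2&1\\4&4&3\\6&6&5 \end{bmatrix},$$

whose first two columns are identical. Clearing the first column by subtracting $2$ times row $1$ from row $2$, and $3$ times row $1$ from row $3$, gives

$$\begin{bmatrix} 2&2&1\\0&0&1\\0&0&2 \end{bmatrix}:$$

the second column has been cleared below row $1$ as well, and its row-$1$ entry is not a pivot because row $1$'s first nonzero entry lies in column $1$.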

After Gaussian elimination is complete, every column contains at most one pivot: if a column contained two, the lower one would have been zeroed out when the upper one was used. Therefore there are at most $n-1$ pivots in total, and so some row has no pivot. But a row with no pivot has no first nonzero entry; in other words, it is a zero row.
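Continuing the example above: subtracting $2$ times row $2$ from row $3$ finishes the elimination,

$$\begin{bmatrix} 2&2&1\\0&0&1\\0&0&0 \end{bmatrix},$$

leaving only two pivots (in columns $1$ and $3$) for three rows, so row $3$ is the promised zero row.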


About the assumptions on the matrix $A$:

  • If $A$ is not square, the statement is true for uninteresting reasons when there are more rows than columns (every such matrix row-reduces to have a zero row, repeated column or not), and false when there are more columns than rows. Counterexamples are easily found, including silly ones like $\big[~1~~1~\big]$.
  • If the two identical columns of $A$ are not the first two, the argument can be salvaged by a modification of Gaussian elimination tailored to this matrix: instead of working through the columns left to right, give top priority to one of the duplicate columns, then proceed through the remaining columns left to right (a sketch of this column-prioritized elimination appears after this list). The definition of a pivot is more tedious in this setting, but everything goes through fine.
  • This argument can also be extended to show that if the first $k$ columns of a matrix are linearly dependent, a zero row can be formed. There is more to track, but the idea is the same: the $k^\text{th}$ column will fail to have a pivot. This is a justification for the usual algorithm to compute column spaces.
  • To follow up on Will Jagy's comment, the number of pivots of a matrix after Gaussian elimination is usually called its rank (although authors define it in many ways), which is both the row rank and the column rank. The only point at which we used the fact that $A$ was square was to conclude that some row had no pivot, so the argument can be recycled to prove theorems about non-square matrices as well.
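For readers who want to experiment, here is a minimal sketch of the column-prioritized elimination from the second bullet. It is my own illustration rather than code from the answer; the function name `eliminate` and the sample matrix are invented for the example.

```python
# A minimal sketch (my own illustration, not part of the original answer) of
# column-prioritized Gaussian elimination. Exact fractions avoid
# floating-point issues.
from fractions import Fraction

def eliminate(A, column_order):
    """Row-reduce A (a list of rows), visiting columns in the given order."""
    A = [[Fraction(x) for x in row] for row in A]
    pivot_row = 0
    for col in column_order:
        # Look for a usable pivot at or below the current pivot row.
        src = next((r for r in range(pivot_row, len(A)) if A[r][col] != 0), None)
        if src is None:
            continue  # this column gets no pivot
        A[pivot_row], A[src] = A[src], A[pivot_row]
        # Clear every other entry of this column.
        for r in range(len(A)):
            if r != pivot_row and A[r][col] != 0:
                factor = A[r][col] / A[pivot_row][col]
                A[r] = [a - factor * p for a, p in zip(A[r], A[pivot_row])]
        pivot_row += 1
    return A

# Columns 0 and 2 are identical; give column 2 top priority, then sweep the
# rest left to right. Column 0 then fails to receive a pivot, and a zero row
# appears, exactly as the argument predicts.
A = [[2, 1, 2],
     [4, 3, 4],
     [6, 5, 6]]
for row in eliminate(A, column_order=[2, 0, 1]):
    print(row)  # last row prints as all zeros
```

Prioritizing one of the duplicate columns means its twin can never contribute a pivot, which is the pivot-counting argument above in executable form.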