[Math] Can we perform row and column operations while calculating eigenvectors

Tags: eigenvalues-eigenvectors, linear-algebra

I have a small question regarding performing row and column operations while calculating eigenvectors of a matrix. In an attempt to solve the following system of differential equations,

\begin{align}
\begin{bmatrix}
x' \\
y' \\
z'
\end{bmatrix}
= \begin{bmatrix}
1 & 0 & 1 \\
0 & 1 & 1 \\
0 & 0 & 2
\end{bmatrix}
\begin{bmatrix}
x \\
y \\
z
\end{bmatrix},
\end{align}

I attempted to diagonalise the coefficient matrix, $A$. I found that $\lambda = 1$ is an eigenvalue, and I then attempted to calculate the null space of the matrix $A - I$.

\begin{align}
A - \mathbf{1}_{3 \times 3} = \begin{bmatrix}
0 & 0 & 1 \\
0 & 0 & 1 \\
0 & 0 & 1 \tag{1}
\end{bmatrix}
\end{align}

Clearly, the above matrix implies that $z = 0$, so the null space can be written as

\begin{align}
\Bigg \{
\begin{bmatrix}
1 \\
0 \\
0
\end{bmatrix} ,
\begin{bmatrix}
0 \\
1 \\
0
\end{bmatrix}
\Bigg \}
\end{align}
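As a quick sanity check (a sketch using NumPy; the tolerance `1e-12` is an arbitrary choice), the null space of $A - I$ can be computed numerically via the SVD, and it is indeed two-dimensional, with every basis vector having zero third component:

```python
import numpy as np

A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [0., 0., 2.]])
B = A - np.eye(3)  # the matrix in (1)

# Rows of Vt corresponding to vanishing singular values span the null space of B.
_, s, Vt = np.linalg.svd(B)
rank = int(np.sum(s > 1e-12))
null_basis = Vt[rank:]

print(null_basis.shape[0])               # dimension of the null space: 2
print(np.allclose(null_basis[:, 2], 0))  # every null vector has z = 0: True
```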

This is the $x$-$y$ plane. But instead, since the solution space of a homogeneous system of linear equations is left unchanged if one performs row and/or column operations, I chose to perform the following operations on the matrix given in $(1)$:

Switch the first and the third columns, and then add $-1$ times the first row to the other two rows.

This sequence of operations gives,

\begin{align}
A - \mathbf{1}_{3 \times 3} \longrightarrow \begin{bmatrix}
1 & 0 & 0 \\
0 & 0 & 0 \\
0 & 0 & 0 \tag{2}
\end{bmatrix}
\end{align}
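These operations can be encoded as matrix products (a sketch; the names `P` and `R` are chosen here for illustration: `P` swaps columns when multiplied on the right, `R` performs the row additions when multiplied on the left):

```python
import numpy as np

B = np.array([[0., 0., 1.],
              [0., 0., 1.],
              [0., 0., 1.]])  # A - I

# P swaps the first and third columns when applied on the right.
P = np.array([[0., 0., 1.],
              [0., 1., 0.],
              [1., 0., 0.]])
# R adds -1 times the first row to the other two rows when applied on the left.
R = np.array([[1., 0., 0.],
              [-1., 1., 0.],
              [-1., 0., 1.]])

result = R @ (B @ P)
print(result)  # the matrix in (2): first row (1, 0, 0), the rest zero
```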

Now, even though the matrix is not in row-reduced echelon form, it was obtained by a sequence of row and column operations, so the solution space should not have changed (right?). The null space is read off as:

\begin{align}
\Bigg \{
\begin{bmatrix}
0 \\
1 \\
0
\end{bmatrix} ,
\begin{bmatrix}
0 \\
0 \\
1
\end{bmatrix}
\Bigg \}
\end{align}

This is the $y$-$z$ plane. What went wrong?

Best Answer

Row and column operations are described algebraically by multiplication by invertible matrices.

You are looking for solutions to the equation $(A - I) v = 0$. The specific form of the matrix doesn't matter, so let's just consider the general problem of solving $Bv = 0$.

Row operations are nice for this, because they multiply on the left: if $E$ encodes the row operation, then applying it to the matrix $B$ gives the matrix $EB$.

We can make $EB$ appear in the equation simply by multiplying $Bv = 0$ on the left by $E$ to get $EBv = 0$.

So, that's why you are taught to use row operations to solve systems of linear equations!

Column operations multiply on the right; unfortunately, it's harder to make $BE$ appear in the equation. The simplest way to do so is to observe that $Bv = 0$ is equivalent to the equation $(BE) (E^{-1} v) = 0$.

So, you can make column operations work. The trick, though, is that once you've found the solutions to $(BE) x = 0$, the solutions you were actually looking for are those with $E^{-1} v = x$, or equivalently, $v = Ex$.

In other words, once you get the solutions, you have to take all of the column operations you applied and reinterpret them as row operations to be applied (in reverse order!) to the solution vectors.

In your example, this means that after obtaining the solutions to the column-modified matrix, you need to swap the first and third entries of the solution vectors to get the solutions to the original system. (Both the column swap and this entry swap are represented by the same elementary matrix, which is why this is the correct operation to do.)
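Concretely (a sketch; note that the swap matrix is its own inverse, so $E = E^{-1}$ here), applying the swap to the solution vectors of the modified system recovers solutions of the original one:

```python
import numpy as np

B = np.array([[0., 0., 1.],
              [0., 0., 1.],
              [0., 0., 1.]])  # A - I
E = np.array([[0., 0., 1.],
              [0., 1., 0.],
              [1., 0., 0.]])  # the column swap; E is its own inverse

# Solutions x of (B E) x = 0, read off from the column-modified matrix:
xs = [np.array([0., 1., 0.]), np.array([0., 0., 1.])]

for x in xs:
    v = E @ x                        # undo the column swap: v = E x
    print(v, np.allclose(B @ v, 0))  # each v solves the original B v = 0
```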

This is tricky, and (IMO) doesn't really offer any benefit; the usual method taught for solving systems of equations has no problem here, using only row operations to compute

$$ \begin{pmatrix} 0 &0&1\\0 &0&1\\0 &0&1\\\end{pmatrix} \to\begin{pmatrix} 0 &0&1\\0 &0&0\\0 &0&0\\\end{pmatrix} $$

and then extracting the solution space from this row-reduced echelon form.
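For comparison (a sketch), the row-reduced form forces $z = 0$ with $x$ and $y$ free, giving the basis vectors $e_1$ and $e_2$; these can be checked against $A$ directly:

```python
import numpy as np

A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [0., 0., 2.]])

# Row reduction of A - I forces z = 0; x and y are free, giving e1 and e2.
for v in (np.array([1., 0., 0.]), np.array([0., 1., 0.])):
    print(np.allclose(A @ v, v))  # True: v is an eigenvector for lambda = 1
```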