Why is the determinant necessary to find the eigenvalues of a matrix?

Tags: determinant, eigenvalues-eigenvectors, linear-algebra, matrices

Say I have a $2\times2$ matrix $A$:

$$A = \begin{bmatrix}1&2\\4&3\\ \end{bmatrix}.$$

To find the eigenvalues, I have to solve

$$Au = \lambda u,$$ where $u$ is a non-zero vector. Rearranging, I get

$$0 = \lambda u - Au \iff 0 = (\lambda I_n - A)u.$$

Since $u$ is non-zero, it seems that we need $(\lambda I_n - A) = 0$. Why can't I just find the values of $\lambda$ that make $\lambda I_n - A$ the null matrix? Why do I have to solve
$$\det(\lambda I_n -A)=0$$

instead?

I think that a matrix does not necessarily have to be the null matrix in order to send a non-zero vector to the null vector, so I figure the answer has something to do with that, but I don't understand why the determinant is what I have to use.

Best Answer

You partially answered your own question when you said that a matrix does not have to be the null matrix in order to send a non-zero vector to the null vector.
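
To make that concrete, here is a minimal example (not from the original post):

$$\begin{bmatrix}1&0\\0&0\end{bmatrix}\begin{bmatrix}0\\1\end{bmatrix} = \begin{bmatrix}0\\0\end{bmatrix},$$

so a non-zero matrix can indeed annihilate a non-zero vector. This is exactly what happens when $\lambda I_n - A$ is singular but not the null matrix.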

The other part is that you don't actually need the determinant. If you find an eigenvalue by any means whatsoever, that's fine. You can try to solve $Av = \lambda v$ directly, you can do it by inspection, you can do it by heavenly inspiration (provided you check afterwards, ha!).
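
For instance, with the $A$ from the question one can spot by inspection that

$$A\begin{bmatrix}1\\2\end{bmatrix} = \begin{bmatrix}1+4\\4+6\end{bmatrix} = \begin{bmatrix}5\\10\end{bmatrix} = 5\begin{bmatrix}1\\2\end{bmatrix},$$

so $\lambda = 5$ is an eigenvalue of $A$, and no determinant was needed to find it.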

The fact of the matter is that for an $n\times n$ matrix $A$, $\lambda$ is an eigenvalue of $A$ if and only if $\det(\lambda I - A) = 0$.
This is because if an $n\times n$ matrix $M$ has a non-zero vector $v$ in its kernel, then $v$ is an eigenvector of $M$ associated to the eigenvalue $0$, and it follows that $\det M$, being the product of $M$'s eigenvalues, is $0$.
The point is that this also goes in reverse: if $\det M = 0$, then some eigenvalue of $M$ must be $0$, and so there must be a non-zero vector in $M$'s kernel. Taking $M = \lambda I - A$, the equation $(\lambda I - A)u = 0$ has a non-zero solution $u$ exactly when $\det(\lambda I - A) = 0$, which is why the determinant condition picks out precisely the eigenvalues.
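
As a sanity check on the matrix from the question (my computation, worth re-verifying):

$$\det(\lambda I_2 - A) = \det\begin{bmatrix}\lambda-1&-2\\-4&\lambda-3\end{bmatrix} = (\lambda-1)(\lambda-3) - 8 = \lambda^2 - 4\lambda - 5 = (\lambda-5)(\lambda+1),$$

so the eigenvalues are $\lambda = 5$ (the one found by inspection above) and $\lambda = -1$.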