Linear Algebra – Fastest Way to Find the Characteristic Polynomial of a Matrix

determinant, eigenvalues-eigenvectors, linear algebra, matrices

Finding the characteristic polynomial of a matrix of order $n$ is a tedious and boring task for $n > 2$.

I know that:

  • the coefficient of $\lambda^n$ is $(-1)^n$,
  • the coefficient of $\lambda^{n-1}$ is $(-1)^{n-1}(a_{11} + a_{22} + \dots + a_{nn})$,
  • the constant term is $\det{A}$.

When finding the coefficient of the linear term $\lambda$ of the characteristic polynomial of a $3\times 3$ matrix, one has to expand the determinant of $A - \lambda I_n$ anyway. (Though you don't have to sum all the terms, only the ones that are linear in $\lambda$.)
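For concreteness, here is the $3\times 3$ expansion being referred to, with the $\lambda$-coefficient written out as the sum of the principal $2\times 2$ minors (a standard identity, spelled out here only to make the bookkeeping explicit):

$$\det(A-\lambda I_3)=-\lambda^3+(a_{11}+a_{22}+a_{33})\lambda^2-\left(\begin{vmatrix}a_{11}&a_{12}\\a_{21}&a_{22}\end{vmatrix}+\begin{vmatrix}a_{11}&a_{13}\\a_{31}&a_{33}\end{vmatrix}+\begin{vmatrix}a_{22}&a_{23}\\a_{32}&a_{33}\end{vmatrix}\right)\lambda+\det A.$$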

Does anybody know a faster way?

Best Answer

Once upon a less enlightened time, when people were less knowledgeable about the intricacies of algorithmically computing eigenvalues, methods for generating the coefficients of a matrix's characteristic polynomial were quite widespread. One of the more prominent of these was a method ascribed to both the Frenchman Leverrier and the Russian Faddeev (who was a co-author of one of the oldest references on the practice of numerical linear algebra).

The (Faddeev-)Leverrier method requires only a number of matrix multiplications to generate the coefficients of the characteristic polynomial. Letting the $n\times n$ matrix $\mathbf A$ have the monic characteristic polynomial $(-1)^n \det(\mathbf A-\lambda\mathbf I)=\lambda^n+c_{n-1}\lambda^{n-1}+\cdots+c_0$, the algorithm proceeds like so:

$\mathbf C=\mathbf A;$
$\text{for }k=1,\dots,n$
$\qquad \text{if }k>1$
$\qquad\qquad \mathbf C=\mathbf A\cdot(\mathbf C+c_{n-k+1}\mathbf I);$
$\qquad c_{n-k}=-\dfrac{\mathrm{tr}(\mathbf C)}{k};$
$\text{end for}$

If your computing environment can multiply matrices and take their trace (the sum of the diagonal elements, $\mathrm{tr}(\cdot)$), then you can easily program (Faddeev-)Leverrier, as in the sketch below. The method works nicely in exact arithmetic, or in hand calculation (assuming you have the stamina to repeatedly multiply matrices), but is piss-poor in inexact arithmetic, as it tends to greatly magnify rounding errors in the matrix, yielding coefficients that become increasingly inaccurate as the iteration proceeds. But for the simple $3\times 3$ case envisioned by the OP, this should work nicely.
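As a minimal sketch of how the recurrence above might be programmed (an illustration using NumPy, not part of the original answer; in floating point it inherits the rounding-error amplification just described):

```python
import numpy as np

def faddeev_leverrier(A):
    """Coefficients c_0, ..., c_{n-1}, 1 of the monic characteristic
    polynomial lambda^n + c_{n-1} lambda^{n-1} + ... + c_0, computed
    with the (Faddeev-)Leverrier recurrence shown above."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    c = np.empty(n + 1)
    c[n] = 1.0                      # monic leading coefficient
    C = A.copy()
    for k in range(1, n + 1):
        if k > 1:
            C = A @ (C + c[n - k + 1] * np.eye(n))
        c[n - k] = -np.trace(C) / k
    return c                        # c[i] is the coefficient of lambda^i

# Example 3x3 matrix; numpy's own routine is used only as a cross-check.
A = [[2.0, 1.0, 0.0],
     [1.0, 3.0, 1.0],
     [0.0, 1.0, 2.0]]
print(faddeev_leverrier(A))         # [c_0, c_1, c_2, c_3]
print(np.poly(A)[::-1])             # same coefficients, lowest degree first
```

For the example matrix this produces the coefficients of $\lambda^3-7\lambda^2+14\lambda-8$ (eigenvalues $1,2,4$). Swapping the entries and arithmetic for exact types (e.g. rationals) keeps the results exact, per the remark above about exact arithmetic.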

People interested in this old, retired method might want to see this paper.