How to get $p(A)=0$ without the Cayley-Hamilton theorem when $A$ is diagonalizable

Tags: eigenvalues-eigenvectors, linear-algebra

Suppose $A$ is an $n\times n$ real matrix, and the characteristic polynomial is $p(\lambda)=\det(A-\lambda I_n).$
By the Cayley-Hamilton theorem, $p(A)=0$. Here is my question:

Assume that $A=S\Lambda S^{-1}$, where $\Lambda=\mathrm{diag}(\lambda_1,\dots,\lambda_n)$. How can I get $p(A)=0$ by substitution?

I wrote $p(\lambda)=\prod_{i=1}^{n}(\lambda-\lambda_i)$. In the spirit of Pólya, I tried the simplest case.
When $n=2$, we have:
$$p(A)=S\Lambda^2S^{-1}-(\lambda_1+\lambda_2)A+\lambda_1\lambda_2I_2.$$

But I have no idea how to go on.

Source: Exercise 6.2.32, p. 311, Introduction to Linear Algebra, 4th ed., by Gilbert Strang

Best Answer

It's actually best, in my opinion at least, to use the usual polynomial expansion for $p$. Note that taking powers of a diagonalizable matrix is easy: $A^k = S\Lambda^k S^{-1}$ (by induction on $k$). Thus you can write
$$p(A) = \sum_{k=0}^n a_k A^k = \sum_{k=0}^n a_k S \Lambda^k S^{-1}=S\Bigl(\sum_{k=0}^n a_k\Lambda^k\Bigr)S^{-1}=S\,p(\Lambda)\,S^{-1}.$$
Finally, polynomials of diagonal matrices are easy: $p(\Lambda)=\mathrm{diag}(p(\lambda_1),p(\lambda_2),\dots,p(\lambda_n))$. But the eigenvalues of $A$ are precisely the zeroes of $p$, so $p(\Lambda)=0$, and hence $p(A)=0$.
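If you want to see this argument play out numerically, here is a small sanity check in Python/NumPy. It is not part of the proof, and the eigenvalues and the matrix $S$ below are arbitrary illustrative choices.

```python
import numpy as np

# Numerical sanity check of p(A) = S p(Lambda) S^{-1} = 0 for a diagonalizable A.
rng = np.random.default_rng(0)

lam = np.array([2.0, -1.0, 3.0])          # chosen eigenvalues (illustrative)
S = rng.standard_normal((3, 3))           # a (generically invertible) change of basis
A = S @ np.diag(lam) @ np.linalg.inv(S)   # A = S Lambda S^{-1}

# Coefficients of the monic characteristic polynomial det(lambda I - A),
# highest degree first. This differs from det(A - lambda I) only by a sign
# (-1)^n, which does not affect whether p(A) = 0.
coeffs = np.poly(A)

# Evaluate p(A) as a matrix polynomial: sum_k a_k A^k.
n = A.shape[0]
pA = sum(c * np.linalg.matrix_power(A, n - k) for k, c in enumerate(coeffs))

print(np.allclose(pA, np.zeros_like(A)))  # True, up to floating-point error
```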

EDIT: If the problem specifically asks you to substitute the diagonalization into the product, then that's what you should do. The idea behind that route is that each factor $A-\lambda_iI$ can be rewritten as $S(\Lambda-\lambda_iI)S^{-1}$, and the interior $S^{-1}S$ pairs cancel, making $p(A)=S(\Lambda-\lambda_1I)(\Lambda-\lambda_2I)\cdots(\Lambda-\lambda_nI)S^{-1}$. Ignoring the outside factors, the product of diagonal matrices is again diagonal, each diagonal entry being the product of the corresponding entries. Thus, using the product expression for $p$ in each of the diagonal entries, we find that $p(A)$ comes to $S\,\mathrm{diag}(p(\lambda_1),\cdots,p(\lambda_n))\,S^{-1}$, which is of course just $0$.
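To make the cancellation concrete, here is that argument written out for the $n=2$ case from the question (same notation as above):
$$p(A)=(A-\lambda_1 I)(A-\lambda_2 I)=S(\Lambda-\lambda_1 I)\underbrace{S^{-1}S}_{I}(\Lambda-\lambda_2 I)S^{-1}=S(\Lambda-\lambda_1 I)(\Lambda-\lambda_2 I)S^{-1},$$
and the diagonal product is
$$(\Lambda-\lambda_1 I)(\Lambda-\lambda_2 I)=\begin{pmatrix}0 & 0\\ 0 & \lambda_2-\lambda_1\end{pmatrix}\begin{pmatrix}\lambda_1-\lambda_2 & 0\\ 0 & 0\end{pmatrix}=\begin{pmatrix}0 & 0\\ 0 & 0\end{pmatrix},$$
so $p(A)=S\,0\,S^{-1}=0$.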
