I know that if $f$ is diagonalizable then its minimal polynomial is a product of distinct linear factors.
Now, how does one prove the converse? That is:
Let $A:E\to E$ be a linear transformation on a finite-dimensional vector space $E$. If the minimal polynomial is a product of distinct linear factors, $m(\lambda)=(\lambda-\lambda_1)\cdots(\lambda-\lambda_k)$, then $A$ is diagonalizable.
I believe that the characteristic polynomial must then have the form
$$p(\lambda)=(\lambda-\lambda_1)^{r_1}\cdots(\lambda-\lambda_k)^{r_k},$$
right?
If so, then in the basis formed by the eigenvectors of $A$, $A$ is diagonalizable.
Is this argument correct?
Best Answer
I see two ways of proving this. The first is a very quick proof, if you know the following result about the relationship between the minimal polynomial and the Jordan canonical form (this is a useful result to know, but perhaps not so obvious to prove):

Theorem 1. If the minimal polynomial of $A$ is $m_A(\lambda)=(\lambda-\lambda_1)^{n_1}\cdots(\lambda-\lambda_k)^{n_k}$, then each $n_i$ equals the size of the largest Jordan block of $A$ corresponding to the eigenvalue $\lambda_i$.
In your question, all the $n_i$'s are $1$ by assumption, so by Theorem $1$, the maximum size of the Jordan block corresponding to $\lambda_i$ is $1$. It follows that all the Jordan blocks are $1 \times 1$. This is equivalent to $A$ being diagonalizable.
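To make the Jordan-block statement concrete, here is a small NumPy sanity check (the matrices are my own illustrative examples, not part of the proof): $B$ below satisfies $(B-I)(B-3I)=0$, so its minimal polynomial $(t-1)(t-3)$ has distinct linear factors and $B$ is diagonalizable, while the Jordan block $A$ has minimal polynomial $(t-3)^2$, a repeated factor, and is not.

```python
import numpy as np

I = np.eye(2)

# B is symmetric with eigenvalues 1 and 3; its minimal polynomial
# (t-1)(t-3) is squarefree, so B is diagonalizable.
B = np.array([[2.0, 1.0],
              [1.0, 2.0]])
mB = (B - 1 * I) @ (B - 3 * I)
assert np.allclose(mB, 0)  # m_B(B) = 0

# A is a single 2x2 Jordan block for eigenvalue 3: (A - 3I) != 0
# but (A - 3I)^2 = 0, so m_A(t) = (t-3)^2 has a repeated factor.
A = np.array([[3.0, 1.0],
              [0.0, 3.0]])
assert not np.allclose(A - 3 * I, 0)
assert np.allclose((A - 3 * I) @ (A - 3 * I), 0)
```

By Theorem 1, the exponent $2$ in $m_A$ matches the $2\times 2$ Jordan block, which is exactly why $A$ fails to be diagonalizable.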
The second proof uses the following lemma about polynomials of operators and the relationship between their kernels:

Lemma. If $f_1,\dots,f_k$ are pairwise coprime polynomials and $f = f_1 \cdots f_k$, then $\text{ker}\left(f(A)\right) = \bigoplus_{i=1}^k \text{ker}\left(f_i(A)\right)$.
The proof of the lemma is by induction on $k$, which you should definitely attempt. To apply it, take $f_i(t) = t-\lambda_i$; these factors are pairwise coprime since the $\lambda_i$ are distinct. Then the lemma says \begin{equation} \text{ker}\left( m_A(A) \right) = \bigoplus_{i=1}^k \text{ker}(A-\lambda_iI). \end{equation} Since $m_A(t)$ is the minimal polynomial of $A$, we have $m_A(A) = 0$; i.e., the kernel is all of $E$. Also, note that $\text{ker}(A-\lambda_iI)$ is precisely the eigenspace $E_{\lambda_i}$ of $A$ corresponding to $\lambda_i$. Hence, we have shown that
\begin{equation} E = \bigoplus_{i=1}^k E_{\lambda_i}. \end{equation}
Recall that $A$ is diagonalizable if and only if $E$ decomposes as a direct sum of eigenspaces of $A$; hence this completes the proof.
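As a numerical illustration of the decomposition $E=\bigoplus_{i=1}^k E_{\lambda_i}$ (a sketch using an example matrix of my own choosing): the $3\times 3$ all-ones matrix satisfies $A^2 = 3A$, so its minimal polynomial $t(t-3)$ has distinct linear factors, and the eigenspace dimensions should add up to $\dim E = 3$.

```python
import numpy as np

# A = all-ones 3x3 matrix: A @ A = 3A, so m_A(t) = t(t-3),
# a product of distinct linear factors.
A = np.ones((3, 3))
assert np.allclose(A @ A - 3 * A, 0)

# dim ker(A - lam*I) = 3 - rank(A - lam*I) for each eigenvalue lam.
dim_E0 = 3 - np.linalg.matrix_rank(A - 0 * np.eye(3))
dim_E3 = 3 - np.linalg.matrix_rank(A - 3 * np.eye(3))

# The eigenspaces together span E, as the direct-sum decomposition predicts.
assert dim_E0 + dim_E3 == 3
```

Here $\dim E_0 = 2$ and $\dim E_3 = 1$, so the eigenspaces fill out all of $E$ and $A$ is diagonalizable despite the repeated eigenvalue $0$.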