This is very basic, and you do not need to use the characteristic polynomial or the fact that the minimal polynomial divides it at all. You just need to realise that "diagonalisable" means that the sum of the eigenspaces fills the whole space, so a linear operator is zero if (and obviously only if) it is zero on each of the eigenspaces.
Now on the eigenspace for an eigenvalue$~\lambda$, our $f$ acts by scalar multiplication by$~\lambda$. It easily follows that on this eigenspace any polynomial $P[f]$ acts by scalar multiplication by$~P[\lambda]$ (just check that $f^k$ acts by multiplication by $\lambda^k$, and then combine the monomials of the polynomial $P$ linearly). So by the above, $P[f]=0$ iff $P[\lambda]=0$ for every eigenvalue$~\lambda$. The minimal monic polynomial$~P$ with that property is the product of (just) one factor $X-\lambda$ for each distinct eigenvalue$~\lambda$ of$~f$; so it has distinct linear factors.
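The argument above can be checked numerically on a small hypothetical example (the matrix $A$ below and its eigenvalues are my choice, not from the question): a $2\times2$ matrix with distinct eigenvalues $2$ and $3$ is diagonalisable, so the product $(A-2I)(A-3I)$ should vanish, while neither factor alone does.

```python
# Check that for a diagonalisable matrix the product of (A - lambda*I)
# over the distinct eigenvalues annihilates A.
# Hypothetical example: A is upper triangular with eigenvalues 2 and 3.

def matmul(X, Y):
    """Multiply two square matrices given as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def shifted(A, lam):
    """Return A - lam*I."""
    n = len(A)
    return [[A[i][j] - (lam if i == j else 0) for j in range(n)]
            for i in range(n)]

A = [[2, 1],
     [0, 3]]   # eigenvalues 2 and 3 (distinct), hence diagonalisable

P_of_A = matmul(shifted(A, 2), shifted(A, 3))
print(P_of_A)  # -> [[0, 0], [0, 0]]: the polynomial (X-2)(X-3) kills A
```

Neither $(A-2I)$ nor $(A-3I)$ is zero on its own; only their product is, which matches the claim that the minimal polynomial is the product of one linear factor per distinct eigenvalue.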
I see two ways of proving this. The first is a very quick proof, if you know the following result about the relationship between the minimal polynomial and the Jordan canonical form (this is a useful result to know, but perhaps not so obvious to prove):
Theorem $1$: Let $\lambda_1, \dots, \lambda_k$ be all the distinct eigenvalues of the linear transformation $A: E \to E$. For each $i$, let $n_i$ be the maximum size of the Jordan blocks corresponding to the eigenvalue $\lambda_i$. Then, the minimal polynomial of $A$ is
\begin{equation}
m_A(t) = \prod_{i=1}^k (t-\lambda_i)^{n_i}.
\end{equation}
(FYI: the number $n_i$ is also the nilpotency index of $(A-\lambda_iI)$ when restricted to the generalised eigenspace corresponding to $\lambda_i$.)
In your question, the minimal polynomial is a product of distinct linear factors, so all the exponents $n_i$ are $1$; by Theorem $1$, the maximum size of a Jordan block corresponding to each $\lambda_i$ is therefore $1$. It follows that all the Jordan blocks are $1 \times 1$, which is equivalent to $A$ being diagonalizable.
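Theorem $1$ can also be illustrated in the other direction, on a hypothetical non-diagonalizable matrix of my own choosing: a Jordan block of size $2$ for $\lambda=2$ together with a block of size $1$ for $\lambda=3$, so the theorem predicts the minimal polynomial $(t-2)^2(t-3)$ rather than $(t-2)(t-3)$.

```python
# Illustrate Theorem 1 on a hypothetical Jordan-form matrix:
# one Jordan block of size 2 for lambda=2 and one of size 1 for lambda=3,
# so the minimal polynomial should be (t-2)^2 (t-3).

def matmul(X, Y):
    """Multiply two square matrices given as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def shifted(A, lam):
    """Return A - lam*I."""
    n = len(A)
    return [[A[i][j] - (lam if i == j else 0) for j in range(n)]
            for i in range(n)]

A = [[2, 1, 0],
     [0, 2, 0],
     [0, 0, 3]]

B = shifted(A, 2)                 # A - 2I
C = shifted(A, 3)                 # A - 3I
zero = [[0] * 3 for _ in range(3)]

# (t-2)^2 (t-3) annihilates A ...
assert matmul(matmul(B, B), C) == zero
# ... but the square is genuinely needed: (t-2)(t-3) does not.
assert matmul(B, C) != zero
```

The exponent on $(t-2)$ is exactly the largest Jordan block size $n_1 = 2$, as the theorem states.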
The second proof uses the following lemma about polynomials of operators and the relationship between their kernels:
Lemma: Suppose $E$ is a vector space over a field $\mathbb{F}$, and let $f_1(t), \dots, f_k(t)$ be pairwise coprime polynomials with coefficients in $\mathbb{F}$. Then, for any linear map $A: E \to E$, we have that
\begin{equation}
\text{ker}\left( f_1(A) \circ \dots \circ f_k(A)\right) = \bigoplus_{i=1}^k \text{ker}f_i(A),
\end{equation}
i.e. \begin{equation}
\text{ker} \left( \prod_{i=1}^k f_i(A) \right) = \bigoplus_{i=1}^k \text{ker}f_i(A)
\end{equation}
The proof of the lemma is by induction on $k$, which you should definitely attempt. To apply this, we take $f_i(t) = t-\lambda_i$. Then the lemma says
\begin{equation}
\text{ker}\left( m_A(A) \right) = \bigoplus_{i=1}^k \text{ker}(A-\lambda_iI).
\end{equation}
Since $m_A(t)$ is the minimal polynomial of $A$, we have $m_A(A) = 0$; i.e. the kernel is all of $E$. Also, note that $\text{ker}(A-\lambda_iI)$ is precisely the eigenspace $E_{\lambda_i}$ of $A$ corresponding to $\lambda_i$. Hence, we have shown that
\begin{equation}
E = \bigoplus_{i=1}^k E_{\lambda_i}.
\end{equation}
Recall that $A$ is diagonalizable if and only if we have such a direct sum decomposition; hence this completes the proof.
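The decomposition can be made concrete on a small hypothetical example (the matrix and eigenvectors below are illustrative, not from the question): for a $2\times2$ matrix with minimal polynomial $(t-2)(t-3)$, every vector splits across the two eigenspace kernels.

```python
# A small check of the decomposition E = ker(A-2I) (+) ker(A-3I)
# for a hypothetical diagonalizable A with minimal polynomial (t-2)(t-3).

def matvec(M, v):
    """Apply a square matrix (list of rows) to a vector."""
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def shifted(A, lam):
    """Return A - lam*I."""
    n = len(A)
    return [[A[i][j] - (lam if i == j else 0) for j in range(n)]
            for i in range(n)]

A = [[2, 1],
     [0, 3]]

v1 = [1, 0]    # spans ker(A - 2I), the eigenspace E_2
v2 = [1, 1]    # spans ker(A - 3I), the eigenspace E_3

assert matvec(shifted(A, 2), v1) == [0, 0]
assert matvec(shifted(A, 3), v2) == [0, 0]

# An arbitrary vector splits across the two kernels: w = -1*v1 + 5*v2.
w = [4, 5]
assert [(-1) * v1[i] + 5 * v2[i] for i in range(2)] == w
```

Since $v_1$ and $v_2$ are linearly independent, the splitting is unique, which is exactly what the direct sum $E = E_2 \oplus E_3$ asserts.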
Best Answer
I won't use any other eigenvalues explicitly, so set $\lambda=\lambda_1$ and $r=r_1$ for brevity.
Let $P=(X-\lambda)^r$ and let $Q=f/P$ be the remaining factor of the characteristic polynomial $f$. By assumption (certainly you meant the $\lambda_i$ to be distinct, although this is not said explicitly), $\lambda$ is not a root of$~Q$, and so $P$ and $Q$ are relatively prime in $F[X]$. Therefore there exist Bézout coefficients $S,T\in F[X]$ with $SP+TQ=1$. Since $(PQ)[A]=0$ (by Cayley-Hamilton), the operators $(SP)[A]$ and $(TQ)[A]$ sum to the identity and their product vanishes, so they are projectors on complementary subspaces $U,W$ of $V$. Then $P[A]$ acts invertibly on $U$ and vanishes on $W$, so $W=\ker(P[A])=\ker((A-\lambda I)^r)$, and $U$ is the image of $P[A]$, in particular $\operatorname{rank}(P[A])=\dim(U)$.
Also the characteristic polynomial $f$ of $A$ is the product of the characteristic polynomials of the restrictions of$~A$ to $U$ and $W$, the first of which does not have $\lambda$ as a root (as $P[A]$ acts invertibly on $U$) while the second is a power of $X-\lambda$ (since $(X-\lambda)^r$ annihilates the restriction of$~A$ to$~W$). But then the characteristic polynomial of the restriction of$~A$ to$~W$ is $(X-\lambda)^r$, so $\dim(W)$ is its degree$~r$, and $\dim(U)=\dim(V)-r$, completing the proof.
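The projector construction can be sketched on a hypothetical $3\times3$ example (everything below, including the Bézout identity, is my own worked illustration): take $f=(X-2)^2(X-3)$, $\lambda=2$, $r=2$, so $P=(X-2)^2$ and $Q=X-3$. The Euclidean algorithm gives the Bézout identity $1\cdot P + (-(X-1))\cdot Q = 1$, i.e. $S=1$ and $T=-(X-1)$.

```python
# Sketch of the projector construction for a hypothetical 3x3 example.
# A has characteristic polynomial f = (X-2)^2 (X-3); take lambda=2, r=2,
# so P = (X-2)^2 and Q = X-3.  A Bezout identity, found by hand via the
# Euclidean algorithm, is  1*P + (-(X-1))*Q = 1,  i.e. S = 1, T = -(X-1).

def matmul(X, Y):
    """Multiply two square matrices given as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def shifted(A, lam):
    """Return A - lam*I."""
    n = len(A)
    return [[A[i][j] - (lam if i == j else 0) for j in range(n)]
            for i in range(n)]

A = [[2, 1, 0],
     [0, 2, 0],
     [0, 0, 3]]

SP = matmul(shifted(A, 2), shifted(A, 2))       # (SP)[A] = (A - 2I)^2
TQ = [[-x for x in row]                          # (TQ)[A] = -(A - I)(A - 3I)
      for row in matmul(shifted(A, 1), shifted(A, 3))]

I3 = [[1 if i == j else 0 for j in range(3)] for i in range(3)]

# Complementary projectors: each is idempotent and they sum to the identity.
assert matmul(SP, SP) == SP
assert matmul(TQ, TQ) == TQ
assert [[SP[i][j] + TQ[i][j] for j in range(3)] for i in range(3)] == I3
```

Here $(TQ)[A]$ projects onto $W=\ker((A-2I)^2)$, which has dimension $2=r$, matching the conclusion that $\dim(W)$ equals the algebraic multiplicity of $\lambda=2$.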
Note that my second paragraph only depends on $PQ$ being an annihilating polynomial with $P$ grouping all its factors $X-\lambda$; in particular it could have been the minimal polynomial, and the exponent could have been different from $r$. This is in fact a bit confusing in a proof of something that eventually says something about$~r$. In the end $r$ is obtained as $\dim(W)$, not as $\deg(P)$.