Your proof is correct only if by "diagonalizable" you mean "diagonalizable over an extension field of $\mathbb F$".
However, in my experience this is not the most usual interpretation of diagonalizable.
The rotation $\begin {pmatrix}
0&-1\\1&0\end {pmatrix}$ by $\pi/2$ in the plane over $\mathbb R$, for example, is not diagonalizable over $\mathbb R$, even though its characteristic polynomial $X^2+1$ has nonzero discriminant.
In your proof, however, it counts as diagonalizable, and that is the contentious point.
Edit
I have just checked that Hoffman-Kunze explicitly write on page 185 of their Linear Algebra that the above matrix is not diagonalizable.
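A quick numerical check illustrates the point (a sketch using numpy, not part of the original answers): the rotation matrix has eigenvalues $\pm i$, so it is diagonalizable over $\mathbb C$ but has no real eigenvalues, hence cannot be diagonalized over $\mathbb R$.

```python
import numpy as np

# The rotation by pi/2 in the plane: characteristic polynomial X^2 + 1.
R = np.array([[0.0, -1.0],
              [1.0,  0.0]])

# np.linalg.eig works over C: the eigenvalues are purely imaginary, +i and -i,
# so R has no real eigenvalues and is not diagonalizable over R.
eigvals, eigvecs = np.linalg.eig(R)
print(eigvals)  # two purely imaginary eigenvalues

# Over C the eigenvalues are distinct, so R *is* diagonalizable there:
D = np.diag(eigvals)
P = eigvecs
assert np.allclose(P @ D @ np.linalg.inv(P), R)
```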
As proven in this post, the idea goes as follows: take $W$ a $B$-invariant subspace. Now, since $B$ is diagonalizable with distinct eigenvalues $\mu_1, \dots, \mu_k$,
$$
\mathbb{k}^n = E_{\mu_1} \oplus \cdots \oplus E_{\mu_k}
$$
It suffices to see that $W = (W\cap E_{\mu_1}) \oplus \cdots \oplus ( W\cap E_{\mu_k})$, in which case one can form a basis of $W$ from bases of each $W \cap E_{\mu_i}$, and that basis will consist of eigenvectors of $B$ because each $W \cap E_{\mu_i}$ is contained in $E_{\mu_i}$. In effect, let us check both inclusions: the immediate one is $(W\cap E_{\mu_1}) \oplus \cdots \oplus ( W\cap E_{\mu_k})\subseteq W$, since each summand is contained in $W$ and $W$ is a subspace.
As for the other, since $W = W \cap \mathbb{k}^n = W \cap \bigoplus_{i=1}^k E_{\mu_i}$, any element $w$ of $W$ is a sum of eigenvectors,
$$w = e_1 + \dots + e_l$$
with $e_i$ an eigenvector of eigenvalue $\mu_{j_i}$, the $\mu_{j_i}$ pairwise distinct. Therefore, it is sufficient to show that if $w = e_1 + \dots + e_l \in W$, then $e_1, \dots, e_l \in W$. We proceed by induction on $l$. If $l = 1$, then $e_1 = w \in W$. If $l >1$, since
$$
Bw - \mu_{j_1}w = (\mu_{j_1} - \mu_{j_1})e_1 + \dots + (\mu_{j_l} - \mu_{j_1})e_l \in W
$$
this is a sum of $l-1$ eigenvectors lying in $W$, so by the inductive hypothesis $(\mu_{j_i} - \mu_{j_1})e_i \in W$ for $i > 1$; since $\mu_{j_i} - \mu_{j_1} \neq 0$, it follows that $e_i \in W$ for $i > 1$, and so finally $e_1 = w - e_2 - \dots - e_l \in W$, completing the proof.
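The key induction step above can be checked on a small concrete example (a sketch with numpy; the particular $B$ and $W$ are hypothetical choices, not from the original post): applying $B - \mu_{j_1} I$ to $w$ kills the $E_{\mu_{j_1}}$ component while staying inside $W$, and dividing out the nonzero scalars recovers each eigenvector inside $W$.

```python
import numpy as np

# Diagonalizable B with eigenvalues 2 (multiplicity 2) and 5.
B = np.diag([2.0, 2.0, 5.0])

# W = span{(1,1,0), (0,0,1)} is a B-invariant subspace (columns span W).
W = np.array([[1.0, 0.0],
              [1.0, 0.0],
              [0.0, 1.0]])

def in_span(v, M, tol=1e-10):
    """True if v lies in the column span of M (least-squares residual ~ 0)."""
    coeffs, *_ = np.linalg.lstsq(M, v, rcond=None)
    return np.linalg.norm(M @ coeffs - v) < tol

w = W @ np.array([3.0, -2.0])      # an arbitrary element of W
assert in_span(B @ w, W)           # W is indeed B-invariant

# Induction step: (B - 2*I) w removes the E_2 component and stays in W.
v = (B - 2.0 * np.eye(3)) @ w
assert in_span(v, W)
assert np.allclose(v[:2], 0.0)     # E_2 component killed

e5 = v / (5.0 - 2.0)               # divide by mu_2 - mu_1 != 0
e2 = w - e5                        # remaining E_2 component
assert in_span(e5, W) and in_span(e2, W)
assert np.allclose(B @ e2, 2.0 * e2) and np.allclose(B @ e5, 5.0 * e5)
```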
Best Answer
Your approach is correct (in fact, you just have to take $A$ non-invertible).
The set of diagonalizable matrices over $\Bbb R$ is not dense: for example, the matrix $\pmatrix{0&1\\ -1&0}$ is not diagonalizable over $\Bbb R$, and by continuity of the roots of the characteristic polynomial, every nearby matrix still has non-real eigenvalues, so it cannot be approached by diagonalizable matrices.
However, it's true when the field is $\Bbb C$. Take $A\in\mathcal M_n(\Bbb C)$; after changing the basis if necessary, we can assume $A$ is upper-triangular. Either all the eigenvalues of $A$ are distinct, in which case $A$ is diagonalizable, or not. In the latter case, let $A_j$ be the diagonal matrix $\operatorname{diag}(j^{-1},(2j)^{-1},\ldots,(nj)^{-1})$. Then for all sufficiently large $j$ the diagonal entries of $A+A_j$, hence its eigenvalues, are pairwise distinct, so $A+A_j$ is diagonalizable, and $A_j\to 0$.
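This perturbation argument can be watched numerically (a sketch with numpy, using a nilpotent Jordan block as the non-diagonalizable $A$): each $A + A_j$ is upper-triangular with pairwise distinct diagonal entries, hence diagonalizable, while $\|A_j\| \to 0$.

```python
import numpy as np

# A non-diagonalizable A: the 3x3 nilpotent Jordan block (all eigenvalues 0).
A = np.diag([1.0, 1.0], k=1)  # upper-triangular, ones on the superdiagonal

def perturb(j, n=3):
    """A + A_j with A_j = diag(1/j, 1/(2j), ..., 1/(nj)), as in the answer."""
    Aj = np.diag([1.0 / ((i + 1) * j) for i in range(n)])
    return A + Aj, Aj

for j in [1, 10, 100]:
    M, Aj = perturb(j)
    eigvals = np.linalg.eigvals(M)
    # M is upper-triangular, so its eigenvalues are its diagonal entries
    # 1/j, 1/(2j), 1/(3j): pairwise distinct, hence M is diagonalizable,
    # while ||A_j|| shrinks to 0 as j grows.
    assert len(set(np.round(eigvals, 12))) == 3
    print(j, np.linalg.norm(Aj))
```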