I cannot really follow the reasoning you are hinting at in your question, but here's my take:
To talk about density you need a topology. Since $M_n(\mathbb{C})$, the space of complex $n\times n$ matrices, is finite-dimensional, a very natural notion of convergence is entry-wise; so we can consider the metric
$$
d(A,B)=\max\{ |A_{kj}-B_{kj}|\ : k,j=1,\ldots,n\}, \ \ \ A,B\in M_n(\mathbb{C}).
$$
It is not hard to check that for any matrix $C$,
$$
d(CA,CB)\leq d(A,B)\,\sum_{k,j=1}^n |C_{kj}|,
$$
and the same inequality holds for multiplication on the right (this will be used in the last inequality below).
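As a quick sanity check of this inequality (a numerical sketch only, not part of the proof; the random matrices and numpy are my own addition):

```python
import numpy as np

rng = np.random.default_rng(0)

def d(A, B):
    """Entry-wise sup metric on matrices."""
    return np.max(np.abs(A - B))

n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
C = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# The bound d(A,B) * sum_{k,j} |C_{kj}| dominates both products.
bound = d(A, B) * np.sum(np.abs(C))
assert d(C @ A, C @ B) <= bound  # multiplication on the left
assert d(A @ C, B @ C) <= bound  # multiplication on the right
```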
Now take any $A\in M_n(\mathbb{C})$. Let $J$ be its Jordan canonical form; then there exists a non-singular matrix $S$ such that $J=SAS^{-1}$. Fix $\varepsilon>0$. Let
$$
m=\left(\sum_{k,j=1}^n |S_{kj}|\right)\,\left(\sum_{k,j=1}^n |(S^{-1})_{kj}|\right).
$$
Now, the matrix $J$ is upper triangular, so its eigenvalues (which are those of $A$) are the diagonal entries. Let $J'$ be the matrix obtained from $J$ by perturbing the diagonal entries of $J$ by less than $\varepsilon/m$ in such a way that all the diagonal entries of $J'$ are distinct.
But now $J'$ is diagonalizable, since it has $n$ distinct eigenvalues. And $d(J,J')<\varepsilon/m$. Then $S^{-1}J'S$ is diagonalizable and
$$
d(S^{-1}J'S,A)=d(S^{-1}J'S,S^{-1}JS)\leq m\,d(J',J)<\varepsilon.
$$
I think a very useful notion here is the idea of a "generalized eigenvector".
An eigenvector of a matrix $A$ is a vector $v$ with associated value $\lambda$ such that
$$
(A-\lambda I)v=0
$$
A generalized eigenvector, on the other hand, is a vector $w$ with the same associated value such that
$$
(A-\lambda I)^kw=0
$$
for some positive integer $k$. That is, $(A-\lambda I)$ acts nilpotently on $w$. Or, in other words, taking $k$ to be the smallest such exponent,
$$
(A - \lambda I)^{k-1}w=v
$$
for some eigenvector $v$ with the same associated value.
Now, let's see how this definition helps us with a non-diagonalizable matrix such as
$$
A = \pmatrix{
2 & 1\\
0 & 2
}
$$
For this matrix, we have $\lambda=2$ as the unique eigenvalue and $v=\pmatrix{1\\0}$ as the associated eigenvector, which I will let you verify. $w=\pmatrix{0\\1}$ is our generalized eigenvector. Notice that
$$
(A - 2I) = \pmatrix{
0 & 1\\
0 & 0}
$$
is a nilpotent matrix of order $2$. Note that $(A - 2I)v=0$ and $(A- 2I)w=v$, so that $(A-2I)^2w=0$. But what does this mean for what the matrix $A$ does? The behavior of $v$ is fairly obvious, but for $w$ we have
$$
Aw = \pmatrix{1\\2}=2w + v
$$
So $w$ behaves almost like an eigenvector: $A$ scales it by the eigenvalue and then adds the eigenvector $v$. In general, a generalized eigenvector, when acted upon by $A$, gives another vector in the same generalized eigenspace.
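The computations in this example can be checked directly (a numpy sketch of the relations above):

```python
import numpy as np

A = np.array([[2, 1],
              [0, 2]])
v = np.array([1, 0])   # eigenvector for lambda = 2
w = np.array([0, 1])   # generalized eigenvector
I = np.eye(2, dtype=int)
N = A - 2 * I          # the nilpotent part

assert np.all(N @ v == 0)          # (A - 2I)v = 0
assert np.all(N @ w == v)          # (A - 2I)w = v
assert np.all(N @ N == 0)          # (A - 2I)^2 = 0, nilpotent of order 2
assert np.all(A @ w == 2 * w + v)  # Aw = 2w + v
```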
An important related notion is Jordan Normal Form. That is, while we can't always diagonalize a matrix by finding a basis of eigenvectors, we can always put the matrix into Jordan normal form by finding a basis of generalized eigenvectors/eigenspaces.
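Computer algebra systems can produce this decomposition for you; for instance, sympy's `jordan_form` (assuming sympy is available) returns a basis matrix $P$ of generalized eigenvectors and the Jordan form $J$ with $A = PJP^{-1}$:

```python
from sympy import Matrix

A = Matrix([[2, 1],
            [0, 2]])
P, J = A.jordan_form()  # A = P * J * P**-1

assert J == Matrix([[2, 1], [0, 2]])  # a single 2x2 Jordan block
assert P * J * P.inv() == A
```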
I hope that helps. I'd say that the most important thing to grasp from the idea of generalized eigenvectors is that, restricted to each generalized eigenspace, $A-\lambda I$ acts as a nilpotent operator.