I cannot really follow the reasoning you are hinting at in your question, but here's my take:
To talk about density you need a topology. Since $M_n(\mathbb{C})$, the space of complex $n\times n$ matrices, is finite-dimensional, a very natural notion of convergence is entry-wise convergence; so we can consider the metric
$$
d(A,B)=\max\{ |A_{kj}-B_{kj}|\ : k,j=1,\ldots,n\}, \ \ \ A,B\in M_n(\mathbb{C}).
$$
It is not hard to check that for any matrix $C$,
$$
d(CA,CB)\leq d(A,B)\,\sum_{k,j=1}^n |C_{kj}|,
$$
and the same inequality holds for multiplication on the right (this will be used in the last inequality below).
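As a quick numerical sanity check (my own illustration, not part of the argument), the inequality for multiplication on the left and on the right can be verified on random matrices with NumPy:

```python
import numpy as np

def d(A, B):
    # entry-wise sup metric on M_n(C)
    return np.max(np.abs(A - B))

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
C = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

bound = d(A, B) * np.abs(C).sum()   # d(A,B) * sum_{k,j} |C_{kj}|
assert d(C @ A, C @ B) <= bound     # multiplication on the left
assert d(A @ C, B @ C) <= bound     # ... and on the right
```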
Now take any $A\in M_n(\mathbb{C})$. Let $J$ be its Jordan canonical form; then there exists a non-singular matrix $S$ such that $J=SAS^{-1}$. Fix $\varepsilon>0$. Let
$$
m=\left(\sum_{k,j=1}^n |S_{kj}|\right)\,\left(\sum_{k,j=1}^n |(S^{-1})_{kj}|\right).
$$
Now, the matrix $J$ is upper triangular, so its eigenvalues (which are those of $A$) are the diagonal entries. Let $J'$ be the matrix obtained from $J$ by perturbing the diagonal entries of $J$ by less than $\varepsilon/m$ in such a way that all the diagonal entries of $J'$ are distinct.
But now $J'$ is diagonalizable, since it has $n$ distinct eigenvalues. And $d(J,J')<\varepsilon/m$. Then $S^{-1}J'S$ is diagonalizable and
$$
d(S^{-1}J'S,A)=d(S^{-1}J'S,S^{-1}JS)\leq m\,d(J',J)<\varepsilon.
$$
The formulation in terms of the characteristic polynomial leads immediately to an easy answer: for once, one uses knowledge about the eigenvalues to find the characteristic polynomial instead of the other way around. Since $A$ has rank$~1$, the kernel of the associated linear operator has dimension $n-1$ (where $n$ is the size of the matrix), so unless $n=1$ there is an eigenvalue$~0$ with geometric multiplicity$~n-1$. The algebraic multiplicity of $0$ as an eigenvalue is then at least $n-1$, so $X^{n-1}$ divides the characteristic polynomial$~\chi_A$, and $\chi_A=X^n-cX^{n-1}$ for some constant$~c$. In fact $c$ is the trace $\def\tr{\operatorname{tr}}\tr(A)$ of$~A$, since for any square matrix of size$~n$ the coefficient of $X^{n-1}$ in the characteristic polynomial is minus the trace. So the answer to the second question is:
The characteristic polynomial of an $n\times n$ matrix $A$ of rank$~1$ is $X^n-cX^{n-1}=X^{n-1}(X-c)$, where $c=\tr(A)$.
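This can be confirmed numerically for a rank-one matrix written as an outer product (a small sketch; the particular vectors are my own choice):

```python
import numpy as np

n = 5
u = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
v = np.ones(n)
A = np.outer(u, v)          # rank-1 matrix u v^T
c = np.trace(A)             # equals v @ u = 15

# The eigenvalues should be 0 with multiplicity n-1, and c once,
# matching the characteristic polynomial X^{n-1}(X - c).
eig = np.sort(np.linalg.eigvals(A).real)
expected = np.sort(np.array([0.0] * (n - 1) + [c]))
assert np.allclose(eig, expected, atol=1e-8)
```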
The nonzero vectors in the $1$-dimensional image of$~A$ are eigenvectors for the eigenvalue$~c$, in other words $A-cI$ is zero on the image of$~A$, which implies that $X(X-c)$ is an annihilating polynomial for$~A$. Therefore
The minimal polynomial of an $n\times n$ matrix $A$ of rank$~1$ with $n>1$ is $X(X-c)$, where $c=\tr(A)$. In particular a rank$~1$ square matrix $A$ of size $n>1$ is diagonalisable if and only if $\tr(A)\neq0$.
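The annihilating relation $A(A-cI)=0$, i.e. $A^2=cA$, is easy to check directly for a rank-one outer product (an illustrative sketch with an example matrix of my own choosing):

```python
import numpy as np

u = np.array([1.0, -2.0, 3.0])
v = np.array([4.0, 0.0, 1.0])
A = np.outer(u, v)      # rank-1: A = u v^T
c = np.trace(A)         # c = v @ u = 4*1 + 0*(-2) + 1*3 = 7

# X(X - c) annihilates A: A^2 = u (v^T u) v^T = c A
assert np.allclose(A @ A, c * A)
```

Here $\tr(A)=7\neq0$, so this particular $A$ is diagonalisable.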
For the first question we get from this (replacing $A$ by $-A$, which is also of rank$~1$)
For a matrix $A$ of rank$~1$ one has $\det(A+\lambda I)=\lambda^{n-1}(\lambda+c)$, where $c=\tr(A)$.
In particular, for an $n\times n$ matrix with diagonal entries all equal to$~a$ and off-diagonal entries all equal to$~b$ (which is the most popular special case of a linear combination of a scalar and a rank-one matrix) one finds (using for $A$ the all-$b$ matrix, and $\lambda=a-b$) as determinant $(a-b)^{n-1}(a+(n-1)b)$.
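A quick numerical check of this determinant formula (the values of $n$, $a$, $b$ are my own illustration):

```python
import numpy as np

n, a, b = 6, 5.0, 2.0
# a on the diagonal, b everywhere off the diagonal
M = b * np.ones((n, n)) + (a - b) * np.eye(n)

expected = (a - b) ** (n - 1) * (a + (n - 1) * b)
assert np.isclose(np.linalg.det(M), expected)
```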
Let $P=A+B$ and $Q=A-B$. Then $A=\frac 12(P+Q)$ and $B=\frac 12(P-Q)$. Since $PQ+QP=2(A^2-B^2)$ and $QP-PQ=2(AB-BA)$, the hypothesis $A^2-B^2=I$ reads as $PQ+QP=2I$, and we have to prove that $\det(QP-PQ)=0$.
We have $I-PQ=QP-I=-(I-QP)$. Since the matrices have odd size $2n+1$, this gives $\det(I-PQ)=(-1)^{2n+1}\det(I-QP)=-\det(I-QP)$. On the other hand, by Sylvester's determinant identity we have $\det(I-PQ)=\det(I-QP)$. It follows that $\det(I-PQ)=0$.
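The two determinant facts used here, $\det(-M)=-\det(M)$ in odd size and Sylvester's identity $\det(I-PQ)=\det(I-QP)$, can be checked numerically on random odd-size matrices (this only illustrates those two facts; the random $P$, $Q$ below do not satisfy the hypothesis $PQ+QP=2I$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5                               # odd size, as in the argument
P = rng.standard_normal((n, n))
Q = rng.standard_normal((n, n))
I = np.eye(n)

# det(-M) = (-1)^n det(M) = -det(M) since n is odd
M = I - Q @ P
assert np.isclose(np.linalg.det(-M), -np.linalg.det(M))

# Sylvester: det(I - PQ) = det(I - QP)
assert np.isclose(np.linalg.det(I - P @ Q), np.linalg.det(I - Q @ P))
```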
Since $\det(I-PQ)=0$, there is a nonzero vector $v$ with $(I-PQ)v=0$ i.e. $PQv=v$. It follows from $PQ+QP=2I$ that $QPv=v$ as well. Therefore $(QP-PQ)v=0$. Since $v$ is nonzero, the last equality gives $\det(QP-PQ)=0$, as desired.