The formulation in terms of the characteristic polynomial leads immediately to an easy answer. For once, one uses knowledge about the eigenvalues to find the characteristic polynomial, instead of the other way around. Since $A$ has rank$~1$, the kernel of the associated linear operator has dimension $n-1$ (where $n$ is the size of the matrix), so unless $n=1$ there is an eigenvalue$~0$ with geometric multiplicity$~n-1$. The algebraic multiplicity of $0$ as an eigenvalue is then at least $n-1$, so $X^{n-1}$ divides the characteristic polynomial$~\chi_A$, and $\chi_A=X^n-cX^{n-1}$ for some constant$~c$. In fact $c$ is the trace $\def\tr{\operatorname{tr}}\tr(A)$ of$~A$, since for any square matrix of size$~n$ the coefficient of $X^{n-1}$ in the characteristic polynomial is minus the trace. So the answer to the second question is
The characteristic polynomial of an $n\times n$ matrix $A$ of rank$~1$ is $X^n-cX^{n-1}=X^{n-1}(X-c)$, where $c=\tr(A)$.
The nonzero vectors in the $1$-dimensional image of$~A$ are eigenvectors for the eigenvalue$~c$; in other words, $A-cI$ vanishes on the image of$~A$, so $(A-cI)A=0$, which means that $X(X-c)$ is an annihilating polynomial for$~A$. Therefore
The minimal polynomial of an $n\times n$ matrix $A$ of rank$~1$ with $n>1$ is $X(X-c)$, where $c=\tr(A)$. In particular, a rank$~1$ square matrix $A$ of size $n>1$ is diagonalisable if and only if $\tr(A)\neq0$.
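These two facts are easy to sanity-check numerically. The following sketch (assuming NumPy is available) builds a random rank-$1$ matrix as an outer product $uv^T$ and verifies that its eigenvalues are $0$ (with multiplicity $n-1$) and $c=\tr(A)$, and that $A(A-cI)=0$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# A random rank-1 matrix: the outer product of two nonzero vectors.
u, v = rng.standard_normal(n), rng.standard_normal(n)
A = np.outer(u, v)
c = np.trace(A)

# Eigenvalues should be 0 (n-1 times) and c (once),
# matching the characteristic polynomial X^{n-1}(X - c).
eig = np.sort_complex(np.linalg.eigvals(A))
expected = np.sort_complex(np.array([0.0] * (n - 1) + [c]))
print(np.allclose(eig, expected))  # True

# Minimal polynomial X(X - c): the matrix A(A - cI) should vanish.
print(np.allclose(A @ (A - c * np.eye(n)), 0))  # True
```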
For the first question, we get from this (replacing $A$ by $-A$, which is also of rank$~1$):
For a matrix $A$ of rank$~1$ one has $\det(A+\lambda I)=\lambda^{n-1}(\lambda+c)$, where $c=\tr(A)$.
In particular, for an $n\times n$ matrix with diagonal entries all equal to$~a$ and off-diagonal entries all equal to$~b$ (the most popular special case of a linear combination of a scalar matrix and a rank-one matrix), one finds (taking for $A$ the all-$b$ matrix and $\lambda=a-b$) the determinant $(a-b)^{n-1}(a+(n-1)b)$.
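This closed form is also easy to check numerically. A sketch (assuming NumPy) comparing $\det\bigl(bJ+(a-b)I\bigr)$, with $J$ the all-ones matrix, against $(a-b)^{n-1}(a+(n-1)b)$ for a few sizes and arbitrarily chosen $a$, $b$:

```python
import numpy as np

a, b = 5.0, 2.0
for n in range(2, 7):
    # Diagonal entries a, off-diagonal entries b.
    M = b * np.ones((n, n)) + (a - b) * np.eye(n)
    lhs = np.linalg.det(M)
    rhs = (a - b) ** (n - 1) * (a + (n - 1) * b)
    assert np.isclose(lhs, rhs)
print("formula verified for n = 2..6")
```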
Remember that an $n$-by-$m$ matrix with real-number entries represents a linear map from $\mathbb{R}^m$ to $\mathbb{R}^n$ (or more generally, an $n$-by-$m$ matrix with entries from some field $k$ represents a linear map from $k^m$ to $k^n$). When $m=n$ - that is, when the matrix is square - we're talking about a map from a space to itself.
So really your question amounts to:
Why are maps from a space to itself - as opposed to maps from a space to something else - particularly interesting?
Well, the point is that when I'm looking at a map from a space to itself, inputs to and outputs from that map are the same "type" of thing, and so I can meaningfully compare them. So, for example, if $f:\mathbb{R}^4\rightarrow\mathbb{R}^4$, it makes sense to ask when $f(v)$ is parallel to $v$, since $f(v)$ and $v$ lie in the same space; but asking when $g(v)$ is parallel to $v$ for $g:\mathbb{R}^4\rightarrow\mathbb{R}^3$ doesn't make any sense, since $g(v)$ and $v$ are just different types of objects. (This example, by the way, is just saying that eigenvectors/values make sense when the matrix is square, but not when it's not square.)
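In matrix terms, "$f(v)$ is parallel to $v$" is exactly the eigenvector equation $Av=\lambda v$, and the question is only well-posed because $Av$ and $v$ live in the same space. A minimal check (assuming NumPy; the $2\times2$ matrix here is an arbitrary example, not from the text):

```python
import numpy as np

# "f(v) is parallel to v" is the eigenvector condition A v = lambda v;
# it only makes sense because A v and v live in the same space.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
vals, vecs = np.linalg.eig(A)
v = vecs[:, 0]                          # an eigenvector of A
print(np.allclose(A @ v, vals[0] * v))  # True: A v is a scalar multiple of v
```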
As another example, let's consider the determinant. The geometric meaning of the determinant is that it measures how much a linear map "expands/shrinks" a unit of (signed) volume - e.g. the map $(x,y,z)\mapsto(-2x,2y,2z)$ takes a unit of volume to $-8$ units of volume, so has determinant $-8$. What's interesting is that this applies to every blob of volume: it doesn't matter whether we look at how the map distorts the usual 1-1-1 cube, or some other random cube.
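That invariance can be seen numerically. A sketch (assuming NumPy): the map above has determinant $-8$, and the signed-volume scaling factor is the same for a random parallelepiped as for the unit cube:

```python
import numpy as np

# The map (x, y, z) |-> (-2x, 2y, 2z) as a matrix.
A = np.diag([-2.0, 2.0, 2.0])
print(np.linalg.det(A))  # -8.0: one unit of volume becomes -8 units

# The same factor applies to any blob: for a random parallelepiped
# spanned by the columns of P, signed volume scales by exactly det(A).
rng = np.random.default_rng(1)
P = rng.standard_normal((3, 3))
print(np.isclose(np.linalg.det(A @ P) / np.linalg.det(P),
                 np.linalg.det(A)))  # True
```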
But what if we try to go from $3$D to $2$D (so we're considering a $2$-by-$3$ matrix) or vice versa? Well, we can try to use the same idea: (proportionally) how much area does a given volume wind up producing? However, we now run into problems:
If we go from $3$ to $2$, the "stretching factor" is no longer invariant. Consider the projection map $(x,y,z)\mapsto (x,y)$, and think about what happens when I stretch a bit of volume vertically ...
If we go from $2$ to $3$, we're never going to get any volume at all - the starting dimension is just too small! So regardless of what map we're looking at, our "stretching factor" seems to be $0$.
The point is, in the non-square case the "determinant" as naively construed either is ill-defined or is $0$ for stupid reasons.
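The $3$-to-$2$ failure can be made concrete (a sketch, assuming NumPy): two boxes of different volume project to the same unit square, so "area out per volume in" depends on which box you pick:

```python
import numpy as np

# The projection R^3 -> R^2, (x, y, z) |-> (x, y), as a 2x3 matrix.
P = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0]])

# Two boxes with the same footprint but different heights both project
# to the unit square, so the "stretching factor" is not invariant.
ratios = []
for height in (1.0, 2.0):
    edges = np.diag([1.0, 1.0, height])        # edge vectors of the box
    volume_in = abs(np.linalg.det(edges))
    img = P @ edges                            # images of the edge vectors
    area_out = abs(np.linalg.det(img[:, :2]))  # area they span in R^2
    ratios.append(area_out / volume_in)
print(ratios)  # [1.0, 0.5]
```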
Best Answer
Such a function cannot exist. Let $A = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 0 & 0\end{pmatrix}$ and $B = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \end{pmatrix}$. Both $AB$ and $BA$ are square, so if a function $D$ with the properties 1-3 existed, we would have \begin{align} \begin{split} 1 &= \det \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} = \det(BA) = D(BA) = D(B)D(A) \\ &= D(A)D(B) = D(AB) = \det(AB) = \det \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 0 \end{pmatrix} = 0, \end{split} \end{align} a contradiction.
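The two determinants in this counterexample are easy to verify numerically (a sketch, assuming NumPy):

```python
import numpy as np

A = np.array([[1, 0],
              [0, 1],
              [0, 0]])
B = np.array([[1, 0, 0],
              [0, 1, 0]])

print(np.linalg.det(B @ A))  # 1.0  (BA is the 2x2 identity)
print(np.linalg.det(A @ B))  # 0.0  (AB has a zero row, so it is singular)
```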