Since
$\det(A-\lambda I)=\det(A_1-\lambda I)\det(A_2-\lambda I)\cdots\det(A_n-\lambda I)$,
the eigenvalues of $A$ are exactly the eigenvalues of the blocks $A_i$, counted with multiplicity.
You can say a lot more about the matrix you presented. Let's define the following two vectors:
$$
u=\begin{pmatrix}1\\1\\\vdots\\1\end{pmatrix}
\:{\rm and}\:v=\begin{pmatrix}a_{1}\\a_{2}\\\vdots\\a_{n}\end{pmatrix}
$$
Then your matrix is exactly
$$A=uv^{T}$$
Assume $v\neq 0$ (otherwise $A$ is the zero matrix, which is a trivial case).
First Case: $\sum_{i=1}^{n}a_{i}\neq 0$.
You can easily see that
- The eigenvalue $\lambda=0$ is of multiplicity $n-1$, with $n-1$ linearly independent eigenvectors given by any basis of the subspace ${\rm Span}\left\{v\right\}^{\perp}$ (that's true because ${\rm Span}\left\{v\right\}\oplus{\rm Span}\left\{v\right\}^{\perp}=\mathbb{R}^{n}$ and ${\rm Span}\left\{v\right\}$ is of dimension $1$).
- The eigenvalue $\lambda=\sum_{i=1}^{n}a_{i}$ corresponds to the eigenvector $u$, since
$$Au=uv^{T}u=\left(v^{T}u\right)u=\left(\sum_{i=1}^{n}a_{i}\right)u$$
Therefore, in this case you have $n$ linearly independent eigenvectors and the matrix is diagonalizable.
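The two eigenvalue claims above are easy to check numerically. Here is a minimal sketch in plain Python, using a hypothetical example with $n=4$ and $a=(3,1,-2,5)$ (so $\sum_i a_i = 7 \neq 0$):

```python
# Numerical check of the two eigenvalue claims for A = u v^T,
# for the hypothetical choice a = (3, 1, -2, 5), n = 4.
a = [3, 1, -2, 5]
n = len(a)
s = sum(a)                      # s = sum_i a_i = 7 here

# A[i][j] = u_i * v_j = a_j, i.e. every row of A equals v^T
A = [[a[j] for j in range(n)] for i in range(n)]

def matvec(M, x):
    """Multiply an n-by-n matrix (list of rows) by a vector."""
    return [sum(M[i][j] * x[j] for j in range(n)) for i in range(n)]

# A u = (v^T u) u = (sum a_i) u, so u is an eigenvector for lambda = s
u = [1] * n
assert matvec(A, u) == [s * ui for ui in u]

# Any w orthogonal to v is killed by A: here w = (1, -3, 0, 0)
w = [1, -3, 0, 0]
assert sum(a[i] * w[i] for i in range(n)) == 0   # w is in Span{v}^perp
assert matvec(A, w) == [0] * n                   # so A w = 0
```

The same check works for any $a$ with nonzero sum; the vectors $w$ just have to span the $(n-1)$-dimensional subspace orthogonal to $v$.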
Second Case: $\sum_{i=1}^{n}a_{i}=0$.
In this case the first point still applies, but the matrix is not diagonalizable, since it has only $n-1$ linearly independent eigenvectors (you can't have $n$ linearly independent eigenvectors for the eigenvalue $\lambda=0$ unless the matrix is zero). It does, however, admit the following Jordan canonical form:
$$A\sim\begin{pmatrix}J_{2}\left(0\right)&0_{2\times\left(n-2\right)}\\0_{\left(n-2\right)\times2}&0_{\left(n-2\right)\times\left(n-2\right)}\end{pmatrix}=J_{2}\left(0\right)\oplus 0_{\left(n-2\right)\times\left(n-2\right)}$$
where
$$J_{2}\left(0\right)=\begin{pmatrix}0&1\\0&0\end{pmatrix}$$
is a $2\times 2$ Jordan block with eigenvalue $\lambda=0$.
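The reason for this Jordan form is that $A^{2}=u\left(v^{T}u\right)v^{T}=\left(\sum_i a_i\right)A=0$, so $A$ is nilpotent of index $2$ while having rank $1$. A quick check, for the hypothetical choice $a=(2,-1,-1,0)$ with zero sum:

```python
# When sum(a) = 0, A = u v^T satisfies A^2 = (v^T u) A = 0:
# A is nonzero but nilpotent of index 2, hence the single J_2(0) block.
# Hypothetical example: a = (2, -1, -1, 0).
a = [2, -1, -1, 0]
n = len(a)
assert sum(a) == 0

A = [[a[j] for j in range(n)] for i in range(n)]   # rank one, rows = v^T

def matmul(X, Y):
    """Multiply two n-by-n matrices given as lists of rows."""
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A2 = matmul(A, A)
assert A2 == [[0] * n for _ in range(n)]   # A^2 = 0, even though A != 0
```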
Best Answer
I will answer your question just for the cases $N = 2$ and $N = 3$:
Let
$$ B_2 = \left( \begin{array}{cccc} 0 & 1 & 1 & 0 \\ 1 & 0 & 0 & 1 \\ 1 & 0 & 0 & 1 \\ 0 & 1 & 1 & 0 \end{array} \right), \quad B_3 = \left( \begin{array}{cccccc} 0 & 1 & 1 & 1 & 1 & 0 \\ 1 & 0 & 1 & 1 & 0 & 1 \\ 1 & 1 & 0 & 0 & 1 & 1 \\ 1 & 1 & 0 & 0 & 1 & 1 \\ 1 & 0 & 1 & 1 & 0 & 1 \\ 0 & 1 & 1 & 1 & 1 & 0 \end{array} \right), $$
then $\text{Spec}(B_2) = (-2,2,0,0)$ and $\text{Spec}(B_3) = (-2,-2,0,0,0,4)$.
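These two spectra are easy to reproduce numerically; a minimal sketch with NumPy, entering $B_2$ and $B_3$ exactly as written above:

```python
import numpy as np

# The two matrices B_2 and B_3 from the text, entered verbatim.
B2 = np.array([[0, 1, 1, 0],
               [1, 0, 0, 1],
               [1, 0, 0, 1],
               [0, 1, 1, 0]])
B3 = np.array([[0, 1, 1, 1, 1, 0],
               [1, 0, 1, 1, 0, 1],
               [1, 1, 0, 0, 1, 1],
               [1, 1, 0, 0, 1, 1],
               [1, 0, 1, 1, 0, 1],
               [0, 1, 1, 1, 1, 0]])

# eigvalsh returns the (real) eigenvalues of a symmetric matrix,
# sorted in ascending order.
eig2 = np.round(np.linalg.eigvalsh(B2)).astype(int).tolist()
eig3 = np.round(np.linalg.eigvalsh(B3)).astype(int).tolist()
print(eig2)   # [-2, 0, 0, 2]
print(eig3)   # [-2, -2, 0, 0, 0, 4]
```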
With the help of numerics, I've been able to show (at least for sufficiently large values of $N$) that the characteristic polynomial is given by:
$$p(\lambda)=\lambda^{N}\left(\lambda+2\right)^{N-1}\left(\lambda-\left(2N-2\right)\right),$$
which tells you that the only eigenvalues of these matrices are $-2,0,2N-2$, with the corresponding multiplicities given by $p(\lambda)$. (This matches the spectra above for $N=2,3$, and the trace check $-2(N-1)+(2N-2)=0$ is consistent with the zero diagonal.)
Here is an animation showing the spectrum of the matrices $B_N$ for $N\in\{2,\dots,30\}$:
Here's the same approach in the case where the $B_N$ matrices are defined as:
$$B_N = \begin{bmatrix} C_N & A_N \\ A_N & C_N \end{bmatrix},$$
then the characteristic polynomial, found the same way, tells you that the only eigenvalues of these matrices are $-2,0,2,2N-2$, with the corresponding multiplicities given by $p(\lambda)$.
Here is another animation showing the spectrum of the matrices $B_N$ for $N\in\{2,\dots,30\}$:
Pretty cool!
Hope somebody can shed some light on these results.
Cheers!