As discussed in the comment section, we believe that the problem statement contains a typo, and the correct matrix is
$$\mathbf{A}=\begin{bmatrix}
a_1^2+1&a_1a_2+1&a_1a_3+1&\cdots&a_1a_n+1\\
a_2a_1+1&a_2^2+1&a_2a_3+1&\cdots&a_2a_n+1\\
a_3a_1+1&a_3a_2+1&a_3^2+1&\cdots&a_3a_n+1\\
\vdots&\vdots&\vdots&\ddots&\vdots\\
a_na_1+1&a_na_2+1&a_na_3+1&\cdots&a_n^2+1
\end{bmatrix}=\boldsymbol{1}\,\boldsymbol{1}^\top+\boldsymbol{a}\,\boldsymbol{a}^\top\,,$$
where $\boldsymbol{1}:=(\underbrace{1,1,1,\ldots,1}_{n\text{ ones}})$ and $\boldsymbol{a}:=(a_1,a_2,a_3,\ldots,a_n)$. I shall provide a generalization below. From this generalization (applied with $\mathbf{x}_1:=\boldsymbol{1}$ and $\mathbf{x}_2:=\boldsymbol{a}$; note that the lemma requires $\langle\boldsymbol{1},\boldsymbol{a}\rangle=\sum\limits_{k=1}^n a_k=0$), we conclude that the eigenvalues of this matrix are $n,\sum\limits_{k=1}^n\,a_k^2,\underbrace{0,0,\ldots,0}_{n-2\text{ zeros}}$.
If $\boldsymbol{a}$ is nonzero, then respective eigenvectors are $\boldsymbol{1},\boldsymbol{a},\boldsymbol{b}_1,\boldsymbol{b}_2,\ldots,\boldsymbol{b}_{n-2}$, where $\boldsymbol{b}_1,\boldsymbol{b}_2,\ldots,\boldsymbol{b}_{n-2}$ are any $n-2$ linearly independent vectors orthogonal (in the usual sense) to both $\boldsymbol{1}$ and $\boldsymbol{a}$ (they exist since the map $\boldsymbol{\varphi}:\mathbb{R}^n\to\mathbb{R}^2$ sending $\boldsymbol{c}\in\mathbb{R}^n$ to $\big(\langle\boldsymbol{c},\boldsymbol{1}\rangle,\langle\boldsymbol{c},\boldsymbol{a}\rangle\big)\in\mathbb{R}^2$ is a linear map of rank $2$, where $\langle\_,\_\rangle$ is the usual inner product on $\mathbb{R}^n$). If $\boldsymbol{a}$ is the zero vector, then there are $n-1$ linearly independent vectors $\boldsymbol{b}_1,\boldsymbol{b}_2,\ldots,\boldsymbol{b}_{n-1}$ orthogonal to $\boldsymbol{1}$, and respective eigenvectors are then $\boldsymbol{1},\boldsymbol{b}_1,\boldsymbol{b}_2,\ldots,\boldsymbol{b}_{n-1}$.
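For concreteness, here is a quick numerical sanity check of the claimed spectrum in Python/NumPy (a sketch; the vector $a$ below is an arbitrary illustrative choice with $\sum_k a_k=0$, so that $\boldsymbol{1}\perp\boldsymbol{a}$ and the lemma applies):

```python
import numpy as np

# Sanity check of the claimed spectrum of A = 1·1ᵀ + a·aᵀ.
# The vector a is an arbitrary choice with sum(a) = 0, so that 1 ⊥ a.
n = 5
a = np.array([2.0, -1.0, 3.0, 0.5, -4.5])   # sums to zero
ones = np.ones(n)

A = np.outer(ones, ones) + np.outer(a, a)

# Expected eigenvalues: n, sum of a_k^2, and n - 2 zeros.
expected = np.sort(np.array([n, a @ a] + [0.0] * (n - 2)))
print(np.allclose(np.sort(np.linalg.eigvalsh(A)), expected))  # True
```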
Lemma. Let $\mathbb{K}$ be a field and $n$ a positive integer. The vector space $V:=\mathbb{K}^n$ is equipped with the usual nondegenerate symmetric bilinear form $\langle\_,\_\rangle$ defined by
$$\big\langle (u^1,u^2,\ldots,u^n),(v^1,v^2,\ldots,v^n)\big\rangle:=u^1v^1+u^2v^2+\ldots+u^nv^n$$
for all $u^1,u^2,\ldots,u^n,v^1,v^2,\ldots,v^n\in\mathbb{K}$. For a nonnegative integer $m$, suppose that $\mathbf{x}_1,\mathbf{x}_2,\ldots,\mathbf{x}_m$ are linearly independent vectors in $V$, which are mutually orthogonal, namely, $\langle \mathbf{x}_i,\mathbf{x}_j\rangle =0$ for all $i,j\in\{1,2,\ldots,m\}$ with $i\neq j$. Write $\lambda_k:=\langle \mathbf{x}_k,\mathbf{x}_k\rangle$ for $k=1,2,\ldots,m$. Let $\xi_1,\xi_2,\ldots,\xi_m$ be arbitrary nonzero scalars (elements of $\mathbb{K}$). Then, the matrix $$\mathbf{X}=\sum_{k=1}^m\,\xi_k\,\mathbf{x}_k\,\mathbf{x}_k^\top$$ is of rank $m$ with eigenvalues $\xi_1\lambda_1,\xi_2\lambda_2,\ldots,\xi_m\lambda_m,\underbrace{0,0,\ldots,0}_{n-m\text{ zeros}}$.
Proof. First, we show that $\mathbf{X}$ has rank $m$ (in this paragraph, the mutual orthogonality of $\mathbf{x}_1,\mathbf{x}_2,\ldots,\mathbf{x}_m$ is not used). Define $W$ to be the subspace of $V$ spanned by $\mathbf{x}_1,\mathbf{x}_2,\ldots,\mathbf{x}_m$, so that $\dim_\mathbb{K}(W)=m$. For each $k=1,2,\ldots,m$, let $\psi_k:V\to\mathbb{K}$ be the linear functional sending $\mathbf{v}\mapsto \langle \mathbf{v},\mathbf{x}_k\rangle$. These linear functionals are linearly independent elements of the dual space $V^*$ of $V$ because $\mathbf{x}_1,\mathbf{x}_2,\ldots,\mathbf{x}_m$ are linearly independent and $\langle\_,\_\rangle$ is a nondegenerate bilinear form. In particular, there exist $\mathbf{w}_1,\mathbf{w}_2,\ldots,\mathbf{w}_m\in V$ such that $\psi_i(\mathbf{w}_j)=\delta_{i,j}$ for every $i,j=1,2,\ldots,m$, where $\delta$ is the Kronecker delta. Therefore, $\mathbf{X}\,\mathbf{w}_k=\xi_k\,\mathbf{x}_k$ for every $k=1,2,\ldots,m$. This means $\text{im}(\mathbf{X})=W$, whence $\mathbf{X}$ is a matrix of rank $m$ over $\mathbb{K}$.
Without loss of generality, we can rearrange $\mathbf{x}_1,\mathbf{x}_2,\ldots,\mathbf{x}_m$ so that, for some nonnegative integer $r\leq m$, $\lambda_1,\lambda_2,\ldots,\lambda_r$ are nonzero, whereas $\lambda_{r+1},\lambda_{r+2},\ldots,\lambda_{m}$ are zero. Observe that, for each integer $t\geq 2$,
$$\mathbf{X}^t=\sum_{k=1}^r\,\xi_k^t\lambda_k^{t-1}\,\mathbf{x}_k\,\mathbf{x}_k^\top\,.$$
Using the same argument as before, we see that $\mathbf{X}^t$ has rank $r$ for every $t=2,3,\ldots$. That is, $0$ is an eigenvalue of $\mathbf{X}$ with algebraic multiplicity exactly $n-r$.
For the remaining part, we just determine $r$ eigenvectors of $\mathbf{X}$ corresponding to the nonzero eigenvalues $\xi_1\lambda_1,\xi_2\lambda_2,\ldots,\xi_r\lambda_r$. This is easy, as $\mathbf{X}\,\mathbf{x}_k=\xi_k\lambda_k\,\mathbf{x}_k$ for all $k=1,2,\ldots,m$, recalling that $\mathbf{x}_1,\mathbf{x}_2,\ldots,\mathbf{x}_m$ are mutually orthogonal. Thus, $\mathbf{x}_1,\mathbf{x}_2,\ldots,\mathbf{x}_r$ are eigenvectors associated to the eigenvalues $\xi_1\lambda_1,\xi_2\lambda_2,\ldots,\xi_r\lambda_r$, respectively. The lemma is now verified.
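The lemma can be checked numerically over $\mathbb{R}$ (a sketch; the vectors and scalars below are arbitrary illustrative choices, mutually orthogonal as the lemma requires):

```python
import numpy as np

# Check the lemma over ℝ: X = Σ ξ_k x_k x_kᵀ with mutually orthogonal x_k
# has eigenvalues ξ_k λ_k (where λ_k = ⟨x_k, x_k⟩) plus n - m zeros.
x1 = np.array([1.0, 1.0, 0.0, 0.0])
x2 = np.array([1.0, -1.0, 0.0, 0.0])
x3 = np.array([0.0, 0.0, 1.0, 1.0])
xs, xis = [x1, x2, x3], [2.0, -3.0, 0.5]     # here n = 4, m = 3

X = sum(xi * np.outer(x, x) for xi, x in zip(xis, xs))
lams = [x @ x for x in xs]                   # λ_k = 2 for each k here

expected = sorted([xi * lam for xi, lam in zip(xis, lams)] + [0.0])
print(np.allclose(sorted(np.linalg.eigvalsh(X)), expected))  # True
```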
Remark. We can say a bit more about the Jordan canonical form of $\mathbf{X}$. First, note that $\ker(\mathbf{X})$ is $(n-m)$-dimensional. As $\dim_\mathbb{K}\big(\ker(\mathbf{X}^t)\big)=n-r$ for all $t=2,3,\ldots$, we conclude that, for the eigenvalue $0$, $\mathbf{X}$ has $m-r$ Jordan blocks of size $2$ and $n+r-2m$ Jordan blocks of size $1$. We can also see that $\mathbf{x}_{r+1},\mathbf{x}_{r+2},\ldots,\mathbf{x}_m\in\ker(\mathbf{X})\subseteq \ker(\mathbf{X}^2)=\ker(\mathbf{X}^3)=\ker(\mathbf{X}^4)=\ldots$. Indeed, for each $k=r+1,r+2,\ldots,m$, the two vectors $\mathbf{x}_k$ and $\mathbf{w}_k$ are generalized eigenvectors for a Jordan block of $\mathbf{X}$. You can then use the map $\boldsymbol{\varphi}:\mathbb{K}^n\to\mathbb{K}^{2m-r}$ sending $$\mathbf{v}\mapsto \big(\langle \mathbf{v},\mathbf{x}_1\rangle,\langle\mathbf{v},\mathbf{x}_2\rangle,\ldots,\langle\mathbf{v},\mathbf{x}_m\rangle,\langle\mathbf{v},\mathbf{w}_{r+1}\rangle,\langle\mathbf{v},\mathbf{w}_{r+2}\rangle,\ldots,\langle\mathbf{v},\mathbf{w}_m\rangle\big)$$
for all $\mathbf{v}\in V$ and take $n+r-2m$ linearly independent vectors $\mathbf{y}_1,\mathbf{y}_2,\ldots,\mathbf{y}_{n+r-2m}\in \ker(\boldsymbol{\varphi})$. Then, you have found generalized eigenvectors of $\mathbf{X}$:
$$\mathbf{x}_1,\mathbf{x}_2,\ldots,\mathbf{x}_r,\mathbf{x}_{r+1},\mathbf{w}_{r+1},\mathbf{x}_{r+2},\mathbf{w}_{r+2},\ldots,\mathbf{x}_m,\mathbf{w}_m,\mathbf{y}_1,\mathbf{y}_2,\ldots,\mathbf{y}_{n+r-2m}\,.$$
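The case $\lambda_k=0$, $\mathbf{x}_k\neq\mathbf{0}$ really occurs when the form is bilinear rather than sesquilinear, and it produces the size-$2$ Jordan blocks described above. A minimal numerical illustration (my own example, with $n=2$, $m=1$, $r=0$):

```python
import numpy as np

# Over ℂ with the *bilinear* form (no conjugation), x = (1, i) satisfies
# ⟨x, x⟩ = 1 + i² = 0, so λ = 0 although x ≠ 0. Then X = x·xᵀ is nonzero
# but squares to zero: a single Jordan block of size 2 at eigenvalue 0.
x = np.array([1.0, 1.0j])
X = np.outer(x, x)                    # plain transpose, not conjugate

print(abs(x @ x) < 1e-12)             # True: λ = ⟨x, x⟩ = 0
print(np.linalg.matrix_rank(X))       # 1, so X ≠ 0
print(np.allclose(X @ X, 0))          # True: X² = 0
```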
In the case where $\mathbb{K}$ is a symmetric subfield of $\mathbb{C}$, that is, $\mathbb{K}$ contains the complex conjugates of all its elements, we can use the same argument, except that $\langle\_,\_\rangle$ is replaced by the standard sesquilinear inner product on $\mathbb{C}^n$ and the transpose operator $\top$ is replaced by the conjugate-transpose (Hermitian adjoint) operator $\dagger$. In other words, we have the following corollary.
Corollary. Let $\mathbb{K}$ be a symmetric subfield of $\mathbb{C}$ and $n$ a positive integer. The vector space $V:=\mathbb{K}^n$ is equipped with the usual Hermitian form $\langle\_,\_\rangle$ defined by
$$\big\langle (u^1,u^2,\ldots,u^n),(v^1,v^2,\ldots,v^n)\big\rangle:=u^1\overline{v^1}+u^2\overline{v^2}+\ldots+u^n\overline{v^n}$$
for all $u^1,u^2,\ldots,u^n,v^1,v^2,\ldots,v^n\in\mathbb{K}$. For a nonnegative integer $m$, suppose that $\mathbf{x}_1,\mathbf{x}_2,\ldots,\mathbf{x}_m$ are linearly independent vectors in $V$, which are mutually orthogonal, namely, $\langle \mathbf{x}_i,\mathbf{x}_j\rangle =0$ for all $i,j\in\{1,2,\ldots,m\}$ with $i\neq j$. Write $\lambda_k:=\langle \mathbf{x}_k,\mathbf{x}_k\rangle$ for $k=1,2,\ldots,m$. Let $\xi_1,\xi_2,\ldots,\xi_m$ be arbitrary nonzero scalars (elements of $\mathbb{K}$). Then, the matrix $$\mathbf{X}=\sum_{k=1}^m\,\xi_k\,\mathbf{x}_k\,\mathbf{x}_k^\dagger$$ is of rank $m$ with eigenvalues $\xi_1\lambda_1,\xi_2\lambda_2,\ldots,\xi_m\lambda_m,\underbrace{0,0,\ldots,0}_{n-m\text{ zeros}}$.
Unlike in the lemma, here all $\lambda_1,\lambda_2,\ldots,\lambda_m\in\mathbb{K}$ are positive real numbers (in particular, none of them is $0$). Thus, $\mathbf{x}_1,\mathbf{x}_2,\ldots,\mathbf{x}_m$ are all eigenvectors corresponding to the nonzero eigenvalues $\xi_1\lambda_1,\xi_2\lambda_2,\ldots,\xi_m\lambda_m$. Plus, the linear map $\boldsymbol{\varphi}:\mathbb{K}^n\to\mathbb{K}^m$ sending $\mathbf{v}\mapsto\big(\langle\mathbf{v},\mathbf{x}_1\rangle,\langle\mathbf{v},\mathbf{x}_2\rangle,\ldots,\langle\mathbf{v},\mathbf{x}_m\rangle\big)$ for all $\mathbf{v}\in\mathbb{K}^n$ is of rank $m$, so there exist $n-m$ linearly independent elements $\mathbf{y}_1,\mathbf{y}_2,\ldots,\mathbf{y}_{n-m}$ in $\ker(\boldsymbol{\varphi})$. Then, $\mathbf{y}_1,\mathbf{y}_2,\ldots,\mathbf{y}_{n-m}$ are $n-m$ linearly independent eigenvectors of $\mathbf{X}$ associated to the eigenvalue $0$.
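A numerical check of the sesquilinear version (a sketch; the vectors and scalars are my own illustrative choices, orthogonal with respect to the Hermitian form):

```python
import numpy as np

# Check the corollary over ℂ: X = Σ ξ_k x_k x_k† with x_k orthogonal under
# the sesquilinear form has eigenvalues ξ_k λ_k, where λ_k = ‖x_k‖² > 0.
x1 = np.array([1.0, 1.0j, 0.0])
x2 = np.array([1.0, -1.0j, 0.0])   # ⟨x1, x2⟩ = 1·conj(1) + i·conj(−i) = 0
xis = [3.0, -1.0]                  # here n = 3, m = 2

X = sum(xi * np.outer(x, x.conj()) for xi, x in zip(xis, (x1, x2)))
lams = [np.vdot(x, x).real for x in (x1, x2)]   # λ_k = 2 for each k here

expected = sorted([xi * lam for xi, lam in zip(xis, lams)] + [0.0])
print(np.allclose(sorted(np.linalg.eigvalsh(X)), expected))  # True
```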
P.S. The matrix $\mathbf{B}$ as in the hint of the problem can be taken to be the $n$-by-$m$ matrix $$\mathbf{B}:=\begin{bmatrix}
{\rule[-1ex]{0.5pt}{2.5ex}} & {\rule[-1ex]{0.5pt}{2.5ex}} & & {\rule[-1ex]{0.5pt}{2.5ex}}\\
\textbf{x}_1&\textbf{x}_2&\cdots&\textbf{x}_m\\
{\rule[-1ex]{0.5pt}{2.5ex}} & {\rule[-1ex]{0.5pt}{2.5ex}} & & {\rule[-1ex]{0.5pt}{2.5ex}}
\end{bmatrix}\,,$$ so that $\mathbf{A}=\mathbf{B}\,\mathbf{D}\,\mathbf{B}^\top$ (or $\mathbf{A}=\mathbf{B}\,\mathbf{D}\,\mathbf{B}^\dagger$ in the last part of my answer), where $\mathbf{D}$ is the $m$-by-$m$ diagonal matrix
$$\mathbf{D}:=\text{diag}\left(\xi_1,\xi_2,\ldots,\xi_m\right)\,.$$ You can see that $\mathbf{B}$ has full rank (i.e., $\text{rank}(\mathbf{B})=m$) and $\mathbf{B}^\top\,\mathbf{B}$ (or $\mathbf{B}^\dagger\, \mathbf{B}$ in the last part of my answer) is the $m$-by-$m$ diagonal matrix
$$\text{diag}\left(\lambda_1,\lambda_2,\ldots,\lambda_m\right)\,.$$
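This factorization is easy to verify numerically on the original matrix (a sketch; $a$ is an arbitrary illustrative choice with $\sum_k a_k=0$, and here $\xi_1=\xi_2=1$):

```python
import numpy as np

# The P.S. factorization A = B·D·Bᵀ with B = [1 | a] and D = diag(ξ_1, ξ_2),
# checked on A = 1·1ᵀ + a·aᵀ, where a sums to zero so that 1 ⊥ a.
n = 4
a = np.array([1.0, -2.0, 3.0, -2.0])    # sums to zero
ones = np.ones(n)

B = np.column_stack([ones, a])          # n-by-m matrix with columns x_1, ..., x_m
D = np.diag([1.0, 1.0])                 # here ξ_1 = ξ_2 = 1

A = np.outer(ones, ones) + np.outer(a, a)
print(np.allclose(A, B @ D @ B.T))                  # True
print(np.allclose(B.T @ B, np.diag([n, a @ a])))    # True: diag(λ_1, λ_2)
```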
Best Answer
EDIT: So here we go with a complete rewrite in the language of linear algebra.
In $V=\mathbb C^n$ with standard scalar product $\langle e_j,e_k\rangle=\delta_{jk}$ consider the linear map $$\begin{matrix}\phi\colon &V&\longrightarrow &V\\ &(x_1,x_2,\ldots,x_{n-1},x_n)&\longmapsto&(x_2,x_3,\ldots,x_n,x_1), \end{matrix}$$ that is $e_1\mapsto e_n$ and $e_k\mapsto e_{k-1}$ for $1<k\le n$. The problem statement asks us to maximize $\langle \phi a,a\rangle$ subject to the conditions
$$\begin{align}\tag{c1}\langle a,v_n\rangle &= 0,\\ \tag{c2}\langle a,a\rangle&=1,&\text{and}\\ \tag{c3}a&\in \mathbb R^n,\end{align}$$ where $v_n=(1,1,\ldots,1)$ is defined below.
Let $\zeta\in\mathbb C$ be a primitive $n$th root of unity, for example $\zeta=e^{\frac{2\pi i}n}=\cos\frac{2\pi}{n}+i\sin\frac{2\pi}{n}$. For $1\le k\le n$ let $$v_k=\sum_{\nu=1}^n \zeta^{\nu k}e_\nu=(\zeta^k,\zeta^{2k},\ldots ,\zeta^{nk}),$$ so that in particular $v_n=(1,1,\ldots,1)$. Then $$\langle v_k,v_k\rangle = n,\qquad\langle v_k,v_j\rangle=0\quad\text{for }j\ne k,\qquad \phi(v_k)=\zeta^kv_k,$$i.e. the $v_k$ are an orthogonal eigenvector basis of $V$. Thus if $$\tag1a=\sum_{k=1}^n c_kv_k$$ with $c_k\in\mathbb C$, then $$\langle a,a\rangle = n\sum_{k=1}^n|c_k|^2\qquad\text{and}\qquad\langle \phi a,a\rangle = n\sum_{k=1}^n \zeta^k|c_k|^2.$$
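These properties of the $v_k$ can be confirmed numerically (a sketch for one illustrative value of $n$):

```python
import numpy as np

# Check that the v_k are an orthogonal eigenvector basis of the cyclic shift:
# φ(v_k) = ζ^k v_k with ζ = e^{2πi/n}, and ⟨v_j, v_k⟩ = n·δ_{jk}.
n = 6
zeta = np.exp(2j * np.pi / n)
v = [zeta ** (np.arange(1, n + 1) * k) for k in range(1, n + 1)]

def shift(a):
    # the cyclic shift φ: (x_1, ..., x_n) ↦ (x_2, ..., x_n, x_1)
    return np.roll(a, -1)

print(all(np.allclose(shift(v[k - 1]), zeta ** k * v[k - 1])
          for k in range(1, n + 1)))                          # True
gram = np.array([[np.vdot(vj, vk) for vk in v] for vj in v])
print(np.allclose(gram, n * np.eye(n)))                       # True
```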
Condition (c$3$) implies in particular that $\langle \phi a,a\rangle \in\mathbb R$, and condition (c$1$) simply says that $c_n=0$ in $(1)$. From this we obtain the bound $$\begin{align}\langle \phi a,a\rangle& =n\sum_{k=1}^{n} \zeta^k|c_k|^2\\& =n\sum_{k=1}^{n-1} \zeta^k|c_k|^2\\&=n\Re\left(\sum_{k=1}^{n-1} \zeta^k|c_k|^2\right)\\&=n\sum_{k=1}^{n-1} \Re(\zeta^k)|c_k|^2\\\tag2&\le \max\bigl\{\Re(\xi)\mid \xi^n=1,\xi\ne1\bigr\}\cdot n\sum_{k=1}^{n-1}|c_k|^2\\&=\cos\frac{2\pi}{n}\cdot\langle a,a\rangle.\end{align}$$ Note that the special choice $a=\frac 1{\sqrt {2n}} (v_1+v_{n-1})=\frac 1{\sqrt {2n}} (v_1+\overline{v_1})$ yields $a\in \mathbb R^n$, $\langle a,a\rangle=1$ and equality in $(2)$. Therefore $$\max\bigl\{\langle \phi a,a\rangle\mid a\in\mathbb R^n,\langle a,a\rangle=1,\langle a,v_n\rangle=0\bigr\}= \cos\frac{2\pi}{n} .$$ Note that for $n=2$ and $n=3$, we recover the results $\cos\pi=-1$ and $\cos\frac{2\pi}3=-\frac12$, confirming what you found.
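Finally, the maximizer $a=\frac 1{\sqrt{2n}}(v_1+\overline{v_1})$ (normalized so that $\langle a,a\rangle=1$) can be checked numerically for one illustrative $n$:

```python
import numpy as np

# Check the maximizer a = (v_1 + conj(v_1))/√(2n): it is real, unit-norm,
# orthogonal to v_n = (1, ..., 1), and attains ⟨φa, a⟩ = cos(2π/n).
n = 7
zeta = np.exp(2j * np.pi / n)
v1 = zeta ** np.arange(1, n + 1)
a = (v1 + v1.conj()) / np.sqrt(2 * n)   # v_{n-1} = conj(v_1)

phi_a = np.roll(a, -1)                  # the cyclic shift φ applied to a
print(np.allclose(a.imag, 0))                                     # True: a ∈ ℝⁿ
print(np.isclose(np.vdot(a, a).real, 1.0))                        # True: ⟨a,a⟩ = 1
print(np.isclose(a.sum().real, 0.0))                              # True: ⟨a,v_n⟩ = 0
print(np.isclose(np.vdot(a, phi_a).real, np.cos(2 * np.pi / n)))  # True
```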