A real symmetric matrix has a full basis of orthonormal eigenvectors, so given $A$ as defined in the question, there exists a real orthogonal matrix $U$ such that $U^T A U$ is diagonal. For reasons outlined here, the practical methods for computing eigenvalues and eigenvectors of matrices are iterative rather than "direct" (in the sense of giving an explicit formula).
The premise of the SciComp.SE question linked above is that iterative methods for the real symmetric eigenvalue/diagonalization problem typically begin by reducing the matrix to a similar (real symmetric) tridiagonal form using Householder transformations or, in some cases, Givens rotations, another type of real orthogonal matrix. Orthogonal matrices are desirable not merely because their inverses come cheaply (by taking the transpose) but because their numerical error properties are advantageous: multiplication by an orthogonal matrix preserves the length of a vector, which also helps control the growth of rounding errors.
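To make the reduction step concrete, here is a minimal textbook sketch of Householder tridiagonalization in NumPy. The function name `tridiagonalize` and the dense-matrix formulation are my own illustration; production routines (e.g. LAPACK's `dsytrd`) use compact storage and never form the reflectors as full matrices.

```python
import numpy as np

def tridiagonalize(A):
    """Reduce a real symmetric matrix A to tridiagonal T = Q^T A Q,
    Q orthogonal, via Householder reflections (a dense teaching sketch)."""
    A = np.array(A, dtype=float)
    n = A.shape[0]
    Q = np.eye(n)
    for j in range(n - 2):
        x = A[j+1:, j].copy()
        # Choose the sign so that v = x - alpha*e1 avoids cancellation.
        alpha = -np.sign(x[0] if x[0] != 0 else 1.0) * np.linalg.norm(x)
        v = x.copy()
        v[0] -= alpha
        nv = np.linalg.norm(v)
        if nv < 1e-15:
            continue  # column already in the desired form
        v /= nv
        # H is symmetric orthogonal; it acts only on rows/cols j+1..n-1.
        H = np.eye(n)
        H[j+1:, j+1:] -= 2.0 * np.outer(v, v)
        A = H @ A @ H
        Q = Q @ H
    return A, Q
```

Each reflection zeroes one column below the subdiagonal while preserving symmetry, so after $n-2$ steps the matrix is tridiagonal and similar to the original.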
While the tridiagonalization step is direct, getting from there to a diagonal matrix is where iterative methods come into play. The main benefit is that instead of working with a "full" matrix, where a matrix-vector multiplication requires $O(n^2)$ operations, in tridiagonal form a matrix-vector multiply costs only $O(n)$.
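The $O(n)$ claim is easy to see in code; here is a minimal sketch (the name `tridiag_matvec` and the storage as diagonal `d` plus off-diagonal `e` are illustrative, assuming NumPy):

```python
import numpy as np

def tridiag_matvec(d, e, x):
    """y = T x for a symmetric tridiagonal T stored as its diagonal d
    (length n) and off-diagonal e (length n-1): 3n-2 multiplies, O(n)."""
    y = d * x            # diagonal contribution
    y[:-1] += e * x[1:]  # superdiagonal contribution
    y[1:] += e * x[:-1]  # subdiagonal contribution
    return y
```

Only the $3n-2$ nonzero entries are touched, versus $n^2$ entries for a full matrix.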
When working with a Toeplitz matrix, matrix-vector multiplication can be done in $O(n \ln n)$ time by embedding the $n \times n$ Toeplitz matrix in a $2n \times 2n$ circulant matrix, padding the vector with zeros and using FFT.
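A minimal sketch of the circulant-embedding trick, assuming NumPy (the helper name `toeplitz_matvec` is my own; the padding entry in the circulant's first column is arbitrary):

```python
import numpy as np

def toeplitz_matvec(first_col, first_row, x):
    """Multiply an n x n Toeplitz matrix by x in O(n log n) by embedding
    it in a 2n x 2n circulant and diagonalizing the circulant with the FFT.

    first_col : first column (t_0, t_1, ..., t_{n-1})
    first_row : first row    (t_0, t_{-1}, ..., t_{-(n-1)})
    """
    n = len(x)
    # First column of the embedding circulant: the Toeplitz first column,
    # one arbitrary padding entry (0 here), then the first row reversed.
    c = np.concatenate([first_col, [0.0], first_row[:0:-1]])
    # Zero-pad x to length 2n, multiply in the Fourier domain, keep the
    # first n entries of the circulant product.
    y = np.fft.ifft(np.fft.fft(c) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real
```

The FFTs dominate the cost, giving the stated $O(n \ln n)$ bound.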
An approach of that kind preserves the Toeplitz structure of the original matrix, and so can be called a structure-preserving algorithm. Structure-preserving algorithms for bisymmetric eigenproblems have been studied; see, e.g., Structure Preserving Algorithms for Perplectic Eigenproblems. We restrict ourselves for now to describing the "standard trick" mentioned in my comment for transforming a $2n \times 2n$ bisymmetric matrix into an orthogonally similar matrix with two $n \times n$ blocks on the diagonal.
Let $ \def\iddots{ {\kern3mu\raise1mu{.}\kern3mu\raise6mu{.}\kern3mu\raise12mu{.}}} P = \begin{pmatrix} 0 & \cdots & 1 \\ \vdots & \iddots & \vdots \\ 1 & \cdots & 0 \end{pmatrix}$ be the order-reversing $n \times n$ permutation matrix, clearly symmetric as well as orthogonal. Then define $U = \frac{1}{\sqrt{2}}\begin{pmatrix} I & P \\ -P & I \end{pmatrix}$, an orthogonal $2n \times 2n$ matrix.
The bisymmetric $2n \times 2n$ matrix $A$ has the form $\begin{pmatrix} M & B \\ B^T & PMP \end{pmatrix}$, where $M$ is symmetric and $PB = B^T P$ (i.e. $PB$ is symmetric as well). Then:
$$ U^T A U = \begin{pmatrix} M - BP & 0 \\ 0 & PMP + PB \end{pmatrix} $$
Thus diagonalizing $A$ amounts to diagonalizing each of the half-sized symmetric blocks of this reduced form.
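The block reduction is easy to verify numerically; the following sketch (assuming NumPy; variable names are my own) builds a random bisymmetric matrix as $S + JSJ$ with $S$ symmetric and $J$ the $2n \times 2n$ exchange matrix, then checks that $U^T A U$ has exactly the stated block-diagonal form:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# S + J S J is symmetric about both diagonals (bisymmetric) when S is symmetric.
S = rng.standard_normal((2 * n, 2 * n))
S = S + S.T
J = np.fliplr(np.eye(2 * n))
A = S + J @ S @ J

# The orthogonal transform U from the text, with P the n x n exchange matrix.
P = np.fliplr(np.eye(n))
I = np.eye(n)
U = np.block([[I, P], [-P, I]]) / np.sqrt(2)

T = U.T @ A @ U
# The off-diagonal blocks vanish...
assert np.allclose(T[:n, n:], 0) and np.allclose(T[n:, :n], 0)
# ...and the diagonal blocks are M - BP and PMP + PB, as claimed.
M, B = A[:n, :n], A[:n, n:]
assert np.allclose(T[:n, :n], M - B @ P)
assert np.allclose(T[n:, n:], P @ M @ P + P @ B)
```

Each half-sized block can then be handed to a symmetric eigensolver separately.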
Added:
Below are some computations of eigenvalues for small sizes, showing how the eigenvalues for consecutive values of $n$ interlace:
   n=6        n=7        n=8        n=9        n=10
                                             3.945471
                                  3.740494
                       3.512377              1.559856
            3.223818              1.361472
 2.960929              1.142162              0.604773
            0.930378              0.414335
 0.620470              0.205781             -0.024785
            0.005202             -0.203630
-0.278546             -0.396355             -0.455340
           -0.666005             -0.619805
-0.824663             -0.792701             -0.776527
           -0.949686             -0.922321
-1.149049             -1.069249             -1.010681
           -1.184892             -1.134022
-1.329141             -1.249268             -1.181594
           -1.358814             -1.277188
                      -1.352748             -1.296923
                                 -1.359335
                                            -1.364250
Let $B := \{ B_{t} \}_{t\geq 0} $ be a fractional Brownian motion of Hurst parameter $1-\alpha/2 $ on some probability space $(\Omega,\mathcal{F},P)$. Since $B$ has the covariance function
$$
E(B_{t}B_{s}) = \frac{1}{2}\left( s^{2(1-\alpha/2)} + t^{2(1-\alpha/2)} - |t-s|^{2(1-\alpha/2)} \right),
$$
a simple calculation yields
$$
E( (B_{n+k}-B_{n+k-1})(B_{n}-B_{n-1}) ) = \frac{1}{2}\left( (k+1)^{2-\alpha} + |k-1|^{2-\alpha} - 2k^{2-\alpha} \right)
$$
for any integers $n\geq 1$ and $ k\geq 0$ ($E$ means the expectation under $P$). Let us set (note that $B_{0}=0$ a.s.)
$$
X=\left[ \begin{array}{c}
B_{1} \\
B_{2} \\
\vdots \\
B_{k}
\end{array}\right],\
\Delta X=\left[ \begin{array}{c}
B_{1}-B_{0} \\
B_{2}-B_{1} \\
\vdots \\
B_{k}-B_{k-1}
\end{array}\right],\
A=\left[ \begin{array}{ccccc}
1 & 0 & 0 & \cdots & 0 \\
-1 & 1 & 0 & \cdots & 0 \\
0 & \ddots & \ddots & \ddots & \vdots \\
\vdots & \ddots & \ddots & \ddots & 0 \\
0 & \cdots & 0 & -1 & 1
\end{array}\right],
$$
then, since $\Delta X = AX$, the symmetric Toeplitz matrix of interest (which we denote by $C$) can be written as
$$
C = E\left( \Delta X(\Delta X)^{\top} \right)= A E(XX^{\top})A^{\top},
$$
where the superscript $\top$ means transpose.
Since $E(XX^{\top})$ is positive-definite (a proof of this fact can be found in, e.g., Proposition 1.6 of "Selected Aspects of Fractional Brownian Motion" by Ivan Nourdin), $C$ is also positive-definite.
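The identity $C = A\,E(XX^{\top})A^{\top}$ and the positive-definiteness claim can be checked numerically; a minimal sketch assuming NumPy (the sample values $\alpha = 0.5$ and $k = 10$ are my own choices):

```python
import numpy as np

alpha = 0.5          # so the Hurst parameter is H = 1 - alpha/2 = 0.75
h2 = 2.0 - alpha     # the exponent 2H
k = 10

# Covariance E(X X^T) of (B_1, ..., B_k) from the fBm covariance function.
i = np.arange(1, k + 1, dtype=float)
G = 0.5 * (i[:, None]**h2 + i[None, :]**h2 - np.abs(i[:, None] - i[None, :])**h2)

# Differencing matrix A: 1 on the diagonal, -1 on the subdiagonal.
A = np.eye(k) - np.eye(k, k=-1)

# C = A E(XX^T) A^T should match the symmetric Toeplitz matrix with entries
# gamma(|i-j|) = ((m+1)^{2H} + |m-1|^{2H} - 2 m^{2H}) / 2 from the text.
C = A @ G @ A.T
m = np.abs(np.subtract.outer(np.arange(k), np.arange(k))).astype(float)
gamma = 0.5 * ((m + 1)**h2 + np.abs(m - 1)**h2 - 2 * m**h2)
assert np.allclose(C, gamma)

# Positive-definiteness: Cholesky succeeds iff C is positive-definite.
np.linalg.cholesky(C)
```

The Cholesky factorization raises `LinAlgError` for a non-positive-definite input, so reaching the end of the script confirms both claims for this instance.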
Best Answer
The matrix $\Sigma$ can be written as $$ \Sigma=\lambda I+(1-\lambda)E, $$ where $E=ee^T$, $e=[1,1,\ldots,1]^T$. If $V$ is an orthogonal matrix such that $V^TEV=D$ is diagonal, then $$\tag{1} V^T\Sigma V=\lambda I+(1-\lambda)D $$ is the diagonalization of $\Sigma$.
All that remains is to find the matrices $D$ and $V$. Since $E$ has rank one, it has one nonzero eigenvalue and $n-1$ zero eigenvalues, and because $E$ is symmetric we can choose an orthonormal set of eigenvectors.
We have $$ Ee=ne, $$ hence $n$ is the only nonzero eigenvalue of $E$ with the constant eigenvector $v_1=e/\sqrt{n}$ (normalized to $\|v_1\|=1$). The other eigenvectors $v_2,\ldots,v_n$ of $E$ corresponding to the zero eigenvalues can be any set of orthonormal eigenvectors orthogonal to $e$. Indeed, if $0\neq v\perp e$, then $$ Ev=(ee^T)v=e(e^Tv)=0. $$ Since the eigenvalues of $\Sigma$ are $\lambda$ plus $(1-\lambda)$ times the eigenvalues of $E$, the eigenvalues of $\Sigma$ are $\lambda$ with the multiplicity $n-1$ and the simple eigenvalue $\lambda+(1-\lambda)n$.
Note that there are infinitely many bases of the eigenspace of $E$ corresponding to the zero eigenvalues. You can choose any $n-1$ orthonormal vectors orthogonal to a constant vector.
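A quick numerical check of the stated spectrum, assuming NumPy (the values $n = 5$, $\lambda = 0.3$ are arbitrary illustrations):

```python
import numpy as np

n, lam = 5, 0.3
e = np.ones((n, 1))
Sigma = lam * np.eye(n) + (1 - lam) * (e @ e.T)

# Spectrum: lambda with multiplicity n-1, plus the simple
# eigenvalue lambda + (1 - lambda) * n.
w = np.sort(np.linalg.eigvalsh(Sigma))
expected = np.sort(np.r_[np.full(n - 1, lam), lam + (1 - lam) * n])
assert np.allclose(w, expected)

# v1 = e / sqrt(n) is an eigenvector for the simple eigenvalue.
v1 = e / np.sqrt(n)
assert np.allclose(Sigma @ v1, (lam + (1 - lam) * n) * v1)
```

Any orthonormal completion of $v_1$ (e.g. the remaining columns of a QR factorization of a matrix whose first column is $e$) gives a valid $V$.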