The matrix of a linear transformation with respect to a basis has as columns the coordinates of the images of the basis elements under the transformation. If you have a basis of eigenvectors then the corresponding matrix will be diagonal because $A v_i = \lambda_i v_i$, that is, there is only one non-zero coordinate (more precisely, at most one, because $\lambda_i$ may be $0$).
You seem to know the theory behind the problem. So there is no reason you cannot find the eigenvalues of $A$ and the corresponding eigenvectors required for the diagonalisation. Note that since $A$ is symmetric, it is orthogonally diagonalisable.
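As a concrete check of the claim that an eigenvector basis gives a diagonal matrix, here is a minimal sketch in plain Python for a $2\times 2$ case. The matrix and its eigenvectors are illustrative choices, not from the original problem: for the symmetric matrix below, the columns of $P$ are normalised eigenvectors, so $P$ is orthogonal and $P^TAP$ is diagonal.

```python
def matmul(M, N):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2.0, 1.0],
     [1.0, 2.0]]          # symmetric; eigenvalues 3 and 1

# Columns of P are the normalised eigenvectors (1,1)/sqrt(2), (1,-1)/sqrt(2).
s = 2 ** -0.5
P = [[s,  s],
     [s, -s]]             # orthogonal: P^T = P^{-1}

Pt = [[P[j][i] for j in range(2)] for i in range(2)]   # transpose of P
D = matmul(Pt, matmul(A, P))                           # P^T A P

print(D)   # off-diagonal entries vanish (up to rounding); diagonal = eigenvalues
```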
To do this in a slightly different setting but essentially the same method, consider the quadratic form
$$Q(X,Y)=(X\quad Y)\begin{pmatrix}a&h\\h&b\end{pmatrix}\begin{pmatrix}X\\Y\end{pmatrix}=aX^2+2hXY+bY^2\quad,\,\,a\ne b$$
Let $x$ be the angle by which the coordinate axes $X,Y$ should be rotated about the origin so that $Q$ is transformed to another quadratic form in which the product term is absent. The vanishing of the product term is equivalent to diagonalising the matrix $A$ or the quadratic form $Q$.
Suppose $u,v$ are the coordinates with respect to the rotated axes.
The required transformation is given by
$$\begin{pmatrix}u\\v\end{pmatrix}=\begin{pmatrix}\cos x&\sin x\\-\sin x&\cos x\end{pmatrix}\begin{pmatrix}X\\Y\end{pmatrix}$$
Or, $$\begin{pmatrix}X\\Y\end{pmatrix}=\begin{pmatrix}\cos x&-\sin x\\\sin x&\cos x\end{pmatrix}\begin{pmatrix}u\\v\end{pmatrix}$$
So, $$Q(X,Y)=Q'(u,v)=Au^2+2Huv+Bv^2,$$ where
$$A=a\cos^2x+2h\sin x\cos x+b\sin^2 x,$$
$$B=a\sin^2x-2h\sin x\cos x+b\cos^2 x,$$
$$H=(b-a)\sin x\cos x+h(\cos^2x-\sin^2 x)$$
The product term $uv$ in $Q'$ vanishes iff
\begin{align}
H=0&\implies(b-a)\sin x\cos x+h(\cos^2 x-\sin^2 x)=0
\\&\implies (b-a)\sin 2x+2h\cos 2x=0
\\&\implies \tan 2x=\frac{2h}{a-b}
\end{align}
Thus, conjugating by the rotation matrix $P$ above, we have transformed the matrix of $Q$ into a diagonal matrix $D$, which is nothing but the matrix associated with the quadratic form $Au^2+Bv^2$.
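The derivation above can be checked numerically. In this sketch (plain Python; the sample values $a,h,b$ are arbitrary choices, not from the original problem), the angle is taken from $\tan 2x = \frac{2h}{a-b}$ and the coefficients $A$, $B$, $H$ are evaluated directly:

```python
import math

a, h, b = 3.0, 2.0, 1.0                      # Q(X, Y) = 3X^2 + 4XY + Y^2
x = 0.5 * math.atan2(2 * h, a - b)           # solves tan 2x = 2h / (a - b)

c, s = math.cos(x), math.sin(x)
A = a * c * c + 2 * h * s * c + b * s * s    # u^2 coefficient
B = a * s * s - 2 * h * s * c + b * c * c    # v^2 coefficient
H = (b - a) * s * c + h * (c * c - s * s)    # uv coefficient -- should vanish

print(H)       # ~0: the product term is gone
print(A + B)   # equals a + b, since rotation preserves the trace
```

Note that $A+B=a+b$ and $AB-H^2=ab-h^2$: the rotation preserves the trace and determinant, so $A$ and $B$ are precisely the eigenvalues of the matrix of $Q$.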
Best Answer
To swap the $i$th and $j$th entries along the diagonal of a diagonal matrix: Using $E_{ij}$ to denote the matrix with $1$ in the $i$th row and $j$th column and $0$ in all other positions, multiply your matrix on both sides by $I - E_{ii} - E_{jj} + E_{ij} + E_{ji}$. (Since this matrix is involutory, this can be viewed as conjugation.)
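A quick sketch of this construction in plain Python (the $4\times 4$ diagonal matrix and the 0-based indices are illustrative, not from the original):

```python
def swap_matrix(n, i, j):
    """S = I - E_ii - E_jj + E_ij + E_ji (0-based i, j):
    the identity matrix with rows i and j swapped."""
    S = [[int(r == c) for c in range(n)] for r in range(n)]
    S[i][i] = S[j][j] = 0
    S[i][j] = S[j][i] = 1
    return S

def matmul(M, N):
    """Multiply two n x n matrices given as nested lists."""
    n = len(M)
    return [[sum(M[r][k] * N[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]

D = [[5, 0, 0, 0],
     [0, 7, 0, 0],
     [0, 0, 2, 0],
     [0, 0, 0, 9]]
S = swap_matrix(4, 1, 2)          # swap the 2nd and 3rd diagonal entries

I = [[int(r == c) for c in range(4)] for r in range(4)]
assert matmul(S, S) == I          # S is involutory, so S^{-1} = S
print(matmul(S, matmul(D, S)))    # diag(5, 2, 7, 9)
```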
The matrix for swapping adjacent entries in pairs would look like $$\begin{pmatrix} A & 0 & \cdots & 0 \\ 0 & A & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & A \end{pmatrix}$$ in block form, where $A = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}$.
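The block matrix above can be sketched for a concrete size (plain Python, 0-based indexing; $n=6$ and the diagonal $1,\dots,6$ are illustrative, and $n$ is assumed even):

```python
def pairwise_swap(n):
    """Block-diagonal matrix with 2x2 blocks A = [[0, 1], [1, 0]]:
    conjugation swaps diagonal entries (0,1), (2,3), ... in pairs."""
    P = [[0] * n for _ in range(n)]
    for k in range(0, n, 2):
        P[k][k + 1] = P[k + 1][k] = 1
    return P

def matmul(M, N):
    """Multiply two n x n matrices given as nested lists."""
    n = len(M)
    return [[sum(M[r][k] * N[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]

n = 6
D = [[r + 1 if r == c else 0 for c in range(n)] for r in range(n)]  # diag(1..6)
P = pairwise_swap(n)
print([matmul(P, matmul(D, P))[i][i] for i in range(n)])  # [2, 1, 4, 3, 6, 5]
```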