Let $T$ be the linear operator on $M_{n}(\textbf{R})$ defined by $T(A) = A^{t}$. Find a basis $\mathcal{B}$ such that $[T]_{\mathcal{B}}$ is diagonal.

eigenvalues-eigenvectors, linear-algebra, matrices, solution-verification

Let $T$ be the linear operator on $M_{n\times n}(\textbf{R})$ defined by $T(A) = A^{t}$.

(a) Show that $\pm 1$ are the only eigenvalues of $T$.

(b) Describe the eigenvectors corresponding to each eigenvalue of $T$.

(c) Find an ordered basis $\mathcal{B}$ for $M_{2\times 2}(\textbf{R})$ such that $[T]_{\mathcal{B}}$ is a diagonal matrix.

(d) Find an ordered basis $\mathcal{B}$ for $M_{n\times n}(\textbf{R})$ such that $[T]_{\mathcal{B}}$ is a diagonal matrix for $n > 2$.

MY (NEW) ATTEMPT

(a) Let $A\in M_{n\times n}(\textbf{R})$ with $A \neq 0$. Then $A$ is an eigenvector of $T$ if and only if $T(A) = \lambda A$ for some scalar $\lambda$, that is to say, $A^{t} = \lambda A$.

Transposing both sides, we conclude that $A = \lambda A^{t}$. Finally, one has that
\begin{align*}
A^{t} = \lambda(\lambda A^{t}) \Rightarrow A^{t} = \lambda^{2}A^{t} \Rightarrow \det(A^{t}) = \lambda^{2n}\det(A^{t}) \Rightarrow \lambda^{2n} = 1 \Rightarrow \lambda = \pm 1
\end{align*}

But I still do not know how to justify the step where I assume that $\det(A^{t}) \neq 0$.

Any help in this sense is appreciated.
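As a numerical sanity check (a sketch assuming NumPy is available, not part of the proof), one can represent $T$ as an $n^{2}\times n^{2}$ matrix acting on the flattened entries of $A$ and compute its eigenvalues directly:

```python
import numpy as np

# Represent T(A) = A^t as an n^2 x n^2 matrix acting on vec(A)
# (row-major flattening) and inspect its eigenvalues.
n = 3
T_mat = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        # entry (i, j) of T(A) equals entry (j, i) of A
        T_mat[i * n + j, j * n + i] = 1.0

eigvals = np.linalg.eigvals(T_mat)
print(sorted(np.round(eigvals.real, 10)))  # only 1's and -1's appear
```

Note that `T_mat` squares to the identity (transposing twice gives back $A$), which already forces every eigenvalue to satisfy $\lambda^{2} = 1$.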

(b) For $\lambda = 1$, we have that $T(A) = A^{t} = A$.

Thus the eigenvectors corresponding to $\lambda = 1$ are precisely the nonzero elements of the subspace of symmetric $n\times n$ matrices.

For $\lambda = -1$, we have that $T(A) = A^{t} = -A$.

Thus the eigenvectors corresponding to $\lambda = -1$ are precisely the nonzero elements of the subspace of skew-symmetric $n\times n$ matrices.

(c) Let us determine the eigenspace associated to each eigenvalue. We shall start with $\lambda =1$:
\begin{align*}
T(A) = 1\cdot A \Rightarrow A^{t} = A \Rightarrow
\begin{bmatrix}
a & c\\
b & d
\end{bmatrix} =
\begin{bmatrix}
a & b\\
c & d
\end{bmatrix} \Rightarrow b = c
\end{align*}

Thus the eigenspace associated to the eigenvalue $\lambda = 1$ is spanned by
\begin{align*}
E_{1} = \left\{\begin{bmatrix}
1 & 0\\
0 & 0\\
\end{bmatrix},
\begin{bmatrix}
0 & 1\\
1 & 0\\
\end{bmatrix},
\begin{bmatrix}
0 & 0\\
0 & 1\\
\end{bmatrix}\right\}
\end{align*}

We may now proceed and determine the eigenspace associated to $\lambda = -1$:
\begin{align*}
T(A) = -1\cdot A \Rightarrow A^{t} = -A \Rightarrow \begin{bmatrix}
a & c\\
b & d
\end{bmatrix} =
\begin{bmatrix}
-a & -b\\
-c & -d
\end{bmatrix} \Rightarrow b = -c,\,a = 0,\,d = 0.
\end{align*}

Thus the eigenspace associated to the eigenvalue $\lambda = -1$ is spanned by
\begin{align*}
E_{-1} = \left\{\begin{bmatrix}
0 & 1\\
-1 & 0\\
\end{bmatrix}\right\}
\end{align*}

Since $E_{1}\cup E_{-1}$ is linearly independent and spans $M_{2\times 2}(\textbf{R})$, we conclude that $\mathcal{B} = E_{1}\cup E_{-1}$ is indeed a basis.
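As a sanity check (a sketch assuming NumPy is available), we can compute $[T]_{\mathcal{B}}$ for this basis numerically and confirm it is diagonal:

```python
import numpy as np

# Basis B = E_1 ∪ E_{-1} for M_2(R): three symmetric matrices, one skew.
B = [
    np.array([[1., 0.], [0., 0.]]),
    np.array([[0., 1.], [1., 0.]]),
    np.array([[0., 0.], [0., 1.]]),
    np.array([[0., 1.], [-1., 0.]]),
]
# Columns of P are the vectorized basis elements.
P = np.column_stack([M.flatten() for M in B])
# Column k of [T]_B holds the coordinates of T(B_k) = (B_k)^t in the basis.
T_B = np.linalg.solve(P, np.column_stack([M.T.flatten() for M in B]))
print(np.round(T_B))  # diag(1, 1, 1, -1)
```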

(d) The same reasoning applies to this case. Precisely speaking, the resulting basis contains $\displaystyle\frac{n(n+1)}{2}$ symmetric matrices and $\displaystyle\frac{n(n-1)}{2}$ skew-symmetric matrices, for a total of $n^{2} = \dim M_{n\times n}(\textbf{R})$ eigenvectors.
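The general construction can be sketched numerically (assuming NumPy is available): take $E_{ii}$ and $E_{ij}+E_{ji}$ ($i < j$) for the symmetric part and $E_{ij}-E_{ji}$ ($i < j$) for the skew-symmetric part, where $E_{ij}$ is the matrix with a single $1$ in position $(i,j)$, and check that $[T]_{\mathcal{B}}$ comes out diagonal. Here is the case $n = 3$:

```python
import numpy as np

n = 3

def E(i, j):
    M = np.zeros((n, n))
    M[i, j] = 1.0
    return M

# n(n+1)/2 symmetric basis matrices: E_ii, and E_ij + E_ji for i < j
sym = [E(i, i) for i in range(n)] + \
      [E(i, j) + E(j, i) for i in range(n) for j in range(i + 1, n)]
# n(n-1)/2 skew-symmetric basis matrices: E_ij - E_ji for i < j
skew = [E(i, j) - E(j, i) for i in range(n) for j in range(i + 1, n)]
basis = sym + skew

# Column k of [T]_B holds the coordinates of (B_k)^t in the basis.
P = np.column_stack([M.flatten() for M in basis])
T_B = np.linalg.solve(P, np.column_stack([M.T.flatten() for M in basis]))
print(np.round(T_B).diagonal())  # six 1's followed by three -1's
```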

COMMENT

This question has already been asked here, but I'd like to know if my new approach is fine. Any comments on the wording of my solution would be appreciated.

Best Answer

From $A^t=\lambda A$ you can deduce $\lambda=1$ provided $A$ has a nonzero diagonal entry. But there can be eigenvectors that have zero along the diagonal and you would need to consider other cases.

There's a simpler way: every matrix $A$ can be uniquely written as the sum of a symmetric and an antisymmetric matrix: $A=A_++A_-$, where $$ A_+=\frac{1}{2}(A+A^t),\qquad A_-=\frac{1}{2}(A-A^t) $$ Then $A^t=A_+-A_-$ and from $A^t=\lambda A$ we get $$ A_+ - A_-=\lambda A_+ + \lambda A_- $$ By uniqueness, we obtain that $A_+=\lambda A_+$ and $-A_-=\lambda A_-$. Since $A\ne0$ (being an eigenvector), we see that either $\lambda=1$ or $\lambda=-1$ and, moreover, that an eigenvector is either a symmetric matrix (for the eigenvalue $1$) or an antisymmetric matrix (for the eigenvalue $-1$). Conversely, any (nonzero) symmetric matrix is an eigenvector for $1$ and any (nonzero) antisymmetric matrix is an eigenvector for $-1$.
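The splitting into symmetric and antisymmetric parts is easy to verify numerically (a sketch assuming NumPy is available):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))  # an arbitrary 4x4 real matrix

A_plus = 0.5 * (A + A.T)   # symmetric part: T(A_plus) = A_plus
A_minus = 0.5 * (A - A.T)  # antisymmetric part: T(A_minus) = -A_minus

assert np.allclose(A, A_plus + A_minus)  # A = A_+ + A_-
assert np.allclose(A_plus.T, A_plus)     # eigenvector for eigenvalue 1
assert np.allclose(A_minus.T, -A_minus)  # eigenvector for eigenvalue -1
```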

Since the sum of the eigenspaces is $M_n(\mathbf{R})$, the endomorphism $T$ is diagonalizable and a basis of eigenvectors can be obtained by a basis of the symmetric matrices, with dimension $n(n+1)/2$, and of the antisymmetric matrices, with dimension $n(n-1)/2$. In the case $n=2$ a basis is $$ \biggl\{ \begin{bmatrix} 1 & 0 \\ 0 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix}, \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix}, \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} \biggr\} $$


About your attempt: $\lambda^{2n}=1$ doesn't imply by itself that $\lambda=\pm1$. You need to exclude complex eigenvalues and the fact you're working with real matrices is not relevant: the matrix $$ \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix} $$ has real entries, but no real eigenvalue. On the other hand, for its eigenvalues $\lambda$ it is true that $\lambda^4=1$. Of course it's a different situation than yours, but it shows you have to be careful when jumping to conclusions.
