[Math] Finding ordered basis such that $[T]_\beta$ is a diagonal matrix

linear-algebra, linear-transformations, matrices, problem-solving

Let $T$ be the linear operator on $M_{n \times n}(\mathbb{R})$ defined by $T(A) = A^{t}$.

1). Find an ordered basis $\beta$ for $M_{2 \times 2}(\mathbb{R})$ such that $[T]_\beta$ is a diagonal matrix.

2). Find an ordered basis $\beta$ for $M_{n \times n}(\mathbb{R})$ such that $[T]_\beta$ is a diagonal matrix, for $n > 2$.

The theorem I can think of is

Theorem: A linear operator $T$ on a finite-dimensional vector space $V$ is diagonalizable if and only if there exists an ordered basis $\beta$ for $V$ consisting of eigenvectors of $T$. Furthermore, if $T$ is diagonalizable, $\beta = \{v_1, v_2, \ldots, v_n\}$ is an ordered basis of eigenvectors of $T$, and $D = [T]_\beta$, then $D$ is a diagonal matrix and $D_{jj}$ is the eigenvalue corresponding to $v_j$ for $1 \leq j \leq n$.

Question: I am still not sure how to tackle these problems; any thoughts?

Best Answer

Start with the first part. As you said, the eigenvalues are $\lambda = \pm 1$. I'm not 100% clear on how you've gone about this, but even if this was just a guess, we will vindicate this in the end.
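One quick way to see why $\pm 1$ are the only candidates, in case this wasn't the route you took: transposing twice returns the original matrix, so $T^2$ is the identity operator. Hence, if $T(A) = \lambda A$ for some $A \neq 0$, then $$A = T^2(A) = T(\lambda A) = \lambda T(A) = \lambda^2 A,$$ which forces $\lambda^2 = 1$, i.e. $\lambda = \pm 1$.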

Let's first look for the eigenvectors corresponding to $\lambda = -1$. We wish to solve the equation $$T(A) = A^\top = (-1)A,$$ where $A$ is a $2 \times 2$ real matrix. Matrices $A$ in $M_{2 \times 2}(\Bbb{R})$ take the form $$A = \begin{pmatrix} a & b \\ c & d\end{pmatrix}.$$ Then, we are solving $$\begin{pmatrix} a & c \\ b & d\end{pmatrix} = \begin{pmatrix} -a & -b \\ -c & -d\end{pmatrix}.$$ By equating entries, \begin{align*} a &= -a \\ c &= -b \\ b &= -c \\ d &= -d. \end{align*} This is now a system of linear equations. The first and fourth equations imply that $a = 0$ and $d= 0$, while the second and third equations contend the same thing: $c = -b$. Thus, our eigenvector must take the following form: $$A = \begin{pmatrix} 0 & -b \\ b & 0\end{pmatrix},$$ where $b \in \Bbb{R}$. Please verify that $T(A) = -A$, as required, so $A$ is definitely an eigenvector for eigenvalue $-1$, so long as $A \neq 0$. That is, so long as $b \neq 0$. As such, our eigenspace is spanned by a single vector: $$\begin{pmatrix} 0 & -1 \\ 1 & 0\end{pmatrix}.$$
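In case it helps to see that check written out, for the spanning matrix itself it is one line: $$T\begin{pmatrix} 0 & -1 \\ 1 & 0\end{pmatrix} = \begin{pmatrix} 0 & -1 \\ 1 & 0\end{pmatrix}^\top = \begin{pmatrix} 0 & 1 \\ -1 & 0\end{pmatrix} = -\begin{pmatrix} 0 & -1 \\ 1 & 0\end{pmatrix}.$$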


We can do the same thing for the other eigenvalue. We're now solving $$\begin{pmatrix} a & c \\ b & d\end{pmatrix} = \begin{pmatrix} a & b \\ c & d\end{pmatrix}.$$ By equating entries, \begin{align*} a &= a \\ c &= b \\ b &= c \\ d &= d. \end{align*} Now, the first and fourth equations are tautological, and can be ignored. The second and third equations tell us the same thing again: $c = b$. Thus, our eigenvectors take the form \begin{align*} A &= \begin{pmatrix} a & b \\ b & d\end{pmatrix} \\ &= a \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} + b \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} + d \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}. \end{align*} That is, $A$ must be a linear combination of the above three matrices. Verify that they are eigenvectors, that they're linearly independent, and hence conclude that they form a basis for the eigenspace.
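Again, the checks are quick: each of these three matrices is symmetric, so it is unchanged by transposition. For instance, $$T\begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix},$$ and likewise for the two diagonal matrices.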

So, we can form an eigenbasis $$\left\{\begin{pmatrix} 0 & -1 \\ 1 & 0\end{pmatrix}, \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix}\right\}.$$ Note that if we were missing any eigenvalues, we wouldn't have four linearly independent eigenvectors, so indeed, $-1$ and $+1$ are the only two eigenvalues.
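Concretely, if we call this ordered basis $\beta$ (in the order listed), then $T$ sends the first basis matrix to $-1$ times itself and fixes the other three, so the matrix the problem asks for is $$[T]_\beta = \begin{pmatrix} -1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix},$$ which is diagonal, as required.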

That's it for part 1). For part 2), I would suggest thinking about how this generalises. Our eigenbasis from the previous part consisted of matrices with a single $1$ on the diagonal (and $0$s elsewhere), as well as a matrix with two symmetric off-diagonal $1$s, and another matrix where these $1$s had different signs. Think about how you'd generalise this to more dimensions; a rough sketch of how the count works out is below.
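In case a nudge helps, here is a sketch of one way this might go (please verify the details yourself). Writing $E_{ij}$ for the $n \times n$ matrix with a $1$ in entry $(i, j)$ and $0$s elsewhere, a natural guess is to take $$\{E_{ii} : 1 \leq i \leq n\} \cup \{E_{ij} + E_{ji} : i < j\}$$ as candidate eigenvectors for $\lambda = +1$ (they are symmetric, so transposing fixes them), and $$\{E_{ij} - E_{ji} : i < j\}$$ as candidates for $\lambda = -1$ (they are antisymmetric, so transposing negates them). Counting, that's $n + \binom{n}{2} + \binom{n}{2} = n^2$ matrices, which matches $\dim M_{n \times n}(\Bbb{R})$, so it only remains to check that they are linearly independent.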
