Linear Algebra – How to Triangularize Two Commutative Linear Transformations

linear-algebra · linear-transformations · transformation · vector-spaces

Prove that two commutative linear transformations on a finite-dimensional vector space $V$ over an algebraically closed field can be simultaneously triangularized.

Equivalently, show that if two linear transformations $A$ and $B$ satisfy $AB=BA$, then there exists a basis $X$ such that both $[A:X]$ and $[B:X]$ are triangular.

Hint: Find a subspace $W$ of $V$ that is invariant under both $A$ and $B$. With this in mind, consider any proper value $\lambda$ of $A$ and examine the set of all solutions of $Ax=\lambda x$ as a candidate for $W$.

Notation: Let $A$ be a linear transformation on an $n$-dimensional vector space, and let $X=\{x_1,x_2,\dots,x_n\}$ be a basis of that space. Then $[A]=[A:X]=(a_{ij})$ denotes the matrix of $A$ in the coordinate system $X$, so that $$Ax_{j}=\sum_{i}a_{ij}x_{i}.$$
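
(A tiny numerical illustration of this index convention, not from Halmos and purely for concreteness, with the standard basis of $\mathbb{R}^2$ standing in for $X$: the $j$-th column of $[A:X]$ lists the coordinates $a_{ij}$ of $Ax_j$.)

```python
import numpy as np

# Illustrative only: the standard basis of R^2 plays the role of X = {x_1, x_2},
# and A is an arbitrary example matrix.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
x1, x2 = np.eye(2)          # basis vectors x_1, x_2
print(A @ x2)               # [1. 3.], i.e. A x_2 = 1*x_1 + 3*x_2 (column 2 of [A:X])
```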

Source: Halmos, Finite-Dimensional Vector Spaces, Section 56 (Triangular Form), Exercise 2.

Could anyone please help? I have no idea how to do this. Any help is appreciated. Thanks a lot.

Best Answer

Here is another (more matrix-heavy) way of proving it:

Outline:

Step 0: Every matrix $T$ associated with a linear operator on a finite-dimensional vector space over an algebraically closed field has at least one eigenpair $(\lambda, v)$.

Step 1: If $AB=BA$, then $A$ and $B$ have a common eigenvector.

Step 2: By induction on the dimension $n$, we conclude the claim using the previous steps.

Proof of Steps:

Step 0: $\det(T-\lambda I)$ is a polynomial with coefficients in the algebraically closed field and thus has a root, which means there is at least one eigenvalue $\lambda$ and an associated eigenvector $v$ of $T$.
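
As a quick numerical sanity check (my own example, with $\mathbb{C}$ as the algebraically closed field): `numpy.linalg.eig` computes the eigenvalues, i.e. the roots of $\det(T-\lambda I)$, together with associated eigenvectors, so even a real matrix with no real eigenvalues yields an eigenpair over $\mathbb{C}$.

```python
import numpy as np

# Rotation by 90 degrees: no real eigenvalues, but complex ones exist.
T = np.array([[0.0, -1.0],
              [1.0,  0.0]])
eigenvalues, eigenvectors = np.linalg.eig(T)
lam, v = eigenvalues[0], eigenvectors[:, 0]
print(np.allclose(T @ v, lam * v))  # True: (lam, v) is an eigenpair of T
```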

Step 1: Consider an eigenpair $(\lambda,v)$ of $A$, which exists by Step 0, so that $v \in \text{null}(A-\lambda I)$. For every $u \in \text{null}(A-\lambda I)$ it holds that $(A-\lambda I) Bu = B (A-\lambda I) u = 0$, so $\text{null}(A-\lambda I)$ is invariant under $B$. Thus, the restriction of $B$ to $\text{null}(A-\lambda I)$ is a linear operator on $\text{null}(A-\lambda I)$, and by applying Step 0 again we know there exists an eigenvector $\tilde{v}$ of $B$ in $\text{null}(A-\lambda I)$. Since $\tilde{v}$ lies in $\text{null}(A-\lambda I)$, it is also an eigenvector of $A$ (with eigenvalue $\lambda$), and therefore $\tilde{v}$ is a common eigenvector of $A$ and $B$.
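
Here is a small numerical sketch of Step 1 (the example matrices are my own, chosen only so that $AB=BA$ holds, and the orthonormal null-space basis is just a convenient way to write down the restriction): restrict $B$ to $\text{null}(A-\lambda I)$ and lift an eigenvector of the restriction back to the ambient space.

```python
import numpy as np
from scipy.linalg import null_space

# Example commuting pair: A acts as 2*I on span{e1, e2}, so any upper-left
# block of B commutes with it; the last coordinate decouples.
A = np.diag([2.0, 2.0, 5.0])
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 4.0]])
assert np.allclose(A @ B, B @ A)

lam = np.linalg.eigvals(A)[0]           # some eigenvalue of A (Step 0)
N = null_space(A - lam * np.eye(3))     # orthonormal columns spanning null(A - lam*I)
B_restr = N.conj().T @ B @ N            # B restricted to that B-invariant subspace
mu, W = np.linalg.eig(B_restr)          # eigenpair of the restriction (Step 0 again)
v = N @ W[:, 0]                         # common eigenvector of A and B
print(np.allclose(A @ v, lam * v), np.allclose(B @ v, mu[0] * v))  # True True
```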

Step 2:

Induction basis: For $1\times 1$ matrices the claim holds trivially, since every $1\times 1$ matrix is triangular.

Induction step: Let $A$ and $B$ be $n\times n$ matrices over the algebraically closed field with $AB=BA$. From Step 1 we know they have a common eigenvector $v$. Choose a basis $\{v,w_1,w_2, ...,w_{n-1}\}$ and transform both matrices with respect to this basis to get $\tilde{A}$, $\tilde{B}$, which look like this: \begin{align} \tilde{A} = \begin{bmatrix} \lambda_1& a^T_1\\0&\tilde{A}_1\end{bmatrix} \end{align}

\begin{align} \tilde{B} = \begin{bmatrix} \lambda_2& b^T_1\\0&\tilde{B}_1\end{bmatrix} \end{align}

Now since $AB=BA$, and both matrices are conjugated by the same change-of-basis matrix, we equivalently have $\tilde{A}\tilde{B}=\tilde{B}\tilde{A}$, which gives us the following, using the block structure above: \begin{align} \begin{bmatrix} \lambda_2 \lambda_1& \lambda_2 a^T_1+b^T_1\tilde{A}_1\\0&\tilde{B}_1\tilde{A}_1\end{bmatrix} = \begin{bmatrix} \lambda_1 \lambda_2& \lambda_1 b^T_1+a^T_1\tilde{B}_1\\0&\tilde{A}_1\tilde{B}_1\end{bmatrix} \end{align} This implies $\tilde{A}_1\tilde{B}_1=\tilde{B}_1\tilde{A}_1$ for the lower-dimensional $(n-1)\times(n-1)$ blocks. By the induction hypothesis, you can find a new set of vectors $\{\tilde{w}_1,\tilde{w}_2, ...,\tilde{w}_{n-1}\}$ in the span of $\{w_1,w_2, ...,w_{n-1}\}$ which makes $\tilde{A}_1,\tilde{B}_1$ simultaneously triangular, and thus $\{v,\tilde{w}_1,\tilde{w}_2, ...,\tilde{w}_{n-1}\}$ makes the original $\tilde{A},\tilde{B}$ (and hence $A$, $B$) simultaneously triangular.
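
To make the induction concrete, here is a rough numerical sketch of the whole construction (the function names and the test matrices are mine, and the code makes no claim of numerical robustness for nearly defective matrices): it peels off a common eigenvector, changes basis so both matrices take the block form above, and recurses on the $(n-1)\times(n-1)$ blocks.

```python
import numpy as np
from scipy.linalg import null_space

def common_eigenvector(A, B):
    # Step 1: eigenvector of B restricted to an eigenspace of A.
    lam = np.linalg.eigvals(A)[0]
    N = null_space(A - lam * np.eye(A.shape[0]))   # B-invariant subspace
    _, W = np.linalg.eig(N.conj().T @ B @ N)
    return N @ W[:, 0]

def simultaneously_triangularize(A, B):
    """Return S such that S^{-1} A S and S^{-1} B S are upper triangular."""
    n = A.shape[0]
    if n == 1:                                     # induction basis
        return np.eye(1, dtype=complex)
    v = common_eigenvector(A, B)
    # Extend v to a basis {v, w_1, ..., w_{n-1}}; QR keeps it orthonormal.
    Q, _ = np.linalg.qr(np.column_stack([v, np.eye(n)]).astype(complex))
    A1, B1 = Q.conj().T @ A @ Q, Q.conj().T @ B @ Q   # the block forms above
    P = simultaneously_triangularize(A1[1:, 1:], B1[1:, 1:])  # induction step
    S = np.eye(n, dtype=complex)
    S[1:, 1:] = P
    return Q @ S

# Quick check on a commuting pair (B is a polynomial in A, hence AB = BA).
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 5.0]])
B = A @ A - 3.0 * A
S = simultaneously_triangularize(A, B)
for M in (A, B):
    T = np.linalg.inv(S) @ M @ S
    print(np.allclose(np.tril(T, -1), 0))          # True, True
```

The QR step is just a convenient way to extend $v$ to a basis $\{v,w_1, ...,w_{n-1}\}$; for the proof itself, any completion of $v$ to a basis works.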