Simultaneous Diagonalizability of Multiple Commuting Matrices – Linear Algebra

Tags: diagonalization, eigenvalues-eigenvectors, linear-algebra, linear-transformations, matrices

I know that for two given diagonalizable matrices $A_1$ and $A_2$, they commute if and only if they are simultaneously diagonalizable. I was wondering if a similar condition held for multiple pairwise commuting matrices.

Specifically, if we have a list of diagonalizable matrices $A_1, \cdots, A_n$ and $A_i$ commutes with $A_j$ for all $1 \leq i, j \leq n$, then does there exist a simultaneous eigenbasis of all the $A_i$? That is, does there exist $S$ such that $S A_i S^{-1}$ is diagonal for all $i$? If this is not in general true, what kinds of non-trivial conditions are sufficient to make such a statement true?

Best Answer

The answer is yes: a collection of commuting diagonalisable matrices admits a basis which is an eigenbasis of each matrix. To see why, it is best to think in terms of linear operators and eigenspaces.

Let's say we have three linear operators $A, B, C \colon V \to V$ on a finite-dimensional vector space $V$, which pairwise commute and are each diagonalisable. For each $\lambda$, let $V(\lambda) = \{v \in V \mid Av = \lambda v\}$ be the $\lambda$-eigenspace of $A$. If $\lambda \neq \mu$ it is easy to prove that $V(\lambda) \cap V(\mu) = \{0\}$, and since $A$ is diagonalisable we have $$ V = \bigoplus_{\lambda \in K} V(\lambda),$$ where $K$ is the field you are working over (for example $\mathbb{R}$ or $\mathbb{C}$). Note that the sum above looks infinite, but there are only finitely many $\lambda$ such that $V(\lambda) \neq 0$, so it is in fact a finite sum. You should think of the operator $A$ as cutting $V$ up into pieces, each piece labelled by its eigenvalue.
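If you like to see this concretely, here is a rough NumPy sketch of that decomposition. The `eigenspaces` helper and its eigenvalue-clustering tolerance are my own ad hoc choices for illustration, not anything canonical, and the clustering is only a numerical convenience:

```python
import numpy as np

def eigenspaces(A, tol=1e-9):
    """Group the eigenvectors of a diagonalizable matrix A by eigenvalue,
    returning {eigenvalue: matrix whose columns span V(lambda)}."""
    eigvals, eigvecs = np.linalg.eig(A)
    groups = {}
    for lam, v in zip(eigvals, eigvecs.T):   # columns of eigvecs are eigenvectors
        # Cluster eigenvalues that agree up to the tolerance (a numerical crutch).
        key = next((k for k in groups if abs(k - lam) < tol), lam)
        groups.setdefault(key, []).append(v)
    return {lam: np.column_stack(vs) for lam, vs in groups.items()}

# Example: eigenvalue 1 has a 2-dimensional eigenspace, eigenvalue 2 a 1-dimensional one.
A = np.diag([1.0, 1.0, 2.0])
for lam, basis in eigenspaces(A).items():
    print(lam, basis.shape)   # the dimensions of the V(lambda) add up to dim V
```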

The inductive step is this: since $B$ commutes with $A$, we have $B(V(\lambda)) \subseteq V(\lambda)$, because if $v \in V(\lambda)$ then $$ A(Bv) = B(Av) = B(\lambda v) = \lambda (Bv). $$ Therefore $B$ restricts to a linear operator on each $V(\lambda)$, and the same goes for $C$. So for each eigenvalue $\lambda$ we have operators $B|_{V(\lambda)} \colon V(\lambda) \to V(\lambda)$ and $C|_{V(\lambda)} \colon V(\lambda) \to V(\lambda)$, which are commuting diagonalisable operators on the subspace $V(\lambda)$. (They are still diagonalisable because the minimal polynomial of a restriction divides the minimal polynomial of the original operator, which is a product of distinct linear factors.)

Now applying the same logic as above, the operator $B|_{V(\lambda)}$ cuts the space $V(\lambda)$ up into pieces, each labelled by an eigenvalue of $B$. Let's name these: $$ \begin{aligned} V(\lambda, \mu) &= \{ v \in V(\lambda) \mid Bv = \mu v \} \\ &= \{v \in V \mid Av = \lambda v \text{ and } Bv = \mu v \}. \end{aligned}$$ Since $B|_{V(\lambda)}$ is diagonalisable these pieces exhaust $V(\lambda)$, so we have $$ V(\lambda) = \bigoplus_{\mu \in K} V(\lambda, \mu), $$ and by putting all of the $V(\lambda)$ back together we get $$ V = \bigoplus_{\lambda, \mu \in K} V(\lambda, \mu). $$

Now the fact that $C$ commutes with both $A$ and $B$ means that $C$ preserves each simultaneous eigenspace $V(\lambda, \mu)$, so we do the same thing once more. It should be clear that as long as you have finitely many linear operators, you can carry this process out to completion. For our $A, B, C$ we get the decomposition $$V = \bigoplus_{\lambda, \mu, \nu \in K} V(\lambda, \mu, \nu),$$ where $V(\lambda, \mu, \nu)$ consists of all vectors $v$ for which $Av = \lambda v$, $Bv = \mu v$, and $Cv = \nu v$. Letting $\lambda$ range over the finitely many eigenvalues of $A$, and similarly for $\mu$ and $\nu$, gives all the simultaneous eigenspaces; if you want an eigenbasis, just choose any basis of each $V(\lambda, \mu, \nu)$ and take their union.
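And here is a rough NumPy sketch of the whole refinement, reusing the `eigenspaces` helper from the previous sketch. It assumes the inputs really are pairwise commuting and diagonalisable, and the pseudoinverse trick for restricting an operator to a block is only a numerical convenience; `simultaneous_eigenbasis` is a name I made up:

```python
import numpy as np
# Assumes the eigenspaces(A, tol) helper from the previous sketch is in scope.

def simultaneous_eigenbasis(matrices):
    """Return S whose columns form a simultaneous eigenbasis of the given
    pairwise commuting diagonalizable matrices, by repeatedly refining the
    block decomposition exactly as described above."""
    n = matrices[0].shape[0]
    blocks = [np.eye(n)]                      # start with a single block: V itself
    for A in matrices:
        refined = []
        for W in blocks:                      # columns of W span a simultaneous eigenspace so far
            M = np.linalg.pinv(W) @ A @ W     # matrix of A restricted to span(W)
            refined.extend(W @ C for C in eigenspaces(M).values())
        blocks = refined                      # the blocks V(lambda), then V(lambda, mu), ...
    return np.column_stack(blocks)

# Usage: A has a repeated eigenvalue, B splits that eigenspace, C commutes with both.
A = np.diag([1.0, 1.0, 2.0])
B = np.array([[0.0, 1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 5.0]])
C = A @ B
S = simultaneous_eigenbasis([A, B, C])
for M in (A, B, C):
    D = np.linalg.inv(S) @ M @ S
    assert np.allclose(D, np.diag(np.diag(D)), atol=1e-8)   # S^{-1} M S is diagonal
```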
