[Math] An operator is semi-simple iff it is diagonalizable

diagonalization, eigenvalues-eigenvectors, linear algebra

I was reading about direct sums when I came across this Wikipedia page:

A linear operator T on a finite-dimensional vector space is semi-simple if every T-invariant subspace has a complementary T-invariant subspace.

An important result regarding semi-simple operators is that a linear operator on a finite-dimensional vector space over an algebraically closed field is semi-simple if and only if it is diagonalizable.

I thought about how we could prove it, and I am following what it says on the page:

This is because such an operator always has an eigenvector; if it is, in addition, semi-simple, then it has a complementary invariant hyperplane, which itself has an eigenvector, and thus by induction is diagonalizable. Conversely, diagonalizable operators are easily seen to be semi-simple, as invariant subspaces are direct sums of eigenspaces, and any basis for this space can be extended to an eigenbasis.

I understand that we can write $T=\bigoplus_i T_{\lambda_i}$ where $T_{\lambda_i}=T|_{U_{\lambda_i}}$ and ${U_{\lambda_i}}$ is the eigenspace spanned by one eigenvector, but why does that imply that $T$ is diagonalizable?

Best Answer

We denote the algebraically closed ground field by $k$ and the given finite-dimensional $k$-vector space by $V$. There are at least two (equivalent) ways to use induction:


We can construct an increasing chain of subspaces $U_i \subseteq U_{i+1}$ of $V$ which are spanned by eigenvectors of $T$ as follows:

  • Since $k$ is algebraically closed there exists an eigenvector $v_1 \in V$ for $T$, so $U_1 := \langle v_1 \rangle$ is a $T$-invariant subspace of $V$. Since $T$ is semisimple there exists a $T$-invariant subspace $W_1 \subseteq V$ with $V = U_1 \oplus W_1$.

  • Since $k$ is algebraically closed there exists an eigenvector $v_2 \in V$ for the restriction $T|_{W_1}$. This is also an eigenvector for $T$, so $U_2 := \langle v_1, v_2 \rangle$ is a $T$-invariant subspace of $V$. Since $T$ is semisimple there exists a $T$-invariant subspace $W_2 \subseteq V$ with $V = U_2 \oplus W_2$.

By continuing this process we get decompositions \begin{align*} V = U_1 \oplus W_1 = U_2 \oplus W_2 = U_3 \oplus W_3 = \dotsb \end{align*} where the summands $U_i$ are given by $U_i = \langle v_1, v_2, \dotsc, v_i \rangle$ with $v_1, v_2, v_3, \dotsc \in V$ being eigenvectors of $T$. Since $v_{i+1} \in W_i$ and $U_i \cap W_i = 0$, the non-zero vector $v_{i+1}$ does not lie in $U_i = \langle v_1, \dotsc, v_i \rangle$; it follows that these eigenvectors are linearly independent, so $\dim U_i = i$.

It follows that the above process terminates after $n = \dim V$ many steps with $U_n = V$ and $W_n = 0$. Then $v_1, \dotsc, v_n$ is a basis of $V$ consisting of eigenvectors of $T$.
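As a concrete illustration, here is a minimal sympy sketch of this construction on $\mathbb{Q}^3$ (the matrix below is a hypothetical example, not taken from the question): pick one eigenvector, span a $T$-invariant complement by the remaining eigenvectors, and verify the direct sum decomposition.

```python
import sympy as sp

# A hypothetical diagonalizable operator on Q^3.
T = sp.Matrix([[2, 1, 0],
               [0, 3, 0],
               [0, 0, 3]])

# Collect the eigenvectors; eigenvects() returns (eigenvalue, multiplicity, basis).
vectors = [v for _, _, basis in T.eigenvects() for v in basis]

v1, rest = vectors[0], vectors[1:]
U1 = sp.Matrix.hstack(v1)       # U_1 = <v_1>
W1 = sp.Matrix.hstack(*rest)    # candidate T-invariant complement W_1

# W_1 is T-invariant: T maps each spanning vector back into W_1.
for j in range(W1.cols):
    assert sp.Matrix.hstack(W1, T * W1.col(j)).rank() == W1.rank()

# V = U_1 (+) W_1: together the spanning vectors have full rank.
assert sp.Matrix.hstack(U1, W1).rank() == T.rows
```

Repeating the same two steps inside $W_1$ produces $v_2$ and $W_2$, and so on, exactly as in the chain above.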


For the second proof we will need the following lemma:

Lemma: Let $T \colon V \to V$ be semisimple and let $U \subseteq V$ be a $T$-invariant subspace. Then the restriction $T|_U$ is also semisimple.

Proof: Let $W \subseteq U$ be a $T|_U$-invariant subspace. Then $W$ is also a $T$-invariant subspace of $V$, so there exists some $T$-invariant subspace $W' \subseteq V$ with $V = W \oplus W'$. Then $W'' := W' \cap U$ is a $T$-invariant subspace of $U$.

We then have that $U = W \oplus W''$, which can be seen as follows: We have that $$ W \cap W'' = W \cap W' \cap U = 0 $$ since $W \cap W' = 0$. To see that $U = W + W''$ note that we can write $u \in U$ as $u = w + w'$ with $w \in W$ and $w' \in W'$ since $V = W + W'$. Then $w' = u - w \in U + W \subseteq U$ (here we use that $W$ is a subspace of $U$), so $w' \in W' \cap U = W''$.
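To see the lemma in action, here is a small sympy check (again with the same hypothetical example matrix): we restrict a diagonalizable $T$ to an invariant plane $U$ and confirm that the restriction is itself diagonalizable, i.e. semisimple.

```python
import sympy as sp

T = sp.Matrix([[2, 1, 0],
               [0, 3, 0],
               [0, 0, 3]])

# U = <(1,0,0), (1,1,0)> is T-invariant (both columns are eigenvectors of T).
B = sp.Matrix([[1, 1],
               [0, 1],
               [0, 0]])

# Matrix A of the restriction T|_U in the basis B, i.e. T * B = B * A.
A = B.pinv() * T * B
assert T * B == B * A          # sanity check: the coordinates are exact
assert A.is_diagonalizable()   # T|_U is again semisimple
```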

We now proceed by induction on $\dim V$. For $\dim V = 0$ there is nothing to do.

If $\dim V \geq 1$ then there exists an eigenvector $v_1 \in V$ for $T$ since $k$ is algebraically closed. Then $\langle v_1 \rangle$ is a $T$-invariant subspace, so there exists a $T$-invariant subspace $W \subseteq V$ such that $V = \langle v_1 \rangle \oplus W$. Note that $\dim W = \dim V - 1$ since the eigenvector $v_1$ is necessarily non-zero.

The restriction $T|_W$ is also semisimple by the above lemma. By induction there exists a basis $v_2, \dotsc, v_n$ of $W$ consisting of eigenvectors of $T|_W$. These are also eigenvectors of $T$, and since $V = \langle v_1 \rangle \oplus W$ we find that $v_1, v_2, \dotsc, v_n$ is a basis of $V$.
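The end result of the induction is a basis of eigenvectors; in sympy this is exactly what diagonalize() produces (same hypothetical matrix as in the sketches above):

```python
import sympy as sp

T = sp.Matrix([[2, 1, 0],
               [0, 3, 0],
               [0, 0, 3]])

# diagonalize() returns an eigenbasis P and the diagonal matrix D,
# i.e. the conclusion of the induction: P^{-1} T P is diagonal.
P, D = T.diagonalize()
assert P.inv() * T * P == D
```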


Regarding your question:

I understand that we can write $T = \bigoplus_i T_{\lambda_i}$ where $T_{\lambda_i} = T|_{U_{\lambda_i}}$ and $U_{\lambda_i}$ is the eigenspace spanned by one eigenvector, but why does that imply that $T$ is diagonalizable?

I assume that $U$ is the given vector space (which I denoted by $V$) and that $U_{\lambda}$ is the eigenspace of $T$ with respect to the eigenvalue $\lambda$.

If the quoted situation were to happen, then one can pick for each one-dimensional space $U_{\lambda_i}$ some non-zero vector $u_i \in U_{\lambda_i}$. Since $U_{\lambda_i}$ is one-dimensional we have that $U_{\lambda_i} = \langle u_i \rangle$, so $T u_i \in U_{\lambda_i}$ is a scalar multiple of $u_i$ (from your notation I assume that this scalar should be $\lambda_i$). The eigenvectors $(u_i)_i$ form a basis since $T = \bigoplus_i T_{\lambda_i}$.

Note, however, that the above proofs (as well as the one given on Wikipedia, which is essentially the second one) do not give the decomposition $T = \bigoplus_{\lambda \in k} T_\lambda$, where $T_\lambda = T|_{U_\lambda} = \lambda \operatorname{id}_{U_\lambda}$, but instead a decomposition $T = T_1 \oplus \dotsb \oplus T_n$, where $T_i = T|_{U_i}$ for one-dimensional subspaces $U_1, \dotsc, U_n \subseteq V$. You seem to mix up these two decompositions; they coincide if and only if every non-trivial eigenspace is one-dimensional, which is not necessarily the case.
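The hypothetical example matrix from the sketches above shows the difference: the eigenvalue $3$ has a two-dimensional eigenspace, so the eigenspace decomposition has two summands while the decomposition into lines has three.

```python
import sympy as sp

T = sp.Matrix([[2, 1, 0],
               [0, 3, 0],
               [0, 0, 3]])

# One summand T_lambda per eigenvalue; the eigenspace for 3 is 2-dimensional.
for lam, _, basis in T.eigenvects():
    print(f"eigenvalue {lam}: eigenspace of dimension {len(basis)}")
# eigenvalue 2: eigenspace of dimension 1
# eigenvalue 3: eigenspace of dimension 2
```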
