[Math] Existence of eigenvalues for self-adjoint maps in finite-dimensional inner product spaces

Tags: inner-products, linear-algebra

For a finite-dimensional inner product space over $\mathbb{C}$, it is clear that every linear transformation is diagonalisable. In my lecture notes, the lecturer claims that:

For a finite-dimensional inner product space, every self-adjoint transformation has at least one eigenvector.

This fact is then used in a proof of the Spectral Theorem for Hermitian matrices.

Over $\mathbb{R}$ for example, we have – in general – no reason to assume that a linear map has an eigenvalue, so why is it the case that a self-adjoint map must have one?
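
For instance, the rotation of $\mathbb{R}^2$ by a quarter turn $$ R=\pmatrix{0&-1\\1&0},\qquad \chi_R(X)=\det(XI-R)=X^2+1, $$ has no real eigenvalue at all, since $X^2+1$ has no real root.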

Best Answer

Note: over a finite-dimensional (nonzero) complex vector space, every linear transformation has at least one eigenvalue, since the characteristic polynomial splits. It follows by induction that every such transformation is upper triangularizable. But not every linear transformation, even over $\mathbb{C}$, is diagonalizable. For example, consider the Jordan block $$ T=\pmatrix{0&1\\0&0}. $$
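
One can check directly that this $T$ is not diagonalizable: $$ \chi_T(X)=X^2,\qquad \ker T=\operatorname{span}\left\{\pmatrix{1\\0}\right\}, $$ so the only eigenvalue is $0$ and its eigenspace is one-dimensional; there is no basis of $\mathbb{C}^2$ consisting of eigenvectors of $T$.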

Key point: for a self-adjoint transformation, every eigenvalue is actually real. Indeed, if $T^*=T$ and $Tx=\lambda x$ with $x\neq 0$, then $$ \lambda (x,x)=(x,\lambda x)=(x,Tx)=(Tx,x)=(\lambda x,x)=\overline{\lambda}(x,x)\quad\Rightarrow\quad \lambda=\overline{\lambda}, $$ where the inner product is taken to be antilinear (conjugate-linear) in the first variable, and the last implication uses that we may divide by $(x,x)>0$ since $x\neq 0$.
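
For a concrete sanity check, take the Hermitian matrix $$ A=\pmatrix{2&i\\-i&2},\qquad \det(A-\lambda I)=(2-\lambda)^2-1, $$ whose eigenvalues $1$ and $3$ are indeed real, even though $A$ has non-real entries.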

Conclusion: it suffices to prove the result for matrices, by passing to the matrix $A$ of $T$ in an orthonormal basis. Since the characteristic polynomial of $A$ splits over $\mathbb{C}$ and every root is real, it splits over $\mathbb{R}$. And the characteristic polynomial of $A$ is the same whether you regard $A$ as an element of $M_n(\mathbb{R})$ or as an element of $M_n(\mathbb{C})$. So the characteristic polynomial of $T$ splits over $\mathbb{R}$. In general, for a linear transformation $T$ of a finite-dimensional $K$-vector space, every root $\lambda\in K$ of the characteristic polynomial in $K[X]$ is an eigenvalue of $T$, and conversely: both conditions are equivalent to $T-\lambda \mbox{Id}$ not being invertible, via the determinant on the root side and via the rank–nullity theorem on the eigenvalue side.
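
In symbols, the chain of equivalences is $$ \chi_T(\lambda)=\det(\lambda\,\mbox{Id}-T)=0 \;\iff\; T-\lambda\,\mbox{Id}\ \mbox{not invertible} \;\iff\; \ker(T-\lambda\,\mbox{Id})\neq\{0\}, $$ where the first step is the determinant criterion for invertibility and the second is rank–nullity: in finite dimension, $T-\lambda\,\mbox{Id}$ is injective if and only if it is bijective.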

Another way to reach the conclusion: again take $A$ the matrix of $T$ in an orthonormal basis. Then $T$ is self-adjoint if and only if $A^*=A$, i.e. $A$ is Hermitian (equal to its conjugate transpose). Over $\mathbb{R}$, this just says $A^T=A$ (i.e. $A$ is symmetric, equal to its transpose). But we can regard $A$, even if it lies in $M_n(\mathbb{R})$, as an element of $M_n(\mathbb{C})$. So it has at least one eigenpair $(\lambda,x)\in\mathbb{C}\times(\mathbb{C}^n\setminus\{0\})$. But as we have just shown, $\lambda$ must be real. Therefore, writing $x=y+iz$ where $y,z\in\mathbb{R}^n$ are the real and imaginary parts of $x$, and comparing real and imaginary parts (legitimate since $A$ and $\lambda$ are real), we get $$ Ay+iAz=A(y+iz)=Ax=\lambda x=\lambda (y+iz)=\lambda y+i\lambda z\quad \Rightarrow \quad Ay=\lambda y,\quad Az=\lambda z. $$ Since $x\neq 0$, one of $y,z$ must be nonzero, giving a real eigenpair for $A$, hence for $T$, translating back from the matrix to the operator.
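
To see this last step in action on a small example, take the real symmetric matrix $$ A=\pmatrix{0&1\\1&0},\qquad x=\pmatrix{1+i\\1+i},\qquad Ax=x\quad(\lambda=1); $$ here $y=z=\pmatrix{1\\1}$, and indeed $Ay=y$ and $Az=z$, so either one is a real eigenvector.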