Proof that the eigenvectors span the eigenspace for normal operators.

linear-algebra, spectral-theory

I am trying to understand the proof of the spectral decomposition theorem. In the book by Sadri Hassani, the author shows that for any normal operator, say $A$, each eigenspace, say $M_\lambda$, reduces the operator. Then he shows that distinct eigenspaces are orthogonal to each other. He then takes the direct sum of these eigenspaces, say $M$, and shows that it also reduces $A$. Then he says that we can treat $M^\perp$ separately. He then proves that $M^\perp$ must be $\{0\}$, because every operator on a finite-dimensional complex space has at least one eigenvector, and all the eigenvectors are already contained in the eigenspaces. The outline of the proof is attached below:
[Image: spectral decomposition proof outline]

My question is: why can't $M^\perp$ have an eigenvector of its own after the reduction? I can sort of understand that if you take the existing eigenvectors as a basis and add enough basis vectors to complete the set, then in the new basis the matrix is block diagonal (due to the reduction), and if you find an eigenvector of the $M^\perp$ block, then you can construct an eigenvector for the whole matrix by taking the entries corresponding to the $M$ space to be $0$. To show that this is not possible, we would have to prove that the eigenvalue found this way is the same as one of the eigenvalues of the original matrix (without the basis transformation), i.e., that eigenvalues don't change under a basis transformation. So I was wondering: do we need this additional proof to complete the theorem, is the theorem complete without it, or is there a flaw in my logic?

Best Answer

I think you are almost there, but you don't need to introduce a basis. Say $A$ acts on $\mathbb{C}^n$ (we work over $\mathbb{C}$ so that an eigenvector is guaranteed to exist). Then $M^\perp$ is a subspace of $\mathbb{C}^n$. We know $M$ reduces $A$, so the restriction of $A$ to $M^\perp$ maps $M^\perp$ to itself. Since we are in finite dimensions, if $M^\perp$ has positive dimension, then the restriction of $A$ to $M^\perp$ must have some eigenvector $v$. That is, there is a nonzero $v \in M^\perp \subset \mathbb{C}^n$ and a $\lambda \in \mathbb{C}$ such that

$$ Av = \lambda v. $$

But then $v \in \mathbb{C}^n$ is an eigenvector of $A$, hence we must have $v \in M$ by the definition of $M$. So $v \in M \cap M^\perp$, which forces $v = 0$, a contradiction.
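Since the argument hinges on $M$ already containing every eigenvector, a quick numerical sanity check may help. The following sketch (my own example, not the book's construction) takes a normal matrix and verifies that its eigenvectors span all of $\mathbb{C}^n$, so that $M^\perp = \{0\}$. Note the eigenvectors are complex even though the matrix is real, which is why we work over $\mathbb{C}$:

```python
import numpy as np

# A 2x2 rotation matrix: real and normal (A A* = A* A), but with no real
# eigenvectors. This is a hypothetical illustration, not from the book.
A = np.array([[0.0, -1.0],
              [1.0,  0.0]], dtype=complex)
assert np.allclose(A @ A.conj().T, A.conj().T @ A)  # A is normal

eigvals, eigvecs = np.linalg.eig(A)  # columns of eigvecs are eigenvectors

# The eigenvector matrix has full rank, so the eigenvectors span C^2:
# M is all of C^2 and M-perp is the zero subspace.
print(np.linalg.matrix_rank(eigvecs))  # 2
print(eigvals)                         # the eigenvalues are ±i, not real
```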

Does this help clear it up?

Also note that there are basis-independent characterizations of eigenvalues; e.g., $\lambda$ is an eigenvalue means that $A-\lambda I$ has a nontrivial null space. Also, eigenvectors themselves don't really change when you change basis. The basis-dependent representation looks different, but it's still the same vector; the only thing that has changed is how you write it down.
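This basis independence is easy to check numerically. In the sketch below (my own example: an upper-triangular $A$ whose eigenvalues can be read off the diagonal, and an arbitrary invertible change-of-basis matrix $P$), the spectrum of $A$ agrees with the spectrum of $P^{-1}AP$ up to ordering and floating-point round-off:

```python
import numpy as np

# Hypothetical example: A is upper triangular, so its eigenvalues
# (2, 3, 5) can be read off the diagonal.
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 3.0, 1.0],
              [0.0, 0.0, 5.0]])

P = np.array([[1.0, 1.0, 0.0],   # an invertible change-of-basis matrix (det = 1)
              [0.0, 1.0, 1.0],
              [0.0, 0.0, 1.0]])
B = np.linalg.inv(P) @ A @ P     # the same operator, written in the new basis

# The two spectra agree up to ordering and floating-point error.
ev_A = np.sort(np.linalg.eigvals(A))
ev_B = np.sort(np.linalg.eigvals(B))
print(np.allclose(ev_A, ev_B))  # True
```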

Trying to address the question in the comment: let $A_{M}$ and $A_{M^\perp}$ be the restrictions of $A$ to $M$ and $M^\perp$, respectively. We assume that $A_{M^\perp}$ has an eigenvector $v \in M^\perp$. That is, there is a $\lambda$ such that $$ A_{M^\perp} v = \lambda v. $$

Now $M^\perp$ is a subspace of $\mathbb{C}^n$, so $v \in \mathbb{C}^n$. Since $M$ and $M^\perp$ reduce $A$, we have $A = A_M \oplus A_{M^\perp}$. Since $v \in M^\perp$, we can write $v = 0 \oplus v$, and it follows that $$ A v = (A_M \oplus A_{M^\perp})v = A_M(0) \oplus A_{M^\perp}v = 0 \oplus \lambda v = \lambda v. $$ That is, $v$ is an eigenvector of $A$.

The whole point is that since $v$ lives in $M^\perp$, the action of $A$ on $v$ is the same as the action of $A_{M^\perp}$ on $v$. But we know that $v$ is an eigenvector of $A_{M^\perp}$, so it must also be an eigenvector of $A$.

This idea, that the action of $A$ on $M^\perp$ is the same as the action of $A_{M^\perp}$ on $M^\perp$, is essentially what it means for a subspace to be reducing. I can figure out what $A$ is doing on all of $\mathbb{C}^n$ just by looking at what $A$ does on $M$ and $M^\perp$.
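The direct-sum computation above can be mirrored numerically. In this sketch (a hypothetical block-diagonal $A$, not taken from the book), an eigenvector of $A_{M^\perp}$, padded with zeros on the $M$ coordinates, is an eigenvector of the full operator $A$ with the same eigenvalue:

```python
import numpy as np

# Hypothetical restrictions: A acts as A_M on M and as A_Mp on M-perp.
A_M = np.diag([2.0, 3.0])          # restriction of A to M
A_Mp = np.array([[0.0, 1.0],
                 [1.0, 0.0]])      # restriction of A to M-perp (eigenvalues ±1)

# The full operator is the direct sum A = A_M ⊕ A_Mp, i.e. block diagonal.
A = np.block([[A_M, np.zeros((2, 2))],
              [np.zeros((2, 2)), A_Mp]])

lam, V = np.linalg.eig(A_Mp)
v = V[:, 0]                        # an eigenvector of A_Mp with eigenvalue lam[0]

# Pad v with zeros on the M coordinates: w = 0 ⊕ v.
w = np.concatenate([np.zeros(2), v])

# w is then an eigenvector of the full operator A with the same eigenvalue.
print(np.allclose(A @ w, lam[0] * w))  # True
```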