[Math] If $A$ is normal, then $A(M)\subset M \Rightarrow A(M^{\perp})\subset M^{\perp}$.

Tags: inner-products, linear-algebra

If a linear transformation $A$ on a finite-dimensional unitary space $V$ is normal, then $A(M)\subset M$ implies $A(M^{\perp})\subset M^{\perp}$ for every subspace $M$ of $V$.

This problem was taken from Halmos's Finite Dimensional Vector Spaces (sec. 80).

I hope you can help me with a solution or a hint. Thanks.

Best Answer

Infinite-dimensional case: interestingly, this fails in infinite dimensions. Indeed, Sarason proved that if $A$ is a normal bounded linear operator on a Hilbert space $H$, then every closed invariant subspace $M$ of $A$ is a reducing subspace of $A$ (i.e. $M^\perp$ is invariant under $A$ as well) if and only if the weakly closed algebra generated by $A$ is a $*$-algebra.

Here are my two preferred ways of doing this. They are very different. Note that this can also be done by pure diagonalization, which is probably the most standard way; see here. That link also shows the converse: if every invariant subspace $M$ of $A$ is reducing, then $A$ is normal.

  • Method 1:

If $A(M)\subseteq M$, note that $A^k(M)\subseteq M$ for every $k\in\mathbb{N}$, whence $p(A)(M)\subseteq M$ for every $p\in\mathbb{C}[X]$.

Fact 1: $A$ is normal if and only if $A^*=p(A)$ for some polynomial $p\in\mathbb{C}[X]$.

Proof: the "if" direction is clear and not needed here. For the "only if", diagonalize $A$ in an orthonormal basis and use Lagrange interpolation to find a polynomial $p$ such that $p(\lambda)=\overline{\lambda}$ for every $\lambda$ in the spectrum of $A$. Then $A^*=p(A)$. $\Box$
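Fact 1 can be checked numerically. Here is a hedged sketch (the matrices and names are illustrative, not from the source): build a normal $A=QDQ^*$ with distinct eigenvalues, form the Lagrange polynomial $p$ with $p(\lambda)=\overline{\lambda}$ on the spectrum, and verify $p(A)=A^*$.

```python
import numpy as np

# Illustrative check of Fact 1: for normal A, A* = p(A) for the Lagrange
# interpolation polynomial p with p(lambda) = conj(lambda) on the spectrum.
rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
lam = np.array([1.0, 2.0j, -1.0 + 1.0j])   # distinct eigenvalues
A = Q @ np.diag(lam) @ Q.conj().T          # normal by construction

# p(A) = sum_i conj(lam_i) * prod_{j != i} (A - lam_j I) / (lam_i - lam_j)
I = np.eye(3, dtype=complex)
pA = np.zeros((3, 3), dtype=complex)
for i in range(3):
    term = np.conj(lam[i]) * I
    for j in range(3):
        if j != i:
            term = term @ (A - lam[j] * I) / (lam[i] - lam[j])
    pA += term

assert np.allclose(pA, A.conj().T)  # A* = p(A)
```

Note that this requires the interpolation nodes (the eigenvalues) to be distinct; for repeated eigenvalues one simply interpolates on the distinct values of the spectrum.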

So if $A$ is normal and $A(M)\subseteq M$, then $A^*(M)\subseteq M$.

Fact 2: for any $B\in L(V)$, $B(M)\subseteq M$ implies $B^*(M^\perp)\subseteq M^\perp$. And conversely.

Proof: this is just a routine verification. Take $x\in M^\perp$. Then for every $y\in M$, $(B^*x,y)=(x,By)=0$ since $By$ belongs to $M$. So $B^*x$ is orthogonal to $M$. The converse is just the property we have just proved with $B^*$ instead of $B$ and $M^\perp$ instead of $M$. $\Box$
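Fact 2 also admits a quick numerical sanity check; the following sketch (with illustrative choices of $M$ and $B$) encodes $B(M)\subseteq M$ as a block upper-triangular matrix and verifies that $B^*$ preserves $M^\perp$.

```python
import numpy as np

# Illustrative check of Fact 2: M = span(e1, e2) in C^4, B(M) ⊆ M.
rng = np.random.default_rng(1)
B = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
B[2:, :2] = 0                               # forces B(M) ⊆ M

P = np.diag([1, 1, 0, 0]).astype(complex)   # orthogonal projection onto M
Pperp = np.eye(4) - P                       # projection onto M⊥

# B*(M⊥) ⊆ M⊥  ⇔  P B* Pperp = 0
assert np.allclose(P @ B.conj().T @ Pperp, 0)
```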

Applying Fact 2 to $B=A^*$, and noting that $B^*=(A^*)^*=A$, yields $A(M^\perp)\subseteq M^\perp$. QED.


  • Method 2:

If you are familiar with the Peirce decomposition with respect to an orthogonal decomposition, you can skip ahead to the displayed equations.

We have an orthogonal decomposition $V=M\oplus M^\perp$. Denote by $P$ the projection onto $M$, and $P^\perp$ the projection onto $M^\perp$. Note that $P^\perp=I-P$.

Any $A\in L(V)$ can be represented by a $2\times 2$ matrix $$ A=\pmatrix{S&T\\U&V},\quad S=PAP,\ T=PAP^\perp,\ U=P^\perp AP,\ V=P^\perp AP^\perp $$ (by a slight abuse of notation, $V$ here denotes a block, not the space; the meaning will be clear from context). This is called the Peirce decomposition. Note that, for instance, $S$ can be identified with the restriction of $A$ to $M$ followed by the projection onto $M$.

Since we started with an orthogonal decomposition, $P$ and $P^\perp$ are self-adjoint idempotents. It follows that the representation of $A^*$ is $$ A^*=\pmatrix{S^*&U^*\\T^*&V^*}. $$
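This block structure of the adjoint can be verified numerically. The sketch below (illustrative names, arbitrary $A$ and $M$) extracts the four Peirce blocks with the projections and checks that the blocks of $A^*$ are the adjoints of the blocks of $A$, with the off-diagonal positions swapped.

```python
import numpy as np

# Illustrative check of the Peirce decomposition of the adjoint.
rng = np.random.default_rng(2)
A = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))

P = np.diag([1, 1, 0, 0, 0]).astype(complex)   # M = span(e1, e2)
Pperp = np.eye(5) - P

S, T = P @ A @ P, P @ A @ Pperp
U, V = Pperp @ A @ P, Pperp @ A @ Pperp

Astar = A.conj().T
assert np.allclose(P @ Astar @ P, S.conj().T)           # (1,1) block: S*
assert np.allclose(P @ Astar @ Pperp, U.conj().T)       # (1,2) block: U*
assert np.allclose(Pperp @ Astar @ P, T.conj().T)       # (2,1) block: T*
assert np.allclose(Pperp @ Astar @ Pperp, V.conj().T)   # (2,2) block: V*
```

The assertions simply restate $(PAP)^*=PA^*P$, $(PAP^\perp)^*=P^\perp A^*P$, etc., using the self-adjointness of $P$ and $P^\perp$.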

Now the assumption that $A(M)\subseteq M$ is equivalent to $U=0$, that is $P^\perp AP=0$. So we have

$$ A=\pmatrix{S&T\\0&V} \qquad A^*=\pmatrix{S^*&0\\T^*&V^*} $$

whence, by $2\times 2$ computation,

$$ AA^*=\pmatrix{SS^*+TT^*&TV^*\\VT^*&VV^*}\qquad A^*A=\pmatrix{S^*S&S^*T\\T^*S&T^*T+V^*V}. $$

So if we further assume that $A$ is normal, i.e. $A^*A=AA^*$, it follows in particular that

$$ TT^*=S^*S-SS^*\quad \Rightarrow \quad \mbox{tr}(TT^*)=\mbox{tr}(S^*S)-\mbox{tr}(SS^*)=0 $$

by commutativity of the trace. But $TT^*$ is positive semidefinite, so its eigenvalues are nonnegative and sum to zero, forcing $TT^*=0$. Then $T=0$, since $\|T^*x\|^2=(TT^*x,x)=0$ for every $x$. So the upper-right block of $A$ is null, which means that $A(M^\perp)\subseteq M^\perp$. QED.
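The whole statement can be tested end to end. A hedged sketch (illustrative construction): build a normal $A$ with an invariant subspace $M$ spanned by eigenvectors, then verify that both off-diagonal blocks $U=P^\perp AP$ and $T=PAP^\perp$ vanish, i.e. $M$ reduces $A$.

```python
import numpy as np

# Illustrative end-to-end check: for normal A, A(M) ⊆ M implies A(M⊥) ⊆ M⊥.
rng = np.random.default_rng(3)
Q, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))
D = np.diag([1.0, 1.0j, -2.0, 3.0 - 1.0j])
A = Q @ D @ Q.conj().T                     # normal by construction

# M = span of the first two eigenvectors (columns of Q), so A(M) ⊆ M
P = Q[:, :2] @ Q[:, :2].conj().T           # orthogonal projection onto M
Pperp = np.eye(4) - P

assert np.allclose(Pperp @ A @ P, 0)       # U = 0: A(M) ⊆ M
assert np.allclose(P @ A @ Pperp, 0)       # T = 0: A(M⊥) ⊆ M⊥
```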