Self-adjoint operator and symmetric operator

eigenvalues-eigenvectors, inner-products, linear-algebra, self-adjoint-operators, symmetric-matrices

We recently learned about self-adjoint operators, with the formal definition $\langle Tv, w\rangle = \langle v, Tw\rangle$ for every $v, w$ in $V$.

On the other hand, we saw that a self-adjoint operator can be represented by a symmetric matrix.

Can you explain the geometric interpretation of a symmetric operator (matrix), and what it means?

We also learned that a symmetric operator always has real eigenvalues. I understood the part about the eigenvalues being real, but why do such eigenvalues always exist?

Also, can you help me understand why every two columns of a symmetric matrix are orthogonal (for every pair of columns $C_1, C_2$ of a symmetric $A$, $\langle C_1, C_2 \rangle = 0$)? I understood the algebraic proof, but I would be happy to have some geometric intuition.

And finally, what is the connection between the eigenvalues and eigenvectors of a symmetric matrix $A$ and the linear operator that $A$ represents? (We learned that it is somehow related to the directions in which the operator scales/squeezes the plane.)

thank you

Best Answer

Geometrically, it's probably best to think about self-adjoint operators in terms of their eigenspaces. An operator on a finite-dimensional inner product space is self-adjoint if and only if its eigenvalues are real and its eigenspaces are orthogonal and sum (directly) to the whole space.

Having real eigenvalues means, roughly, that there can't be any kind of rotation happening in any plane. Along each of the orthogonal eigenspaces, the operator can only stretch, shrink, and/or reflect.
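If a concrete check helps, here is a minimal numerical sketch (using NumPy, with an arbitrary symmetric matrix I made up) showing that a symmetric matrix has real eigenvalues and an orthonormal basis of eigenvectors, i.e. $A = Q \Lambda Q^T$:

```python
import numpy as np

# An arbitrary real symmetric matrix (made up for illustration).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

# eigh is meant for symmetric/Hermitian matrices: it returns real
# eigenvalues and orthonormal eigenvectors (the columns of Q).
eigvals, Q = np.linalg.eigh(A)

print(eigvals)                                      # all real
print(np.allclose(Q.T @ Q, np.eye(3)))              # True: eigenvectors are orthonormal
print(np.allclose(Q @ np.diag(eigvals) @ Q.T, A))   # True: A = Q Λ Qᵀ
```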

Here are some examples, along with geometric reasoning for why they are or are not self-adjoint:

Rotations in a plane

As stated before, there can't really be rotations while remaining self-adjoint, as these produce complex eigenvalues (of modulus $1$, in fact).
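As a quick sanity check (a small sketch, with an arbitrary angle), the standard 2D rotation matrix is not symmetric, and its eigenvalues are a complex conjugate pair of modulus $1$:

```python
import numpy as np

theta = 0.7  # an arbitrary rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

print(np.allclose(R, R.T))           # False: a non-trivial rotation is not symmetric
print(np.linalg.eigvals(R))          # complex conjugate pair e^{±iθ}
print(np.abs(np.linalg.eigvals(R)))  # both eigenvalues have modulus 1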

Projections onto a line/plane/subspace by least distance

Yep! These are self-adjoint. In essence, we are decomposing the space into the space we are projecting onto (the range), and its orthogonal complement (the kernel). We are leaving the vectors in the range alone (i.e. multiplying them by $1$), and shrinking the vectors in the kernel to nothing (i.e. multiplying them by $0$).
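For instance (a made-up example, projecting onto the line spanned by a vector $u$), the orthogonal projection matrix is symmetric, with eigenvalue $1$ on the range and $0$ on the kernel:

```python
import numpy as np

u = np.array([[1.0], [2.0]])                 # spanning vector of the line (arbitrary choice)
P = u @ u.T / (u.T @ u)                      # orthogonal projection onto span{u}

print(np.allclose(P, P.T))                   # True: orthogonal projections are symmetric
print(np.round(np.linalg.eigvalsh(P), 10))   # eigenvalues 0 and 1
```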

Reflections, by least distance

Also self-adjoint. Rather than shrinking the orthogonal complement to nothing, we reflect it, multiplying its vectors by $-1$. The map is still self-adjoint, but it is no longer positive-(semi)definite.
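Continuing the same sketch, the reflection across that line is $R = 2P - I$: still symmetric, but with eigenvalues $+1$ and $-1$, so not positive semidefinite:

```python
import numpy as np

u = np.array([[1.0], [2.0]])
P = u @ u.T / (u.T @ u)        # orthogonal projection onto span{u}
R = 2 * P - np.eye(2)          # reflection across the line span{u}

print(np.allclose(R, R.T))     # True: still self-adjoint
print(np.linalg.eigvalsh(R))   # eigenvalues -1 and +1
```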

Projections onto one subspace, along a complementary subspace

This is a more general type of projection, which won't generally be self-adjoint, as the complementary subspace need not be orthogonal to the original subspace.
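To see the contrast numerically (again a made-up example), here is an oblique projection onto the $x$-axis along the direction $(1, 1)$; it is a genuine projection ($P^2 = P$), but its matrix is not symmetric:

```python
import numpy as np

# Projection onto the x-axis along the direction (1, 1):
# write (x, y) = a*(1, 0) + b*(1, 1) and keep only the a*(1, 0) part.
P_oblique = np.array([[1.0, -1.0],
                      [0.0,  0.0]])

print(np.allclose(P_oblique @ P_oblique, P_oblique))  # True: P² = P, so it is a projection
print(np.allclose(P_oblique, P_oblique.T))            # False: not self-adjoint
```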

Hope that helps!


EDIT: Regarding orthogonal eigenspaces, suppose that $T : V \to V$ is self-adjoint, and $v_1, v_2$ are eigenvectors for distinct eigenvalues $\lambda_1, \lambda_2$. We simply need to show $\langle v_1, v_2 \rangle = 0$.

To prove this, consider \begin{align*} \lambda_1 \langle v_1, v_2 \rangle &= \langle \lambda_1 v_1, v_2 \rangle \\ &= \langle Tv_1, v_2 \rangle \\ &= \langle v_1, Tv_2 \rangle \\ &= \langle v_1, \lambda_2 v_2 \rangle \\ &= \overline{\lambda_2} \langle v_1, v_2 \rangle \\ &= \lambda_2 \langle v_1, v_2 \rangle, \end{align*} where the last line uses the fact that $\lambda_2$ is real. Thus, we have $$(\lambda_1 - \lambda_2)\langle v_1, v_2 \rangle = 0 \implies \langle v_1, v_2 \rangle = 0$$ since $\lambda_1 - \lambda_2 \neq 0$.
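A quick numerical illustration of this fact (a sketch with an arbitrary symmetric matrix): eigenvectors belonging to distinct eigenvalues come out orthogonal.

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [1.0, 2.0]])        # an arbitrary symmetric matrix

eigvals, Q = np.linalg.eigh(A)
v1, v2 = Q[:, 0], Q[:, 1]          # eigenvectors for the two distinct eigenvalues

print(eigvals)                     # two distinct real eigenvalues
print(np.isclose(v1 @ v2, 0.0))    # True: ⟨v1, v2⟩ = 0
```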
