An insight into the concept of eigenvectors

Tags: eigenvalues-eigenvectors, linear-algebra, linear-transformations, matrices, proof-verification

I am trying to interpret the concept of eigenvalues and eigenvectors geometrically. In order to ensure that my thinking is headed in the right direction, I am writing this up. I want a verification of the following and some further insight into the topic.

---

Let us consider the effect of rotating an image through a certain angle $\theta$.

[Figure: an image rotated through the angle $\theta$]

To apply this 'rotational' effect to a vector $\begin{pmatrix} x \\ y \end{pmatrix}$ in $\mathbb{R}^2$ (the real plane), we use the matrix $A=\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos \theta \end{pmatrix}$. $\bigg[$If we apply this to every vector, we can regard the entire real plane as rotating counterclockwise.$\bigg]$
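As a quick numerical sanity check (a NumPy sketch, not part of the original argument; the angle $\theta = \pi/2$ is just an illustrative choice), one can verify that the counterclockwise rotation matrix sends $(1,0)$ to $(0,1)$:

```python
import numpy as np

theta = np.pi / 2  # rotate by 90 degrees

# Counterclockwise rotation matrix acting on column vectors
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

v = np.array([1.0, 0.0])  # the vector (1, 0)
w = A @ v                 # its image under the rotation

# (1, 0) rotated 90 degrees counterclockwise lands on (0, 1)
print(w)
```

Any nonzero vector can be substituted for `v`; none of them (other than through full turns) stays on its own line, which is the geometric point of the question.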

Intuitively, the only vector that is at a 'standstill' is the null vector. $\bigg[$By a vector in this context, we mean the directed line segment joining the point $(x,y)$ to the origin $(0,0)$.$\bigg]$

Now, by definition, an eigenvector $X$ has to be a non-null vector such that $AX=\lambda X$, for some $\lambda \in \mathbb{R}$.

We get the characteristic equation of $A$ to be $\lambda^2-2\lambda\cos \theta +1=0$, which has no $\mathbf{real \ root}$ for $\theta \neq n\pi,\ n\in \mathbb{Z}$, since its discriminant $4\cos^2\theta - 4$ is negative there.
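This can be spot-checked numerically (a NumPy sketch; $\theta = \pi/3$ is an arbitrary angle that is not a multiple of $\pi$): the computed eigenvalues are $\cos\theta \pm i\sin\theta$, a non-real conjugate pair satisfying the characteristic equation.

```python
import numpy as np

theta = np.pi / 3  # any angle that is not a multiple of pi
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

eigvals = np.linalg.eigvals(A)  # cos(theta) +/- i*sin(theta)

for lam in eigvals:
    # Each eigenvalue is a root of lambda^2 - 2*lambda*cos(theta) + 1 = 0 ...
    assert abs(lam**2 - 2 * lam * np.cos(theta) + 1) < 1e-12
    # ... and is non-real, as the negative discriminant predicts.
    assert abs(lam.imag) > 0
```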

Thus, $\forall \lambda \in \mathbb{R}$, $\operatorname{rank}(A-\lambda I_2)=2$, which implies that the only solution of the homogeneous system $(A-\lambda I_2)X=\begin{pmatrix} 0 \\ 0 \end{pmatrix}$ is the trivial one. $\bigg[$The sole reason for requiring $\lambda$ to be a real-valued scalar is to ensure that the 'amount' of stretching remains meaningful. For example, stretching a vector by a factor of $i$ won't make sense.$\bigg]$
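The rank claim can also be sampled numerically (a sketch; the chosen $\theta$ and the list of real $\lambda$ values are arbitrary illustrations, not a proof over all of $\mathbb{R}$):

```python
import numpy as np

theta = np.pi / 4
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
I2 = np.eye(2)

# For real lambda, A - lambda*I2 has full rank 2, so the
# homogeneous system (A - lambda*I2) X = 0 has only X = 0.
for lam in [-2.0, -1.0, 0.0, 0.5, 1.0, 3.0]:
    assert np.linalg.matrix_rank(A - lam * I2) == 2
```

The underlying reason is that $\det(A-\lambda I_2) = (\cos\theta-\lambda)^2 + \sin^2\theta$, which is strictly positive whenever $\sin\theta \neq 0$.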

Let us now turn our attention to the cases when $\theta =n \pi$, where $\lambda= 1$ or $\lambda=-1$ according as $n$ is even or odd. In those cases, the rotation matrix reduces to $\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$ and
$\displaystyle{\begin{pmatrix}-1 & 0 \\ 0 & -1 \end{pmatrix}}$ respectively.

Now, $1$ (and $-1$) are their eigenvalues (in respective order), and the entire real plane (except the null vector) is the set of all eigenvectors corresponding to $1$ (and $-1$, respectively).

When $\theta=2m\pi$, the transformation is the identity transformation, and when $\theta=(2m+1)\pi$, the direction of each non-null vector is reversed.

---

Is there any mistake in the above? Any suggestion is welcome.

Thank You!

Best Answer

There are no mistakes, as far as I can tell. But

  • the phrase

    "The sole reason for requiring $\lambda$ to be a real-valued scalar is to ensure that the 'amount' of stretching remains meaningful. For example, stretching a vector by a factor of $i$ won't make sense."

    could be very misleading to someone who doesn't know that complex eigenvalues (of real matrices) are a thing.

  • The whole text is based on the example of a rotation, while most linear operators are not rotations, and that is precisely when eigenvalues and eigenvectors become interesting.
