Is this proof that all elements of the eigenspace are eigenvectors with the same eigenvalue correct?

linear-algebra, soft-question

The proof below appears in Lang's Introduction to Linear Algebra and seems to contain the main idea of the argument, but not all of the boilerplate I expected to accompany the proof. It is also included verbatim in his slightly more advanced text Linear Algebra.

Theorem

Let $V$ be a vector space, $A : V \rightarrow V$ a linear map, and $\lambda \in \mathbb{R}$. If $V_{\lambda}$ is the subspace of $V$ generated by the eigenvectors of $A$ with eigenvalue $\lambda$, then every element of $V_{\lambda}$ is an eigenvector of $A$ with eigenvalue $\lambda$.

Proof

Let $v_1, v_2 \in V$ be eigenvectors of $A$ with eigenvalue $\lambda$.

Then $A(v_1 + v_2) = Av_1 + Av_2 = \lambda v_1 + \lambda v_2 = \lambda (v_1 + v_2) $.

If $c \in K$ then $A(cv_1) = cAv_1 = c\lambda v_1 = \lambda c v_1$. $\square$

My Impression

The proof seems a little sloppy but establishes the main idea of the theorem. The set $K$ is not defined (I did not see anywhere prior in the book where $K$ is defined throughout as a field of characteristic zero or otherwise). More importantly, it doesn't go through some of the motions I expected (assume $V$ finite-dimensional with dimension $n$, then as a subspace of $V$, $V_{\lambda}$ has dimension $\le n$, there exists a basis of $V_\lambda$, we can exhibit $v \in V_{\lambda} $ as a linear combination of the basis elements and by linearity of $A$, yadda yadda yadda…).

It seems like the proof given is actually a proof of the statement: "Let $E \subseteq V$ be the set of all eigenvectors of $A$ with eigenvalue $\lambda$. Then $E$ is a subspace of $V$."

The Bottom-Line Question: Does the proof actually establish the stated theorem under the typical evaluative standards of an undergraduate linear algebra class? I am trying to use this book to self-study and have been looking at some of the proofs as models for my own writing, but it seemed like either the theorem was not stated correctly here or the proof was deliberately leaving a lot of (routine) blanks for the reader to fill in.

Best Answer

The proof does give the main idea, but I would not call it "complete". The following additional details would help:

I suppose you are dealing with finite-dimensional vector spaces, since this is an introductory course in Linear Algebra. Nevertheless, you can take a look at how linear span is defined more generally (just take finite linear combinations).

Consider $x \in V_\lambda$. Then, $$x = \sum_{i=1}^m a_i v_i$$ for some $m\in \mathbb N$, scalars $\{a_i\} \subset K$ and eigenvectors $\{v_i\}$ corresponding to $\lambda$. We have $$Ax = A \left(\sum_{i=1}^m a_i v_i \right) = \sum_{i=1}^m a_i Av_i = \lambda\sum_{i=1}^m a_i v_i = \lambda x$$ showing that $x$ is indeed an eigenvector corresponding to $\lambda$.
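As a quick numerical sanity check of this computation (a sketch in NumPy, not part of the original answer; the matrix and vectors are illustrative choices), we can take a matrix with a known two-dimensional eigenspace, form an arbitrary linear combination of eigenvectors for $\lambda$, and verify that $Ax = \lambda x$:

```python
import numpy as np

# Illustrative example: A is diagonal with entries 2, 2, 5, so the
# standard basis vectors e1 and e2 are eigenvectors for lam = 2,
# and V_lambda = span{e1, e2}.
A = np.diag([2.0, 2.0, 5.0])
lam = 2.0
v1 = np.array([1.0, 0.0, 0.0])
v2 = np.array([0.0, 1.0, 0.0])

# An arbitrary linear combination x = a1*v1 + a2*v2 lies in V_lambda...
a1, a2 = 3.0, -7.0
x = a1 * v1 + a2 * v2

# ...and is again an eigenvector for lam: A x = lam x.
print(np.allclose(A @ x, lam * x))  # True
```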


Remark. A concrete way to understand why the given proof is incomplete is as follows. Suppose $\dim V = 5$, and $\lambda$ is an eigenvalue of $A:V\to V$ with corresponding eigenspace $V_\lambda$. Further suppose $\dim V_\lambda = 3$. You can find a basis $\{v_1,v_2,v_3\}$ of eigenvectors for $V_\lambda$, and a general element of $V_\lambda$ will be a linear combination of all of $\{v_1,v_2,v_3\}$ (not just two vectors). To be more explicit, if $x\in V_\lambda$, then $x = a_1v_1 + a_2v_2 + a_3v_3$ for scalars $\{a_i\} \subset K$.

Remark $2$. Therefore, in the above proof, one may take $m := \dim V_\lambda$ by expressing $x$ in a basis of $V_\lambda$.
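To make the remark concrete, here is a small numerical instance (a sketch in NumPy; the specific matrix and eigenvalue are illustrative assumptions): a $5\times 5$ matrix with an eigenvalue of geometric multiplicity $3$, where a general element of the eigenspace genuinely needs all three basis vectors.

```python
import numpy as np

# Illustrative instance of the remark: dim V = 5, eigenvalue lam = 4
# with a 3-dimensional eigenspace spanned by e1, e2, e3.
A = np.diag([4.0, 4.0, 4.0, 1.0, -2.0])
lam = 4.0
basis = [np.eye(5)[i] for i in range(3)]  # v1, v2, v3

# A general element of V_lambda is a combination of all three basis
# vectors, i.e. m = dim V_lambda = 3 terms in the sum.
a = [2.0, -1.0, 5.0]
x = sum(ai * vi for ai, vi in zip(a, basis))

print(np.allclose(A @ x, lam * x))  # True
```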