Every operator on a nonzero, finite, complex vector space has an eigenvalue. (Question about Axler’s proof)

linear algebra

I've read through several posts regarding this same proof in the same book (Theorem $5.21$, p. $145$, Axler, $3$rd edition), but none of them addressed my particular issue.
For reference (this is the same proof, but from an earlier edition).

I'm having trouble understanding the reasoning behind the very first step, i.e.
$(v,Tv,\dots,T^nv)$
Why are we starting with a list of powers of $T$? I understand the flow of the proof after this, but this first step just seems completely arbitrary to me.
Again, the theorem states: *Every operator on a finite-dimensional, nonzero, complex vector space has an eigenvalue*, and I just can't see where a list of powers of $T$, or polynomials in general, fits into that.
I'm sorry this question is so vague, but I'm finding it difficult to express precisely in words. Hopefully this is just some ridiculously simple thing that I'm either overlooking or overthinking. I appreciate the help!

Best Answer

The first "trick" is that $\{v,Tv,\ldots, T^n v\}$ is a set of $n+1$ elements in a space of dimension $n$ and thus must be linearly dependendent. So there exist complex numbers $a_0,a_1,\ldots,a_n$ not all zero such that $$ a_0v + a_1Tv + \cdots + a_nT^nv = 0. $$ Write the $a_i$ as coefficients of a polynomial, i.e. $$ p(z) = a_0+a_1z+\cdots+a_nz^n, $$ with $0<m\leqslant n$ the largest index such that $a_m\ne 0$. Then we can factor $$ p(z) = a_0+a_1z+\cdots+a_mz^m = c(z-\lambda_1)\cdots(z-\lambda_m). $$ The second "trick" is that this polynomial also holds true for $z=T$ and $\lambda_i = \lambda_i I$. Then as $p(z)=0$ for all $z\in\mathbb C$ we have $$ 0 = p(T)v = c(T-\lambda_1I)\cdots(T-\lambda_mI)v, $$ which means that for some $\lambda_i$ we have $v\in\ker(T-\lambda_i)$, which means that $T-\lambda_i$ is not injective. In other words, $\lambda_i$ is an eigenvalue for $T$.