Linear Algebra – Proof That Every Operator in a Complex Vector Space Has an Eigenvalue

Tags: eigenvalues-eigenvectors, linear-algebra, polynomials, vector-spaces

I'm currently working through Sheldon Axler's *Linear Algebra Done Right*. I am confused about three specific lines in Axler's proof that every operator on a finite-dimensional, nonzero, complex vector space has an eigenvalue. The full proof appears as Theorem 5.10 on page 81.

Axler starts with the following result, where $T$ is a linear operator on a vector space $V$, and each $a_{i}$ is a complex number:

$0 = a_{0}v + a_{1}Tv + \cdots + a_{n}T^{n}v$

Two lines follow from this result:

$0 = (a_{0}I + a_{1}T + \cdots + a_{n}T^{n})v$

$0 = c(T - \lambda_{1}I)\cdots(T - \lambda_{m}I)v$

where $c$ is a nonzero complex number and each $\lambda_{i}$ is also a complex number.

Three things are confusing me:

  1. First, how do we get from the first equality to the second?
  2. Second, what does $(T - \lambda_{1}I)\cdots(T - \lambda_{m}I)$ actually mean? Is it the composition of the operators $(T - \lambda_{i}I)$?
  3. How do we get from the second equality to the third? I see why an analogous result holds for an ordinary polynomial $a_{0} + a_{1}z + \cdots + a_{n}z^{n}$, where $z$ is a complex number. But here $T$ is a linear operator; it's not immediately apparent that we can factor $a_{0}I + a_{1}T + \cdots + a_{n}T^{n}$ as though it were a regular polynomial.

Thanks in advance.

Best Answer

  1. It follows from how operations on operators are defined: sums and scalar multiples of operators are defined pointwise, so $(a_{0}I + a_{1}T + \cdots + a_{n}T^{n})v = a_{0}v + a_{1}Tv + \cdots + a_{n}T^{n}v$ by definition. It's the same property as $f(t)+g(t)=(f+g)(t)$.

  2. The product of operators is indeed composition. When you represent them as matrices, it is the usual matrix product.

  3. When you factor a polynomial, like $x^2-3x+2=(x-1)(x-2)$, you get an algebraic identity that holds in any commutative ring (because if you distribute the right-hand side, you recover the left). Since $T$ commutes with itself and with $I$, the operators of the form $p(T)$ form a commutative ring, so the same factorization holds when the polynomial is evaluated at $T$.
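Point 1 can be checked numerically. The following minimal sketch, assuming a hypothetical $2\times 2$ matrix for $T$, a vector $v$, and coefficients $a_0, a_1, a_2$ (none of which come from Axler's proof), confirms that building the operator $a_0 I + a_1 T + a_2 T^2$ first and then applying it to $v$ gives the same vector as applying each term to $v$ separately:

```python
import numpy as np

# Hypothetical concrete data: a 2x2 operator T, a vector v, and coefficients.
T = np.array([[2.0, 1.0],
              [0.0, 3.0]])
I = np.eye(2)
v = np.array([1.0, -1.0])
a0, a1, a2 = 5.0, -2.0, 1.0

# First equality's form: apply each power of T to v, then combine the vectors.
lhs = a0 * v + a1 * (T @ v) + a2 * (T @ T @ v)

# Second equality's form: build the operator a0*I + a1*T + a2*T^2, then apply it to v.
rhs = (a0 * I + a1 * T + a2 * (T @ T)) @ v

assert np.allclose(lhs, rhs)
```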
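Point 2 (the product of operators is composition, which for matrices is the matrix product) can be illustrated the same way. The scalars $\lambda_1, \lambda_2$ below are arbitrary choices, not values from the proof:

```python
import numpy as np

T = np.array([[2.0, 1.0],
              [0.0, 3.0]])
I = np.eye(2)
v = np.array([1.0, -1.0])
lam1, lam2 = 1.0, 4.0  # hypothetical scalars

A = T - lam1 * I
B = T - lam2 * I

# Composition: apply B to v first, then A to the result.
composed = A @ (B @ v)

# Matrix product: multiply the matrices first, then apply the product once.
product = (A @ B) @ v

assert np.allclose(composed, product)
```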
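Point 3 can be sanity-checked with the concrete factorization $x^2 - 3x + 2 = (x-1)(x-2)$ from the answer, evaluated at a random matrix standing in for $T$:

```python
import numpy as np

rng = np.random.default_rng(0)
T = rng.standard_normal((3, 3))  # an arbitrary 3x3 "operator"
I = np.eye(3)

# Evaluate both sides of x^2 - 3x + 2 = (x - 1)(x - 2) at T,
# replacing the constant terms with multiples of the identity.
unfactored = T @ T - 3 * T + 2 * I
factored = (T - 1 * I) @ (T - 2 * I)

assert np.allclose(unfactored, factored)
```

The check works because the only multiplications involved are between powers of $T$ and $I$, which all commute, exactly as the answer says.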