The proof is by induction on the dimension of $V$; as with all proofs by induction, that means that we need to explicitly show that the statement is true for some base case(s) (in this case, when the dimension of $V$ is $1$), and that if the statement is true up to some dimension $n$ then it remains true in dimension $n+1$.
The approach is to take a vector space $V$ of dimension $n+1$ and break it up into two pieces, namely the subspace $U$ spanned by an eigenvector of $T$ and the subspace $U^\perp$ that is orthogonal to $U$. If $\alpha$ is an orthonormal basis for $U$ and $\beta$ is an orthonormal basis for $U^\perp$, then $\alpha \cup \beta$ is an orthonormal basis for $V$, so all we need to do is find $\alpha$ and $\beta$, each consisting of eigenvectors of $T$.
Finding $\alpha$ is easy, because $U$ is 1-dimensional and spanned by an eigenvector of $T$; just take any nonzero vector in $U$ and scale it to have norm $1$.
To find $\beta$ we'd like to apply the induction hypothesis. We do have $\dim(U^\perp) = n < \dim(V) = n+1$, which is good: If we have a self-adjoint operator from $U^\perp$ to $U^\perp$ then the induction hypothesis will give us the basis for $U^\perp$ that we're looking for. The operator we'd like to use is $T$, but $T$ is an operator from $V$ to $V$, not from $U^\perp$ to $U^\perp$. It would be nice, though, if we could think of $T$ as an operator from $U^\perp$ to $U^\perp$. For that reason we define $S : U^\perp \to U^\perp$ to do the same thing as $T$: for all $v \in U^\perp$, $S(v) = T(v)$. There's a little checking to do to make sure this makes sense (specifically, that if $v \in U^\perp$ then $T(v) \in U^\perp$ too), and that $S$ is self-adjoint.
Once those steps are done, we've now got a space ($U^\perp$) of dimension strictly less than the dimension of $V$, and a self-adjoint operator on that space. By induction hypothesis there is an orthonormal basis, call it $\beta$, for $U^\perp$ consisting of eigenvectors of $S$. But $S$ does the same thing as $T$, so the vectors in $\beta$ are also eigenvectors of $T$, which is what we wanted.
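Not part of the proof, but here is a quick numerical sanity check of the statement being proved, using numpy's `eigh` on a randomly generated self-adjoint matrix as a stand-in for $T$: the eigenvalues come out real and the eigenvectors form an orthonormal basis, exactly as the theorem promises.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
T = (A + A.conj().T) / 2            # force T to be self-adjoint: T = T*

eigenvalues, Q = np.linalg.eigh(T)  # columns of Q are eigenvectors of T

# The eigenvalues of a self-adjoint operator are real,
assert np.allclose(eigenvalues.imag, 0)
# the eigenvector basis is orthonormal: Q* Q = I,
assert np.allclose(Q.conj().T @ Q, np.eye(4))
# and each column really is an eigenvector: T q_k = lambda_k q_k.
for k in range(4):
    assert np.allclose(T @ Q[:, k], eigenvalues[k] * Q[:, k])
```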
You cannot conclude that $v$ is an eigenvector and indeed the author doesn't state it.
You can recover an eigenvector, though. For simplicity, let $S_k=T-\lambda_kI$ and note that these operators commute with each other. Now consider
$$
S_1v,\quad S_2S_1v,\quad \dots,\quad S_{m-1}\dotsm S_2S_1v,\quad S_m\dotsm S_2S_1v=0
$$
Let $r$ be the first index such that $S_r\dotsm S_2S_1v=0$. If $r=1$, you're done: $v$ itself is an eigenvector relative to $\lambda_1$. Otherwise $w=S_{r-1}\dotsm S_1v\ne0$ but $S_rw=0$. Hence $w$ is an eigenvector relative to $\lambda_r$.
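The cascade can be sketched numerically. Below is a hypothetical instance (the matrix and eigenvalues are made up for illustration): a self-adjoint $T$ with known distinct eigenvalues $\lambda_k$, a generic $v$, and a loop that applies $S_k = T - \lambda_k I$ until the running product becomes $0$, keeping the last nonzero vector $w$.

```python
import numpy as np

# Build a self-adjoint T with prescribed eigenvalues (illustrative choice).
lambdas = np.array([2.0, -1.0, 5.0])
Q, _ = np.linalg.qr(np.random.default_rng(1).standard_normal((3, 3)))
T = Q @ np.diag(lambdas) @ Q.T      # self-adjoint by construction

v = np.array([1.0, 1.0, 1.0])       # a generic starting vector
w = v
for r, lam in enumerate(lambdas, start=1):
    next_w = (T - lam * np.eye(3)) @ w   # apply S_r to the running product
    if np.allclose(next_w, 0):
        break                            # S_r w = 0: stop, w is what we want
    w = next_w

# w is (numerically) an eigenvector of T relative to lambda_r.
assert np.allclose(T @ w, lambdas[r - 1] * w)
```

For a generic $v$ the product only dies at the last factor, so here $w = S_2S_1v$ is an eigenvector relative to $\lambda_3 = 5$.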
With your example,
$$
Tv=(-2+8i,3+i),\qquad T^2v=(-46+14i,1+7i)
$$
and it turns out that
$$
T^2v=(7-11i)v+(4+7i)Tv
$$
so the polynomial is
$$
x^2-(4+7i)x-(7-11i)=(x-(3+5i))(x-(1+2i))
$$
Now
$$
(T-(1+2i)I)v=Tv-(1+2i)v=(-2+8i,3+i)-(-1+3i,3+i)=(-1+5i,0)
$$
which is clearly an eigenvector relative to $3+5i$.
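The arithmetic above can be double-checked with plain complex numbers. The original $v$ is not restated here, but it can be recovered from the last display as $v = \bigl(Tv - (-1+5i,0)\bigr)/(1+2i)$, which gives $v=(1+i,\,1-i)$; the checks below use that.

```python
import numpy as np

Tv  = np.array([-2 + 8j, 3 + 1j])
T2v = np.array([-46 + 14j, 1 + 7j])

# Recover v from the last displayed equation (assumption: v = (1+i, 1-i)).
v = (Tv - np.array([-1 + 5j, 0])) / (1 + 2j)
assert np.allclose(v, [1 + 1j, 1 - 1j])

# The linear dependence T^2 v = (7-11i) v + (4+7i) T v holds.
assert np.allclose(T2v, (7 - 11j) * v + (4 + 7j) * Tv)

# The quadratic x^2-(4+7i)x-(7-11i) factors as (x-(3+5i))(x-(1+2i)):
# sum of roots is 4+7i, product of roots is -7+11i.
assert (3 + 5j) + (1 + 2j) == 4 + 7j
assert (3 + 5j) * (1 + 2j) == -7 + 11j

# And (T-(1+2i)I)v = Tv - (1+2i)v = (-1+5i, 0), the claimed eigenvector.
assert np.allclose(Tv - (1 + 2j) * v, [-1 + 5j, 0])
```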