For some reason, people teaching linear algebra often forget to connect the abstract inner product with the ordinary dot product in $\mathbb{R}^{3}$ or $\mathbb{R}^{n}$. If you have a vector $u$ and a line through the origin with direction vector $v$, how would you find the point on that line closest to $u$? Answer: orthogonal projection. The same idea works in infinite-dimensional inner-product spaces, and even in complex spaces. You want to find a scalar $\alpha$ such that $(u-\alpha v)\perp v$, which gives
$$
(u,v)-\alpha(v,v) = 0,\\
\alpha = \frac{(u,v)}{(v,v)}
$$
Now you can decompose $u$ along a right triangle: $u$ is the hypotenuse, $\alpha v$ is the leg along the line with direction vector $v$, and $u-\alpha v$ is the other leg. Explicitly, you have the following orthogonal decomposition:
$$
u = \alpha v + (u-\alpha v),\\
(\alpha v,u-\alpha v) = 0.
$$
By the Pythagorean Theorem (which is a direct computation using inner product axioms):
$$
\|u\|^{2} =\|\alpha v\|^{2}+\|u-\alpha v\|^{2}.
$$
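To make this concrete, here is a minimal numerical sketch (numpy, with arbitrary example vectors in $\mathbb{R}^{3}$ and the ordinary dot product) of the projection coefficient, the orthogonality of the residual, and the Pythagorean identity:

```python
import numpy as np

# arbitrary example vectors in R^3, with the ordinary dot product as the inner product
u = np.array([2.0, 1.0, -1.0])
v = np.array([1.0, 3.0, 2.0])

# projection coefficient alpha = (u, v) / (v, v)
alpha = np.dot(u, v) / np.dot(v, v)
residual = u - alpha * v

print(np.dot(residual, v))        # ~0: the residual u - alpha*v is orthogonal to v
print(np.dot(u, u))               # ||u||^2 ...
print(np.dot(alpha * v, alpha * v) + np.dot(residual, residual))  # ... equals ||alpha v||^2 + ||u - alpha v||^2
```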
The Cauchy-Schwarz inequality is exactly the following:
$$
\|\alpha v\|^{2} \le \|u\|^{2} \\
\mbox{ with equality iff } \|u-\alpha v\| = 0.
$$
Write this out (assuming $v \neq 0$):
$$
|\alpha|^{2}\|v\|^{2} \le \|u\|^{2} \\
\frac{|(u,v)|^{2}}{(v,v)^{2}}\|v\|^{2} \le (u,u) \\
|(u,v)|^{2} \le (u,u)(v,v).
$$
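As a quick sanity check of the final inequality, here is a sketch over complex vectors, with the inner product taken linear in the first slot and conjugate-linear in the second (the convention used below):

```python
import numpy as np

rng = np.random.default_rng(0)

def inner(x, y):
    # inner product on C^n: linear in the first slot, conjugate-linear in the second
    return np.sum(x * np.conj(y))

u = rng.normal(size=4) + 1j * rng.normal(size=4)
v = rng.normal(size=4) + 1j * rng.normal(size=4)

print(abs(inner(u, v)) ** 2 <= inner(u, u).real * inner(v, v).real)  # |(u,v)|^2 <= (u,u)(v,v)

# equality case: u is a scalar multiple of v, so the residual u - alpha*v vanishes
w = (2 - 3j) * v
print(np.isclose(abs(inner(w, v)) ** 2, inner(w, w).real * inner(v, v).real))
```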
A self-adjoint operator $S : X \to X$ (where $X$ is an inner product space) is an operator such that for all $x,y \in X$, we have $$\langle Sx,y \rangle = \langle x,Sy\rangle.$$ This is a generalization of a real, symmetric matrix.
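For a concrete finite-dimensional instance, a Hermitian matrix (the complex analogue of a real symmetric matrix) satisfies this identity. A minimal numpy sketch:

```python
import numpy as np

rng = np.random.default_rng(1)

def inner(x, y):
    # inner product on C^n: linear in the first slot, conjugate-linear in the second
    return np.sum(x * np.conj(y))

# a random Hermitian matrix: S = A + A^* is self-adjoint by construction
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
S = A + A.conj().T

x = rng.normal(size=4) + 1j * rng.normal(size=4)
y = rng.normal(size=4) + 1j * rng.normal(size=4)

# defining property of a self-adjoint operator: <Sx, y> = <x, Sy>
print(np.isclose(inner(S @ x, y), inner(x, S @ y)))
```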
One important property of such operators is that the eigenvalues of a self-adjoint operator are necessarily real. Indeed, if $k$ is any eigenvalue with corresponding (normalized) eigenvector $v$, we see $$k = k\langle v,v \rangle = \langle kv, v \rangle = \langle Sv, v \rangle = \langle v,Sv \rangle = \langle v, kv \rangle = \overline k \langle v, v \rangle = \overline k$$ showing that $k$ is real.
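Numerically, the eigenvalues of a Hermitian matrix indeed come out real up to rounding, even when computed with a routine that does not assume any symmetry; a small sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

# random Hermitian matrix
A = rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5))
S = A + A.conj().T

eigvals = np.linalg.eigvals(S)        # general eigenvalue routine, no symmetry assumed
print(np.max(np.abs(eigvals.imag)))   # ~0: every eigenvalue is real up to rounding
```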
Another important property (perhaps the most important) of self-adjoint operators is that their eigenvectors can be taken to form an orthonormal basis for the ambient space (here I am assuming you are working in a finite-dimensional space, but a similar statement holds in infinite dimensions; we just need to generalize the idea of a basis a bit, and we need completeness). That is, we can take $k_1, \ldots, k_n$ to be the eigenvalues of $S$ (possibly with repetitions) with corresponding orthonormal eigenvectors $v_1,\ldots, v_n$ forming a basis for $X$. Then for any $v \in X$, there are scalars $\alpha_1, \ldots, \alpha_n$ so that $v = \alpha_1 v_1 + \cdots + \alpha_nv_n.$

Using (sesqui)linearity of the inner product, we see $$\langle v, v\rangle = \sum^n_{i=1} \sum^n_{j=1} \alpha_i \overline \alpha_j \langle v_i, v_j \rangle.$$ But by orthonormality, $\langle v_i, v_j \rangle = 0$ when $i \neq j$ and $\langle v_i, v_i \rangle = 1$. Thus the above sum becomes $$\langle v, v\rangle = \sum^n_{i=1} \alpha_i \overline \alpha_i = \sum^n_{i=1} \lvert \alpha_i \rvert^2.$$ Similarly, since $$Sv = S(\alpha_1v_1 + \cdots + \alpha_n v_n) = \alpha_1 k_1 v_1 + \cdots + \alpha_n k_n v_n,$$ we have $$\langle Sv, v\rangle = \sum^n_{i=1} \sum^n_{j=1} k_i \alpha_i \overline \alpha_j \langle v_i, v_j \rangle = \sum^n_{i=1} k_i \lvert \alpha_i \rvert^2.$$

Clearly if $k_i \ge 0$ for all $i=1,\ldots, n$ then $$\langle Sv, v\rangle = \sum^n_{i=1} k_i \lvert \alpha_i \rvert^2 \ge 0.$$ Also, if $k_i \le 1$ for all $i = 1,\ldots, n$, then $$\langle Sv, v\rangle = \sum^n_{i=1} k_i \lvert \alpha_i \rvert^2 \le \sum^n_{i=1} \lvert \alpha_i \rvert^2 = \langle v , v \rangle.$$

Conversely, if $0 \le \langle Sv, v \rangle \le \langle v, v \rangle$ holds for all vectors $v$, then applying it to the eigenvectors gives $$0 \le \langle Sv_i, v_i \rangle \le \langle v_i, v_i \rangle \,\,\,\, \implies \,\,\,\, 0 \le \langle k_i v_i, v_i \rangle \le \langle v_i, v_i \rangle,$$ whence pulling the $k_i$ out of the inner product gives $0 \le k_i \le 1.$
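As a sanity check of both directions, one can build a self-adjoint matrix with prescribed eigenvalues in $[0,1]$ and verify $0 \le \langle Sv, v\rangle \le \langle v, v\rangle$ on random vectors; this is a sketch, not a proof:

```python
import numpy as np

rng = np.random.default_rng(3)

def inner(x, y):
    # linear in the first slot, conjugate-linear in the second
    return np.sum(x * np.conj(y))

# S = Q diag(k) Q^* with orthonormal eigenvectors (columns of Q) and eigenvalues k in [0, 1]
k = rng.uniform(0.0, 1.0, size=5)
Q, _ = np.linalg.qr(rng.normal(size=(5, 5)) + 1j * rng.normal(size=(5, 5)))
S = Q @ np.diag(k) @ Q.conj().T

for _ in range(1000):
    v = rng.normal(size=5) + 1j * rng.normal(size=5)
    q = inner(S @ v, v).real                     # <Sv, v> is real because S is self-adjoint
    assert -1e-10 <= q <= inner(v, v).real + 1e-10

print("0 <= <Sv, v> <= <v, v> held on all samples")
```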
Best Answer
It is not entirely clear to me how the proof above is meant to work; perhaps it is assumed that $V$ is finite-dimensional?
If $V$ is any Hilbert space then the result is true. The following proof depends on the fact that a normal operator $A$ is invertible iff it is bounded below (that is, there is some $\mu>0$ such that $\|Ax\| \ge \mu \|x\|$ for all $x$).
Let $u\pm iv$ be the roots of $x^2+bx+c=0$ (in particular, since $b^2<4c$ we see that $v \neq 0$) and note that $T^2+bT+cI = (T-uI-ivI)(T-uI+ivI)$. It is straightforward to check that $B=T-uI-ivI$ is normal (since $T$ is self-adjoint and $u,v$ are real, $B^* = T-uI+ivI$ commutes with $B$), and we see that $T^2+bT+cI = B B^*$.
$\|Bx\|^2 = \langle Bx, Bx \rangle = \|(T-uI)x\|^2 + |v|^2\|x\|^2 \ge |v|^2 \|x\|^2$ (the cross terms vanish because $\langle (T-uI)x, x \rangle$ is real, $T-uI$ being self-adjoint), so $B$ is bounded below and hence invertible. Similarly $B^*$ is invertible, and so $BB^* = T^2+bT+cI$ is invertible.
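To illustrate the lower bound and the conclusion numerically, here is a sketch with a random Hermitian $T$ and an arbitrary choice of $b, c$ with $b^2 < 4c$ (finite-dimensional stand-in only):

```python
import numpy as np

rng = np.random.default_rng(4)

# random self-adjoint T
A = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
T = A + A.conj().T

b, c = 1.0, 5.0                        # b^2 < 4c, so the roots u +/- i*v have v != 0
u = -b / 2.0
v = np.sqrt(4 * c - b ** 2) / 2.0

I = np.eye(6)
B = T - u * I - 1j * v * I             # T^2 + bT + cI = B B^*

# the bound ||Bx|| >= |v| ||x||: the smallest singular value of B is at least |v|
print(np.linalg.svd(B, compute_uv=False).min(), abs(v))

# hence B B^* = T^2 + bT + cI is invertible; check directly
M = T @ T + b * T + c * I
print(np.allclose(M, B @ B.conj().T))  # the factorization holds
print(np.linalg.cond(M) < 1e12)        # finite condition number: M is invertible
```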