A self-adjoint operator $S : X \to X$ (where $X$ is an inner product space) is an operator such that for all $x,y \in X$, we have $$\langle Sx,y \rangle = \langle x,Sy\rangle.$$ This is a generalization of a real, symmetric matrix.
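As a quick sanity check of the defining identity (a sketch of my own, not from the question; the helper `ip` and the random Hermitian matrix `S` are purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def ip(x, y):
    # Inner product, linear in the first slot (np.vdot conjugates its
    # first argument, so this computes sum_i x_i * conj(y_i)).
    return np.vdot(y, x)

# A random Hermitian matrix S = A + A^* (illustrative example).
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
S = A + A.conj().T

x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
y = rng.standard_normal(4) + 1j * rng.standard_normal(4)

# <Sx, y> should equal <x, Sy> up to floating-point error.
print(np.isclose(ip(S @ x, y), ip(x, S @ y)))  # True
```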
One important property of such operators is that the eigenvalues of a self-adjoint operator are necessarily real. Indeed, if $k$ is any eigenvalue with corresponding (normalized) eigenvector $v$, we see $$k = k\langle v,v \rangle = \langle kv, v \rangle = \langle Sv, v \rangle = \langle v,Sv \rangle = \langle v, kv \rangle = \overline k \langle v, v \rangle = \overline k$$ showing that $k$ is real.
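Numerically this is easy to observe (again just an illustrative sketch; `eigvalsh` exploits the Hermitian structure, while the general `eigvals` routine confirms the imaginary parts vanish):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
S = A + A.conj().T  # Hermitian, hence self-adjoint

print(np.linalg.eigvalsh(S))                      # real eigenvalues
print(np.max(np.abs(np.linalg.eigvals(S).imag)))  # ~1e-15
```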
Another important property (perhaps the most important property) of self-adjoint operators is that the eigenvectors of a self-adjoint operator can be taken to form an orthonormal basis for the ambient space (here I am assuming you are working in a finite-dimensional space, but a similar statement still holds in infinite dimensions; we just need to generalize the idea of a basis a bit, and we need completeness). That is, we can take $k_1, \ldots, k_n$ to be the eigenvalues of $S$ (possibly with repetitions) with corresponding orthonormal eigenvectors $v_1,\ldots, v_n$ forming a basis for $X$.

Then for any $v \in X$, there are scalars $\alpha_1, \ldots, \alpha_n$ so that $v = \alpha_1 v_1 + \cdots + \alpha_nv_n.$ Using linearity of the inner product, we see $$\langle v, v\rangle = \sum^n_{i=1} \sum^n_{j=1} \alpha_i \overline \alpha_j \langle v_i, v_j \rangle.$$ But by orthonormality, $\langle v_i, v_j \rangle = 0$ when $i \neq j$ and $\langle v_i, v_i \rangle = 1$. Thus the above sum becomes $$\langle v, v\rangle = \sum^n_{i=1} \alpha_i \overline \alpha_i = \sum^n_{i=1} \lvert \alpha_i \rvert^2.$$ Similarly, since $$Sv = S(\alpha_1v_1 + \cdots + \alpha_n v_n) = \alpha_1 k_1 v_1 + \cdots + \alpha_n k_n v_n,$$ we have $$\langle Sv, v\rangle = \sum^n_{i=1} \sum^n_{j=1} k_i \alpha_i \overline \alpha_j \langle v_i, v_j \rangle = \sum^n_{i=1} k_i \lvert \alpha_i \rvert^2.$$

Clearly if $k_i \ge 0$ for all $i=1,\ldots, n$, then $$\langle Sv, v\rangle = \sum^n_{i=1} k_i \lvert \alpha_i \rvert^2 \ge 0.$$ Also, if $k_i \le 1$ for all $i = 1,\ldots, n$, then $$\langle Sv, v\rangle = \sum^n_{i=1} k_i \lvert \alpha_i \rvert^2 \le \sum^n_{i=1} \lvert \alpha_i \rvert^2 = \langle v , v \rangle.$$ Conversely, if the given condition holds for all vectors $v$, then applying the condition to the eigenvectors gives $$0 \le \langle Sv_i, v_i \rangle \le \langle v_i, v_i \rangle \,\,\,\, \implies \,\,\,\, 0 \le \langle k_i v_i, v_i \rangle \le \langle v_i, v_i \rangle,$$ whence pulling the $k_i$ out of the inner product gives $0 \le k_i \le 1.$
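Here is a small numerical illustration of the forward direction (my own sketch: I build $S$ with a prescribed spectrum in $[0,1]$ using a random unitary, then check the inequality on random vectors):

```python
import numpy as np

rng = np.random.default_rng(2)

# Unitary Q from a QR factorization; its columns are an orthonormal basis.
Q, _ = np.linalg.qr(rng.standard_normal((5, 5)) + 1j * rng.standard_normal((5, 5)))
k = rng.uniform(0.0, 1.0, 5)          # eigenvalues drawn from [0, 1]
S = Q @ np.diag(k) @ Q.conj().T       # self-adjoint with spectrum k

for _ in range(1000):
    v = rng.standard_normal(5) + 1j * rng.standard_normal(5)
    q = np.vdot(v, S @ v).real        # <Sv, v>; real since S is self-adjoint
    assert -1e-12 <= q <= np.vdot(v, v).real + 1e-12
print("0 <= <Sv, v> <= <v, v> held for all samples")
```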
The proof is by induction on the dimension of $V$; as with all proofs by induction, that means that we need to explicitly show that the statement is true for some base case(s) (in this case, when the dimension of $V$ is $1$), and that if the statement is true up to some dimension $n$ then it remains true in dimension $n+1$.
The approach is to take a vector space $V$ of dimension $n+1$ and break it up into two pieces, namely the subspace $U$ spanned by an eigenvector of $T$ and its orthogonal complement $U^\perp$. If $\alpha$ is an orthonormal basis of $U$ and $\beta$ is an orthonormal basis for $U^\perp$, then $\alpha \cup \beta$ is an orthonormal basis for $V$, so all we need to do is find $\alpha$ and $\beta$, each consisting of eigenvectors of $T$.
Finding $\alpha$ is easy, because $U$ is 1-dimensional and spanned by an eigenvector of $T$; just take any nonzero vector in $U$ and scale it to have norm $1$.
To find $\beta$ we'd like to apply the induction hypothesis. We do have $\dim(U^\perp) = n < \dim(V) = n+1$, which is good: If we have a self-adjoint operator from $U^\perp$ to $U^\perp$ then the induction hypothesis will give us the basis for $U^\perp$ that we're looking for. The operator we'd like to use is $T$, but $T$ is an operator from $V$ to $V$, not from $U^\perp$ to $U^\perp$. It would be nice, though, if we could think of $T$ as an operator from $U^\perp$ to $U^\perp$. For that reason we define $S : U^\perp \to U^\perp$ to do the same thing as $T$: for all $v \in U^\perp$, $S(v) = T(v)$. There's a little checking to do to make sure this makes sense (specifically, that if $v \in U^\perp$ then $T(v) \in U^\perp$ too), and that $S$ is self-adjoint.
Once those steps are done, we've now got a space ($U^\perp$) of dimension strictly less than the dimension of $V$, and a self-adjoint operator on that space. By induction hypothesis there is an orthonormal basis, call it $\beta$, for $U^\perp$ consisting of eigenvectors of $S$. But $S$ does the same thing as $T$, so the vectors in $\beta$ are also eigenvectors of $T$, which is what we wanted.
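If it helps, here is a NumPy sketch of this step (the names and the QR-based construction of an orthonormal basis for $U^\perp$ are my own choices, not part of the proof): it checks that $T$ maps $U^\perp$ into itself and that the restricted operator $S$ is again self-adjoint.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
T = A + A.conj().T                      # self-adjoint operator on V = C^n

# u: a normalized eigenvector of T, spanning the subspace U.
w, vecs = np.linalg.eigh(T)
u = vecs[:, 0]

# Complete u to an orthonormal basis of V; the remaining columns span U-perp.
M = np.column_stack([u, rng.standard_normal((n, n - 1))])
Q, _ = np.linalg.qr(M)
B = Q[:, 1:]                            # orthonormal basis of U-perp

# T maps U-perp into U-perp: the U-component of T(B) vanishes.
print(np.allclose(u.conj() @ (T @ B), 0))   # True

# The matrix of S = T restricted to U-perp, in the basis B, is again Hermitian.
S = B.conj().T @ T @ B
print(np.allclose(S, S.conj().T))           # True
```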
Best Answer
For the sake of not leaving the question unanswered,
The definition of the adjoint $T^*: W \rightarrow V$ of an operator $T:V \rightarrow W$ is the relation
$$\forall v\in V, w \in W : \, \langle Tv,w \rangle_W = \langle v, T^*w\rangle_V$$
Here, $V$ and $W$ are inner-product spaces (real, complex, Hermitian, or Hilbert; choose your favorite), while $\langle\cdot,\cdot\rangle_V$ and $\langle\cdot,\cdot\rangle_W$ denote the inner products in $V$ and $W$, respectively.
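For a concrete instance (a sketch under the assumption that $V = \mathbb{C}^3$ and $W = \mathbb{C}^5$ with their standard inner products, where $T^*$ is the conjugate transpose):

```python
import numpy as np

rng = np.random.default_rng(4)

def ip(x, y):
    # Inner product, linear in the first slot.
    return np.vdot(y, x)

# T : V -> W with dim V = 3, dim W = 5; its adjoint is the conjugate transpose.
T = rng.standard_normal((5, 3)) + 1j * rng.standard_normal((5, 3))
T_star = T.conj().T

v = rng.standard_normal(3) + 1j * rng.standard_normal(3)
w = rng.standard_normal(5) + 1j * rng.standard_normal(5)

# <Tv, w>_W == <v, T*w>_V
print(np.isclose(ip(T @ v, w), ip(v, T_star @ w)))  # True
```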
Next, the definition of a self-adjoint operator $T : V \rightarrow V$ is just $T = T^*$. In this case, for any $v \in V$, we get
$$\langle Tv, v\rangle = \langle v, T^*v\rangle = \langle v, Tv \rangle.$$ By conjugate symmetry of the inner product, $\langle v, Tv\rangle = \overline{\langle Tv, v\rangle}$, so $\langle Tv, v\rangle$ equals its own conjugate and is therefore real.
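And a last numerical sanity check of that conclusion (illustrative sketch only):

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
T = A + A.conj().T                      # self-adjoint

v = rng.standard_normal(4) + 1j * rng.standard_normal(4)
q = np.vdot(v, T @ v)                   # <Tv, v> in the convention above
print(q.imag)                           # ~0: the quadratic form is real
```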