A self-adjoint operator $S : X \to X$ (where $X$ is an inner product space) is an operator such that for all $x,y \in X$, we have $$\langle Sx,y \rangle = \langle x,Sy\rangle.$$ This is a generalization of a real, symmetric matrix.
One important property of such operators is that the eigenvalues of a self-adjoint operator are necessarily real. Indeed, if $k$ is any eigenvalue with corresponding (normalized) eigenvector $v$, we see $$k = k\langle v,v \rangle = \langle kv, v \rangle = \langle Sv, v \rangle = \langle v,Sv \rangle = \langle v, kv \rangle = \overline k \langle v, v \rangle = \overline k$$ showing that $k$ is real.
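The computation above can be checked numerically. Here is a small illustrative sketch (the matrix is chosen for this example, not taken from the original answer): the eigenvalues of a $2\times 2$ Hermitian matrix, computed directly from its characteristic polynomial, come out real.

```python
# Illustrative check: eigenvalues of a small Hermitian matrix are real.
import cmath

# A 2x2 Hermitian matrix S (equal to its conjugate transpose).
S = [[2 + 0j, 1 - 1j],
     [1 + 1j, 3 + 0j]]

# For a 2x2 matrix the characteristic polynomial is
#   k^2 - trace(S) * k + det(S).
trace = S[0][0] + S[1][1]
det = S[0][0] * S[1][1] - S[0][1] * S[1][0]

# Solve the quadratic with the usual formula.
disc = cmath.sqrt(trace * trace - 4 * det)
eigenvalues = [(trace + disc) / 2, (trace - disc) / 2]

for k in eigenvalues:
    assert abs(k.imag) < 1e-12  # both eigenvalues are real

print(sorted(k.real for k in eigenvalues))  # -> [1.0, 4.0]
```

Even though $S$ has complex off-diagonal entries, both roots of its characteristic polynomial are real, exactly as the argument predicts.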
Another important property (perhaps the most important property) of self-adjoint operators is that the eigenvectors of a self-adjoint operator can be taken to form an orthonormal basis for the ambient space. (Here I am assuming you are working in a finite-dimensional space; a similar statement still holds in infinite dimensions, but we need to generalize the idea of a basis a bit, and we need completeness.) That is, we can take $k_1, \ldots, k_n$ to be the eigenvalues of $S$ (possibly with repetitions) with corresponding orthonormal eigenvectors $v_1,\ldots, v_n$ forming a basis for $X$. Then for any $v \in X$, there are scalars $\alpha_1, \ldots, \alpha_n$ so that $v = \alpha_1 v_1 + \cdots + \alpha_n v_n.$

Using linearity of the inner product, we see $$\langle v, v\rangle = \sum^n_{i=1} \sum^n_{j=1} \alpha_i \overline \alpha_j \langle v_i, v_j \rangle.$$ But by orthonormality, $\langle v_i, v_j \rangle = 0$ when $i \neq j$ and $\langle v_i, v_i \rangle = 1$. Thus the above sum becomes $$\langle v, v\rangle = \sum^n_{i=1} \alpha_i \overline \alpha_i = \sum^n_{i=1} \lvert \alpha_i \rvert^2.$$ Similarly, since $$Sv = S(\alpha_1 v_1 + \cdots + \alpha_n v_n) = \alpha_1 k_1 v_1 + \cdots + \alpha_n k_n v_n$$ we have $$\langle Sv, v\rangle = \sum^n_{i=1} \sum^n_{j=1} k_i \alpha_i \overline \alpha_j \langle v_i, v_j \rangle = \sum^n_{i=1} k_i \lvert \alpha_i \rvert^2.$$

Clearly if $k_i \ge 0$ for all $i=1,\ldots, n$, then $$\langle Sv, v\rangle = \sum^n_{i=1} k_i \lvert \alpha_i \rvert^2 \ge 0.$$ Also, if $k_i \le 1$ for all $i = 1,\ldots, n$, then $$\langle Sv, v\rangle = \sum^n_{i=1} k_i \lvert \alpha_i \rvert^2 \le \sum^n_{i=1} \lvert \alpha_i \rvert^2 = \langle v , v \rangle.$$

Conversely, if the given condition holds for all vectors $v$, then applying the condition to the eigenvectors gives $$0 \le \langle Sv_i, v_i \rangle \le \langle v_i, v_i \rangle \,\,\,\, \implies \,\,\,\, 0 \le \langle k_i v_i, v_i \rangle \le \langle v_i, v_i \rangle,$$ whence pulling the $k_i$ out of the inner product gives $0 \le k_i \le 1.$
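The forward direction can be sanity-checked numerically. Below is a small sketch (the matrix is an assumption for this example): a real symmetric $S$ with eigenvalues $0.25$ and $0.75$, both in $[0,1]$, so $0 \le \langle Sv, v\rangle \le \langle v, v\rangle$ should hold for every $v$.

```python
# Illustrative check of 0 <= <Sv, v> <= <v, v> when all eigenvalues of the
# symmetric matrix S lie in [0, 1].
import random

# Symmetric matrix with eigenvalues 0.25 and 0.75 (eigenvectors (1, -1), (1, 1)).
S = [[0.5, 0.25],
     [0.25, 0.5]]

def mat_vec(M, v):
    # Matrix-vector product for a 2x2 matrix.
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def dot(u, v):
    # Standard real inner product.
    return sum(ui * vi for ui, vi in zip(u, v))

random.seed(0)
for _ in range(1000):
    v = [random.uniform(-10, 10), random.uniform(-10, 10)]
    quad = dot(mat_vec(S, v), v)   # <Sv, v>
    norm2 = dot(v, v)              # <v, v>
    assert -1e-9 <= quad <= norm2 + 1e-9

print("0 <= <Sv,v> <= <v,v> held for all samples")
```

This only demonstrates the inequality on random samples, of course; the argument above is what proves it for all $v$.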
EDIT: Oops, it's not true. In dimension $2$, consider the indefinite inner product
$$ \langle u, v \rangle = u_1 v_1 - u_2 v_2$$
The matrix $$A = \pmatrix{1 & -1\cr 1 & -1\cr}$$ is "self-adjoint" with respect to this, i.e.
$$ \langle u, A v \rangle = \langle A u, v \rangle = (u_1 - u_2)(v_1 - v_2)$$
but it is not diagonalizable: its eigenvalue $0$ has algebraic multiplicity $2$ but
geometric multiplicity $1$, its only eigenvectors being scalar multiples of $\pmatrix{1\cr 1\cr}$.
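The counterexample can be verified directly. The sketch below checks that $A$ is self-adjoint for the indefinite form $\langle u, v\rangle = u_1 v_1 - u_2 v_2$, and that $A^2 = 0$ with $A \neq 0$, so $A$ is a nonzero nilpotent matrix and therefore not diagonalizable.

```python
# Numerical check of the indefinite-inner-product counterexample.
import random

A = [[1, -1],
     [1, -1]]

def mat_vec(M, v):
    # Matrix-vector product for a 2x2 matrix.
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def indef(u, v):
    # The indefinite form <u, v> = u1*v1 - u2*v2.
    return u[0] * v[0] - u[1] * v[1]

random.seed(1)
for _ in range(1000):
    u = [random.uniform(-5, 5), random.uniform(-5, 5)]
    v = [random.uniform(-5, 5), random.uniform(-5, 5)]
    # "Self-adjointness" with respect to the indefinite form.
    assert abs(indef(u, mat_vec(A, v)) - indef(mat_vec(A, u), v)) < 1e-9

# A^2 = 0 while A != 0: a nonzero nilpotent matrix is never diagonalizable,
# since its only possible eigenvalue is 0.
A2 = [[sum(A[i][k] * A[k][j] for k in range(2)) for j in range(2)] for i in range(2)]
print(A2)  # -> [[0, 0], [0, 0]]
```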
Best Answer
Note: over a finite-(nonzero)-dimensional complex vector space, every linear transformation has at least one eigenvalue, since the characteristic polynomial splits. It follows by induction that it is upper triangularizable. But not every linear transformation, even over $\mathbb{C}$, is diagonalizable. For example, consider the Jordan block $$ T=\pmatrix{0&1\\0&0}. $$
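A quick illustrative check of why this Jordan block fails to be diagonalizable: $T^2 = 0$ with $T \neq 0$, and a diagonalizable matrix whose only eigenvalue is $0$ would have to be the zero matrix.

```python
# The Jordan block T is nilpotent but nonzero, hence not diagonalizable.
T = [[0, 1],
     [0, 0]]

# Compute T^2 by hand with a comprehension.
T2 = [[sum(T[i][k] * T[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

# T^2 = 0 while T != 0; if T were diagonalizable it would be similar to the
# zero matrix (its only eigenvalue is 0), hence equal to 0 -- a contradiction.
assert T2 == [[0, 0], [0, 0]]
assert T != [[0, 0], [0, 0]]

print("T^2 = 0 but T != 0: T is not diagonalizable")
```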
Key point: for a self-adjoint transformation, every eigenvalue is actually real. Indeed, if $T^*=T$ and if $Tx=\lambda x$ with $x\neq 0$, then $$ \lambda (x,x)=(x,\lambda x)=(x,Tx)=(Tx,x)=(\lambda x,x)=\overline{\lambda}(x,x)\quad\Rightarrow\quad \lambda=\overline{\lambda} $$ where I took an inner product antilinear in the first variable.
Conclusion: it suffices to prove the result for matrices, by taking the matrix $A$ of $T$ in an orthonormal basis. Since the characteristic polynomial of $A$ splits over $\mathbb{C}$ and since every root is real, it splits over $\mathbb{R}$. And the characteristic polynomial of $A$ is the same whether you regard $A$ as an element of $M_n(\mathbb{R})$ or as an element of $M_n(\mathbb{C})$. So the characteristic polynomial of $T$ splits over $\mathbb{R}$. In general, for a linear transformation $T$ on a finite-dimensional $K$-vector space, a scalar $\lambda \in K$ is a root of the characteristic polynomial in $K[X]$ if and only if it is an eigenvalue of $T$: both conditions are equivalent to $T-\lambda \mbox{Id}$ not being invertible, via the determinant on the root side and via the rank-nullity theorem on the eigenvalue side.
Another way to reach the conclusion: again take $A$ to be the matrix of $T$ in an orthonormal basis. Then $T$ is self-adjoint if and only if $A^*=A$, i.e. $A$ is Hermitian (equal to its conjugate transpose). Over $\mathbb{R}$, this is just $A^T=A$ (i.e. $A$ is symmetric, equal to its transpose). But we can view $A$, even if it is in $M_n(\mathbb{R})$, as an element of $M_n(\mathbb{C})$. So it has at least one eigenpair $(\lambda,x)$ in $\mathbb{C}\times \mathbb{C}^n$. But as we have just shown, $\lambda$ must be real. Therefore, writing $x=y+iz$ where $y$ and $z$ are the real and imaginary parts of $x\in\mathbb{C}^n$ in $\mathbb{R}^n$, we get $$ Ay+iAz=A(y+iz)=Ax=\lambda x=\lambda (y+iz)=\lambda y+i\lambda z\quad \Rightarrow \quad Ay=\lambda y,\quad Az=\lambda z. $$ Since $x\neq 0$, one of $y,z$ must be nonzero, giving a real eigenpair for $A$, and hence for $T$, going back from the matrix to the operator.
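The real/imaginary-part trick can be illustrated concretely. In the sketch below (matrix and eigenpair chosen for this example), a real symmetric $A$ has a genuinely complex eigenvector $x = y + iz$ for a real eigenvalue, and both $y$ and $z$ turn out to be real eigenvectors for the same eigenvalue.

```python
# A real symmetric matrix with a complex eigenvector x = y + i z:
# the real and imaginary parts are each real eigenvectors.
A = [[2, 1],
     [1, 2]]
lam = 3.0                      # a real eigenvalue of A
x = [1 + 2j, 1 + 2j]           # a complex eigenvector for lam

def mat_vec(M, v):
    # Matrix-vector product for a 2x2 matrix.
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

# Check A x = lam x over the complex numbers.
assert all(abs(a - lam * b) < 1e-12 for a, b in zip(mat_vec(A, x), x))

# Split x into real and imaginary parts: both satisfy A w = lam w over R.
y = [c.real for c in x]
z = [c.imag for c in x]
for w in (y, z):
    assert all(abs(a - lam * b) < 1e-12 for a, b in zip(mat_vec(A, w), w))

print(y, z)  # -> [1.0, 1.0] [2.0, 2.0]
```

Since $x \neq 0$, at least one of the two parts is nonzero, which is exactly how the argument above produces a real eigenpair.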